AI hallucinations in legal filings risk "loss of public confidence", immigration tribunal warns
Judges issue stern AI warning after immigration lawyer filed an appeal containing false information.
A senior UK judicial body has warned lawyers against uploading client documents into large language models after identifying AI-generated hallucinations in a legal filing by an immigration solicitor.
The Upper Tribunal, a superior court of record that hears appeals and reviews legal decisions, issued its ruling after an immigration lawyer admitted to including false information while drafting the grounds of appeal.
Last October, the lawyer, whom we are not naming, voluntarily reported that he had unknowingly cited a non-existent case in an appeal.
During the hearing, the solicitor said he was “disappointed in himself” and explained that personal difficulties had placed him under “a lot of distress and anxiety”.
He also said he could not “dismiss the fact” that the false material was “an AI creation”.
In its judgment, the Upper Tribunal wrote: "Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege."
Such a breach, the judges said, could warrant referral to a regulatory body and the Information Commissioner’s Office.
AI hallucination and "a fool's errand"
The Tribunal warned that chasing down fabricated citations wastes judicial time and risks damaging public confidence in its work.
"The citation of cases which do not exist sends that judge on a fool’s errand," it added. "The time spent on such an errand is at the expense of other judicial business and is not in the interests of justice.
"Further, time spent on applications containing false legal information also risks a loss of public confidence in the processes of the Upper Tribunal."
The Tribunal made clear that responsibility for hallucinated information rests with the lawyer, not the tool: mistakes made by machines will be treated as mistakes made by humans.
Tim Ward, CEO and co-founder at Redflags, told Machine: "This judgment shows the real risk isn’t ‘AI’ in the abstract: it’s untrained staff using consumer tools outside an organisation’s regulatory and security perimeter.
"In legal and other regulated sectors, AI should be treated like any high-risk technology, supported by clear policies, approved tools, and targeted security awareness so employees understand what they can use, where, and with which data.
"Organisations should also focus on reinforcing secure behaviours in the moment, helping staff recognise and avoid risky AI interactions before they create compliance or data-protection issues."
Oliver Simonnet, Lead Cybersecurity Researcher at CultureAI, also said: "This incident highlights the consequences of the ungoverned use of AI, particularly in highly regulated industries such as the legal sector, where professionals have strict duties to keep client information confidential. Public AI tools can introduce significant data security and privacy risks if sensitive information is shared without appropriate safeguards.
"With over 90% of organisations expecting AI adoption to grow in the next 12 months, and 41% anticipating significant growth, the urgency to implement responsible controls has never been greater. It is imperative that law firms establish clear AI governance frameworks, including formal policies, approved tools, and employee training.
"Organisations must ensure staff understand what data can and cannot be entered into AI systems, while implementing technical controls to prevent accidental exposure. Responsible AI adoption requires balancing innovation with strong data protection and human oversight to maintain client trust and regulatory compliance. Crucially, banning AI usage is not the answer."