Harber v HMRC (First-tier Tribunal) – 4 December 2023


In this appeal against an HMRC penalty for “failure to notify” a liability to capital gains tax, the Tribunal found that the authorities provided in support of the appellant’s case were not genuine judgments but had instead been generated by an artificial intelligence (“AI”) system such as ChatGPT.

Our summary focuses on the issue of providing fictitious judgments and, indirectly, the use of AI, rather than on the substantive issue in the appeal.


Mrs H disposed of a property and failed to notify her liability to capital gains tax. HMRC issued her with a “failure to notify” penalty of £3,265.11 which she appealed on the basis that she had a reasonable excuse (because of her mental health condition and/or because it was reasonable for her to be ignorant of the law).

In her response, she provided the Tribunal with summaries of nine First-tier Tribunal (“FTT”) decisions in which the appellant had successfully shown that a reasonable excuse existed. Mrs H told the Tribunal that the cases had been provided to her by “a friend in a solicitor’s office” whom she had asked to assist with her appeal. However, the Tribunal held that none of those authorities was genuine; they had instead been generated by AI. While the Tribunal accepted that Mrs H had been unaware that the AI cases were not genuine and that she did not know how to check their validity, it found that she did not have a reasonable excuse, dismissed her appeal and upheld the penalty.

The judge took this opportunity to comment on the serious and important issue of providing authorities which are not genuine.


In finding that the cases were not genuine FTT judgments but had instead been generated by AI, the Tribunal noted the following points:

(1) none of the cases appeared on the FTT website or on other legal websites

(2) Mrs H accepted that it was “possible” that the cases had been generated by an AI system, and she had no alternative explanation for the fact that no copy of the cases could be located on any publicly available database of FTT judgments

(3) in its Risk Outlook report, the Solicitors Regulation Authority (“SRA”) recently said this about results obtained from AI systems:

“All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.”

(4) the cases were “plausible but incorrect” because each case had similarities to real cases (eg same surname, similar facts or similar wording).

ChatGPT’s output has been caught out in the same way before. The Tribunal noted the US case of Mata v Avianca, in which two lawyers sought to rely on fake cases generated by ChatGPT. Like Mrs H, they placed reliance on summaries of court decisions which had “some traits that [were] superficially consistent with actual judicial decisions”.

The judge noted the harms of citing invented judgments: it causes the Tribunal and HMRC to “waste time and public money”, reduces the resources available to progress the cases of other court users, and promotes cynicism about judicial precedents. That last harm matters because the use of precedent is “a cornerstone of our legal system” and “an indispensable foundation upon which to decide what is the law and its application to individual cases”, as Lord Bingham said in Kay v LB of Lambeth. Although FTT judgments are not binding on other Tribunals, they nevertheless “constitute persuasive authorities which would be expected to be followed” by later Tribunals considering similar fact patterns: see Ardmore Construction Limited v HMRC.


This case exemplifies the danger of relying on AI to substantiate a legal position and highlights the importance of always checking the authenticity of case law authorities on an appropriate legal website, such as the FTT decisions database or BAILII. The SRA’s Risk Outlook report notes that the use of AI is rising rapidly (with three quarters of the largest solicitors’ firms using AI) and sets out ways that solicitors’ firms can manage the risks.