Navigating the AI risk frontier: Liability, copyright & regulation
Insight Article | 18 July 2025
UK & Europe | Tech & AI evolution | Technology, Outsourcing & Data
Artificial Intelligence ("AI”) presents tremendous opportunities for organizations and professionals to digest, process and compute large amounts of information.
However, AI tools are imperfect, and their unchecked and undisclosed use creates risks of professional liability and copyright infringement. What's more, the use of AI in cyber-attacks may lead to greater frequency and severity of claims, and a heightened risk of exposure of private information. As a result, AI regulation is at the forefront of legislative agendas in many jurisdictions, each of which will need to grapple with how best to control its use and growth.
Professional Liability
Professionals face exposure to liability when using AI tools, particularly when they are not permitted to do so or when they have not addressed the use and risks of these tools with their clients and others who may be adversely affected by AI-generated errors.
Legal professionals, who appear before the courts and are therefore subject to regular external oversight and to published warnings and sanctions in judgments, are perhaps the most visible example of this exposure to professional liability. Despite warnings in recent years from various regulators of legal professionals about the use of AI in producing legal work, legal professionals around the world have been caught presenting to the courts legal authorities or arguments hallucinated by generative AI, generally in the form of non-existent authorities or quotations, or significant and obvious mischaracterizations of existing authorities. There are already several dozen reported decisions around the world in which legal professionals have been warned and/or sanctioned in such contexts, from being referred to disciplinary bodies, to having to compensate other parties or lawyers, to being fined [1].
In the above cases, the quantum of damages is often limited because the AI-generated error is caught early and has limited negative consequences for others. However, there are situations in which the exposure could be far greater, e.g. if AI-generated errors are not caught before a professional's work or advice is used in large-scale projects, widely disseminated, or relied upon by a court in a published decision.
The decisions involving legal professionals suggest that professionals of all types require education about the risks of unchecked and undisclosed use of AI.
Copyright Infringement
We are also now seeing claims arising from the unauthorized use of copyrighted material to train AI platforms.
For example, in February 2025, Thomson Reuters (owner of the legal research platform Westlaw) sued Ross Intelligence, a new competitor, for copyright infringement, alleging that, after Thomson Reuters denied Ross's request to license its content, Ross trained its AI using LegalEase Bulk Memos, which were built from Westlaw's headnotes.
In granting Thomson Reuters summary judgment, the United States District Court for the District of Delaware found that Ross infringed 2,243 of Westlaw's headnotes. The district court further found that Ross's defenses of innocent infringement, copyright misuse, merger, scènes à faire, and fair use all failed. In May 2025, the district court stayed the case and certified an interlocutory appeal to the Third Circuit Court of Appeals to answer the questions of: (1) whether the Westlaw headnotes and Key Number System are original as a matter of law, and (2) whether Ross's alleged use of the Westlaw headnotes was fair use.
More recently, on June 4, 2025, Reddit, Inc. sued Anthropic, PBC in California Superior Court for breach of contract, unjust enrichment, trespass to chattels, tortious interference, and unfair competition because Anthropic allegedly used, without authorization, Reddit’s content to train its AI technology.
As companies increasingly release their own AI technology, and use content to train it, insurers are likely to see an increase in claims related to the use of copyrighted material in AI training.
Cyber Attacks: Frequency and Severity
The recent proliferation of AI in cyber-attacks, including through deep fake technology, voice clones, and phishing attacks, raises concerns about increased frequency and severity of claims.
For instance, Threat Actors use Generative AI models to create phishing emails which appear more convincing than those written without the use of AI. Threat Actors can mimic the tone and language of a target organization to draw victims into social engineering attacks, which continue to be the most common method of cyber-attack. Typically, victims can detect such attacks by being alert to poor grammar or an awkward turn of phrase in correspondence. However, with the use of AI, Threat Actors can clean up messages before sending them through. What's more, AI can automate the process of creating and sending phishing emails, allowing Threat Actors to launch large-scale campaigns with minimal effort.
Deep fake technology has also created opportunities for large-scale attacks. In Hong Kong, a deep fake of a company's Chief Financial Officer was used to deceive a finance worker at a multinational firm into sending USD 25 million to a Threat Actor. The scheme involved the worker attending a video call with people he believed to be the CFO and other staff, all of whom were, in fact, deep fake clones.
AI is not infallible. In the deep fake example, there are certain "tells" when the technology is being used (for instance, there may be mismatches between speech and mouth movement). However, as AI technology improves, organizations will need to remain alert to the nefarious use of such programs.
Regulation
The European Union's AI Act ("AI Act") is considered the world's first comprehensive legal framework on AI, aiming to regulate the development, deployment, and use of AI systems in the EU. Specifically, the AI Act uses a risk-based classification system for AI systems, with different levels of compliance requirements depending on the potential harm they pose, categorized as: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk. Another main objective of the AI Act is to address the challenges of combating algorithmic bias and ensuring robust protection against discrimination in high-risk AI systems, such as those used in hiring, credit scoring, or healthcare.

In contrast, there is currently no comprehensive federal legislation or regulation in the United States that governs the development of AI or specifically prohibits or restricts its use. Many states, though, including California, Colorado and Maryland, have introduced or enacted AI-related legislation concerning areas like algorithmic discrimination, deepfakes, and transparency requirements. It is crucial to note, however, that the regulatory landscape in the U.S. is constantly evolving, with ongoing developments and new guidance emerging, including those set forth in President Trump's proposed H.R. 1, the One Big Beautiful Bill Act.