The English High Court has delivered its anticipated judgment in the Getty Images v Stability AI copyright dispute, with the outcome leaving open questions about the interface between artificial intelligence technology and intellectual property rights.

The case centred on allegations from Getty Images that Stability AI infringed its copyright and trade mark rights by using Getty’s images to train its Stable Diffusion AI model and by generating images bearing Getty’s watermarks. However, the Court did not provide the broad legal direction many observers had expected on the legitimacy of using copyrighted material to train artificial intelligence models.

Key claims dropped

During proceedings, Getty Images abandoned a central element of its case – the ‘Training and Development Claim’ – after acknowledging that none of the Stable Diffusion model’s training had taken place within the UK’s jurisdiction. The consequence, according to Iain Connor, Intellectual Property Partner at Michelmores, was that the court did not rule in general terms on whether an AI model’s use of protected input materials is lawful or whether the outputs of AI models can infringe copyright.

Connor said that, as a result, the judgment “leaves the UK without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials.” He also noted that Getty’s more technical database right claim evaporated with the primary copyright issue, and that a secondary infringement argument failed because Stable Diffusion did not store or reproduce protected works.

“The question of whether an AI system is inherently unlawful if trained on third party copyright materials…failed,” Connor said. He contrasted the outcome with a recent United States case involving Anthropic, which settled for US$1.5 billion after admitting to retaining unauthorised copies of authors’ works following training.

Trade mark ruling

Getty did succeed in one aspect: the High Court held that Stability AI had infringed Getty Images’ and iStock’s trade marks by producing images that contained their watermarks. However, as Connor commented, “this will provide Getty Images with little solace.” The more substantial question concerning the legality of training on protected materials remains unanswered.

Nathan Smith, Intellectual Property Partner at Katten Muchin Rosenman, said the ruling provided only superficial clarity and left “significant uncertainty.” He noted, “The court’s findings on the more important questions regarding copyright infringement were constrained by jurisdictional limitations, offering little insight on whether training AI models on copyrighted works infringes intellectual property rights.”

Smith observed that the judgment dismissed secondary copyright infringement claims and held that the Stable Diffusion AI model was not an infringing copy under English law since it did not store or reproduce the original copyrighted works. “On the face of it, the judgment appears to present a win for the AI community, but arguably leaves the legal waters of copyright and AI training as murky as before,” Smith said.

Focus on liability

Wayne Cleghorn, Data Protection and Artificial Intelligence Partner at Excello Law, stated that the case addressed a pressing question for the field: who is liable when AI models contain or utilise protected intellectual property. Cleghorn said, “The answer is almost always the AI developers and AI model providers.” He pointed out that Getty Images argued it could not be expected to bring action against every AI model developer to enforce its rights, advocating instead for new transparency rules from the government.

“Getty Images is calling on governments, including the UK Government, to put in place new AI transparency rules, to avoid costly AI and IP litigation and enable IP rights holders to protect their assets,” Cleghorn stated.

He also suggested the outcome may prompt both AI developers and creative industry stakeholders to favour commercial agreements over continued expensive litigation.

Industry reaction

James Clark, Data Protection, AI and Digital Regulation Partner at Spencer West, commented that the Court’s ruling affirmed that training an AI model on copyright works does not make the model itself an infringing copy, and that the model does not directly reproduce those works. Clark noted that this finding would likely concern creative industry groups seeking stronger protection, while encouraging AI developers.

He explained, “The judgment usefully highlights the problem that the creative industry has in bringing a successful copyright infringement claim in relation to the training of large language models. During the training process, the model is not making a copy of the work used to train it, and it does not reproduce that work when prompted for an output by its user.”

The judgment refers to expert evidence confirming that diffusion models ‘learn the statistics of patterns’ found in input data rather than storing the data itself, a process likened to human learning. Creative groups had hoped for a more decisive precedent and were disappointed by the result.

Following the ruling, Getty Images’ share price rose after the company announced an AI licence deal, a signal some observers interpreted as a shift towards negotiated commercial solutions in the absence of clear legal guidance.

The legal community and those involved in AI development continue to await clarification from the courts or legislatures on the broader legal status of using copyright material for AI training purposes.