How two recent news stories portray companies’ complicated relationship with AI’s evolving technology

It’s hard to predict what lies in artificial intelligence’s (AI’s) regulatory future for the simple reason that AI technology is constantly evolving. This means its issues, bias, copyright and lack of transparency, to name a few, will continue to shape the regulatory agenda for many years to come.

Two news stories highlight the complex relationship companies have with AI and the often circuitous path they will have to take to achieve some sort of control over this nearly unmanageable technology.

In December 2023, The New York Times sued OpenAI and Microsoft, accusing them of using millions of the newspaper’s articles without permission to help train chatbots that provide information to readers. The Times is the first major media organisation in the US to sue OpenAI and Microsoft over copyright issues related to its written works.

Generative AI technologies, which generate text, images and other media from short prompts, have led other groups – writers, computer programmers – to file copyright suits against AI companies. But these companies contend that they can legally use the content for free to train their technologies because it is publicly available and they do not reproduce the material in its entirety.

The Times has not specified an exact amount of money it is seeking in the suit. It does say that OpenAI and Microsoft should be held responsible for billions of dollars in damages for the unlawful copying and use of its work. The Times also wants the two companies to destroy any chatbot models and training data that use its copyrighted material.

The lawsuit could define the legal boundaries of generative AI technologies and have huge implications for the news industry.

In an interesting element of the case, The Times had tried to work out a resolution with Microsoft and OpenAI before filing suit, one that would have involved a commercial agreement and some form of guardrails around the AI systems.

At the same time, German publishing giant Axel Springer SE, which owns Politico and Business Insider, reached a licensing agreement with OpenAI under which OpenAI will pay Axel Springer to use its news content in OpenAI’s AI products. This collaboration is a new kind of publishing deal that will allow the ChatGPT creator to train its AI models on the news organisation’s reporting.

As part of the deal, when users ask ChatGPT a question, the chatbot will deliver summaries of relevant news stories from Axel Springer brands. Those summaries will include material from stories that would otherwise require subscriptions to read. Each summary will cite the Axel Springer publication as the source and provide a link to the full article it summarises.

The agreement, which involved millions of euros, points to an alternative path for publishers and AI companies: licensing rather than litigation. And, in a telling note, the deal is not exclusive. Axel Springer is free to make similar deals with other generative AI companies.

It’s still too early to predict which approach will serve as a template for media companies. Media analyst Ian Whittaker is quoted as saying that the deal is “a model for everyone else — flat fee for the historic data plus ongoing annual fee.”

But a flurry of lawsuits against AI systems – by the actress Sarah Silverman; authors including Jonathan Franzen and John Grisham; and Getty Images, the photography agency, among others – demonstrates that the boundaries of content creation and copyright law as they pertain to generative AI technologies are still being defined.