OpenAI is facing accusations of training its AI models on copyrighted material without permission: a new paper alleges the company used paywalled books from O’Reilly Media to train its GPT-4o model. The paper was published by the AI Disclosures Project, a nonprofit co-founded by Tim O’Reilly and Ilan Strauss.
AI models function as prediction engines, learning patterns from large volumes of data like books and movies in order to extrapolate from prompts. While some AI labs have turned to AI-generated data as real-world sources dry up, training on purely synthetic data carries risks, such as degrading a model’s performance.
The paper’s methodology, DE-COP, tests whether a model can distinguish human-authored texts from AI-generated paraphrases of those same texts; if a model reliably picks out the verbatim passage, it likely encountered that passage in its training data. The researchers probed GPT-4o, GPT-3.5 Turbo, and other OpenAI models using 13,962 paragraph excerpts from 34 O’Reilly books, estimating the likelihood that each excerpt was included in a model’s training dataset.
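The core of a DE-COP-style test can be illustrated with a toy sketch. All names below are hypothetical, and the "model" is a stand-in function rather than a real GPT query: each trial shuffles a verbatim excerpt in among paraphrases, and above-chance accuracy at picking the verbatim option hints the text was seen in training.

```python
import random

def build_quiz(verbatim: str, paraphrases: list, rng: random.Random):
    """Shuffle the verbatim excerpt in among its paraphrases; return the
    option list and the index of the verbatim answer."""
    options = [verbatim] + list(paraphrases)
    rng.shuffle(options)
    return options, options.index(verbatim)

def decop_accuracy(trials, pick_option, seed=0):
    """trials: iterable of (verbatim, [paraphrase, ...]) pairs.
    pick_option: callable that takes the option list and returns an index,
    standing in for prompting the model under test with the quiz."""
    rng = random.Random(seed)
    correct = 0
    total = 0
    for verbatim, paraphrases in trials:
        options, answer = build_quiz(verbatim, paraphrases, rng)
        if pick_option(options) == answer:
            correct += 1
        total += 1
    return correct / total

# Toy stand-in "model" that always guesses the longest option is verbatim.
# A real run would send each quiz as a prompt to GPT-4o, GPT-3.5 Turbo, etc.
trials = [
    ("the quick brown fox jumps over the lazy dog at midnight",
     ["a fast fox leaps a dog", "the fox jumps the dog"]),
    ("four score and seven years ago our fathers brought forth a nation",
     ["long ago our ancestors founded a country", "our fathers made a nation"]),
]
acc = decop_accuracy(trials, lambda opts: max(range(len(opts)), key=lambda i: len(opts[i])))
```

Comparing such an accuracy against the chance baseline (here 1/3 per trial) is what lets the method flag likely training-set membership.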
The results indicated that GPT-4o recognized significantly more paywalled O’Reilly book content than older models such as GPT-3.5 Turbo. According to the paper, GPT-4o likely recognizes many non-public O’Reilly books published before its training cutoff date, and O’Reilly has no licensing agreement with OpenAI.
The co-authors acknowledge the method isn’t foolproof: OpenAI might, for instance, have collected the excerpts from users pasting them into ChatGPT. Another caveat is that more recent OpenAI models, including GPT-4.5, weren’t evaluated.
OpenAI, which has advocated for looser copyright restrictions on AI training, has also pursued higher-quality training data, hiring journalists to help fine-tune its models’ outputs. The company holds licensing deals with some news publishers and offers opt-out mechanisms for copyright owners. OpenAI has not commented on the paper.