Derivative works are the poison pill of generative AI

Meta’s recent Llama 2 launch generated excitement around open source Large Language Models (LLMs), with Llama 2 announced as Big Tech’s first open source LLM with a commercial license.

In all the excitement, it’s easy to forget the very real cloud of uncertainty surrounding legal issues like IP (intellectual property) ownership and copyright in the generative AI space. People generally operate under the assumption that regulatory risk is something only the companies creating LLMs need to worry about.

That is a dangerous assumption once derivatives are taken into account: they are the poison pill of generative AI.

Understanding the poison pill risks of generative AI also gives enterprise technology leaders the tools to manage them.

“Derivative works” are given specific legal treatment under copyright law, but there is little precedent in statutes or regulations for addressing data derivatives, which, thanks to open source LLMs, are about to become much more prevalent.

When a software program produces output data based on input data, which of the output data is a derivative of the input data? All of it? Some of it? None of it?

An upstream problem, like a poison pill, spreads down the derivative chain, widening the scope of any claim as real legal challenges to IP in LLMs draw near.

Uncertainty about the legal treatment of data derivatives is the status quo in software.

Why do LLMs change the game? It’s a perfect storm of three forces:

  • Ubiquity. Until the advent of LLMs, no piece of software could produce usable, variable output in endless forms. LLMs produce not only text and images but also code, audio, video and pure data. Within a few years, long before the case law on IP ownership and copyright around LLMs is settled, LLM use will become ubiquitous, increasing the exposure of LLM users if LLM vendors are at risk. This applies not only to copyright-related risks, but also to risks from other possible harms caused by hallucinations, bias and so on.
  • Incentives. Copyright holders have an incentive to argue for the broadest possible interpretation of LLM derivatives, since it expands the scope for which they can claim damages. Conversely, major platform companies have the same incentive when imposing licensing restrictions in their all-out war with rival platforms. The Llama 2 license is a case in point: Section 1.b.v prohibits using Llama to “improve” non-Llama LLMs. Vague definitions benefit rights holders and those with the largest legal war chests.