It has long been established that predictive models can be transformed into lossless compressors and vice versa. Incidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models. Since these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models. We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a conditional generative model.
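For anyone new to the prediction-compression equivalence the abstract leans on: an entropy coder spends about -log2 p(x) bits on a symbol to which the model assigns probability p(x), so a better predictor is directly a better compressor. Here is a minimal sketch of that link (mine, not the paper's), with a toy character-bigram model standing in for the LLM:

```python
import gzip
import math
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 50

# Fit a character-bigram model on the text. (Fitting on the text we then
# code is a cheat; a real scheme would transmit the model or build it
# adaptively during decoding. The point is only the code-length formula.)
alphabet = sorted(set(text))
pair_counts = Counter(zip(text, text[1:]))
prev_counts = Counter(text[:-1])

def prob(prev, nxt):
    # p(next char | current char), add-one smoothed so unseen pairs
    # still get nonzero probability
    return (pair_counts[(prev, nxt)] + 1) / (prev_counts[prev] + len(alphabet))

# An ideal entropy coder (e.g. an arithmetic coder) spends -log2 p bits
# per symbol, so the total code length is the model's summed surprise.
bits = math.log2(len(alphabet))  # first character: uniform prior
for prev, nxt in zip(text, text[1:]):
    bits -= math.log2(prob(prev, nxt))

print(f"raw:          {len(text)} bytes")
print(f"bigram model: {bits / 8:.0f} bytes (ideal code length)")
print(f"gzip:         {len(gzip.compress(text.encode()))} bytes")
```

The better the model's next-symbol probabilities, the smaller that sum, which is the sense in which Chinchilla's predictions beat PNG and FLAC in the paper.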
I wonder what a paper like this, especially given the title, does for the legal case regarding copyright and generative AI. Haven't had a chance to read the paper yet, so don't know if the findings are relevant to copyright.
You realize that this is already the case, right? As it stands now, AI-produced works are uncopyrightable. Copyright is reserved for human-produced works. The only exception is when AI is used in a non-major portion of the production, like a photo editor using AI to remove a person from a picture: the AI didn't produce the picture, it was just used as a tool to help the process along.
Additionally, suppose AI works could be copyrighted and, say, OpenAI made ChatGPT. There's still no use in a word-prediction engine or diffusion engine owning something, because it can't make decisions for itself, and that would be required to pass the copyright along to someone else, for example.
What the courts say and what is right are not necessarily the same. Working with an AI model, manipulating all the parameters of each component process, crafting prompts and data to shape its output, and then fine-tuning that output to achieve a desired result is analogous to, and indistinguishable from, working with any other creative tool. It is no different than manipulating a camera, using human judgement, framing, and composition to generate a picture.
The neural networks are a fixed medium. They just happen to be generated with an automated step in the design process, whereas traditional tools have a human designer directly engineering them. Even then, there is still a human designing and initializing the process. A human had to design the structure of the network, define its parameters, and decide what data would be used to form the network.
> A human had to design the structure of the network, define its parameters, and decide what data would be used to form the network.
In a majority of the cases this simply isn't true. Yeah, there are some people deep into the ML game, but most predictive engines aren't using any kind of additional fine-tuning or dataset from their users. And most of the diffusion models that are popular right now were trained on copyright-violating works.
LLMs are just prediction engines, again, trained on many works that were subject to copyright, and the companies didn't care. They're in the clear only if they can all prove their datasets contain no copyright violations, which will never happen.
Image and language predictors are just that...predictors. And morally, what's law now IS what is right. Typing some sentences into an image diffusion algorithm is no different than plugging an equation into a calculator. Math isn't copyrightable either.
The law already has stipulations for what constitutes an AI generated work, or merely an AI assisted creation. There are clear lines drawn in the sand that most people agree with morally.
> In a majority of the cases this simply isn't true.
The neural networks did not spring from the ether. And they are not naive neuron grids so simple as to be trivial. There are multiple layers serving different purposes, each with a designed function.
> Yeah, there are some people deep into the ML game, but most predictive engines aren't using any kind of additional fine-tuning or dataset from their users.
And there are relatively few people who design the image sensors for cameras compared to the number of people using a camera to take pictures. They're still designed as a tool by a person.
> trained on many works that were subject to copyright
Trained the same way you learn with the wetware neural network in your brain. And even if you're not convinced that these networks "learn" the same way we do, the resulting network weights are entirely transformative, which is perfectly allowed by copyright law. With 5 billion image/text pairs training the roughly 960 million parameters in the diffusion and text-encoding networks of Stable Diffusion, for example, that is about 0.2 parameters (or about 6 bits) per image in the resulting product. The image, as such, is almost entirely discarded.
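Those figures check out on the back of an envelope (the 5-billion-pair and 960-million-parameter numbers are the comment's; 32 bits per parameter is my assumption):

```python
# Back-of-the-envelope check (commenter's figures; fp32 per parameter assumed)
training_pairs = 5_000_000_000  # image/text pairs, e.g. LAION-5B
parameters = 960_000_000        # diffusion + text-encoder weights

params_per_image = parameters / training_pairs  # ~0.192
bits_per_image = params_per_image * 32          # ~6.1 bits at fp32

print(f"{params_per_image:.3f} parameters per training image")
print(f"{bits_per_image:.1f} bits per training image")
```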
> And morally, what's law now IS what is right.
I fundamentally disagree with you and I do not think we'll come to an agreement on this. There is a lot I find morally and philosophically wrong with our copyright law, and the current findings of the courts regarding AI works is just a fraction of that.
> Typing some sentences into an image diffusion algorithm is no different than plugging an equation into a calculator.
A camera's image sensor is just as deterministic as the neural network weights. The human work comes from the judgement used when conjuring a prompt to feed into the tool, just like a photographer decides what light reflecting source to point his camera at.
> There are clear lines drawn in the sand that most people agree with morally.
I'm not convinced the lines are either clear or agreed upon by the majority. This is a really complex set of circumstances and there's a reason we're still battling it out in the courts and in online forum comment sections. ;)
> And there are relatively few people who design the image sensors for cameras compared to the number of people using a camera to take pictures. They're still designed as a tool by a person.
I'm not the most familiar with copyright law, but IIRC you're certainly able to violate copyright while taking a photo. If you take a photo of a copyrighted work (e.g. parts of a book or something) without artistic intent, I don't believe that's considered transformative.
I suspect the courts will end up having to deal with many of these issues on a case-by-case basis, just like they already do with fair use.