You should read the ruling in more detail; the judge explains the reasoning behind why he found the way that he did. For example:

Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.
This isn’t “oligarch interests and demands,” this is affirming a right to learn and that copyright doesn’t allow its holder to prohibit people from analyzing the things that they read.
Yeah, but the issue is they didn’t buy a legal copy of the book. Once you own the book, you can read it as many times as you want. They didn’t legally own the books.
Right, and that’s the, “but faces trial over damages for millions of pirated works,” part that’s still up in the air.
I will admit this is not a simple case. That being said, if you’ve lived in the US (and are aware of local mores) but are not American, you will have a different perspective on the US judicial system.
How is right to learn even relevant here? An LLM by definition cannot learn.
Where did I say analyzing a text should be restricted?
I literally quoted a relevant part of the judge’s decision:
I am not a lawyer. I am talking about reality.
What does an LLM application (or training processes associated with an LLM application) have to do with the concept of learning? Where is the learning happening? Who is doing the learning?
Who is stopping the individuals at the LLM company from learning or analysing a given book?
From my experience living in the US, this is pretty standard American-style corruption. Lots of pomp and bombast and roleplay of sorts, but the outcome is no different from any other country that is in deep need of judicial and anti-corruption reform.
Well, I’m talking about the reality of the law. The judge equated training with learning and stated that there is nothing in copyright that can prohibit it. Go ahead and read the judge’s ruling, it’s on display at the article linked. His conclusions start on page 9.
But AFAIK they actually didn’t acquire the legal rights even to read the stuff they trained from. There were definitely cases of pirated books used to train models.
Yes, and that part of the case is going to trial. This was a preliminary judgment specifically about the training itself.
People, ML AIs are not human. They’re machines. Why do you want to give them human rights?
Except learning in this context is building a probability map that reinforces the exact text of the book. Given the right prompt, no new generative concepts come out, just the verbatim text of the book it was trained on.
So I suppose it depends on the model, and on whether it enforces generative answers and blocks verbatim recitation.
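The claim above can be illustrated with a deliberately tiny sketch: a bigram model trained on a single text puts probability mass only on word transitions it has seen, so greedy decoding can reproduce the training text verbatim. This is an illustrative toy, not how transformer LLMs are actually built, and the example text is an assumption:

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Build a 'probability map': counts of which word follows which."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, n=10):
    """Greedy decoding: always pick the most frequent next word."""
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

book = "it was the best of times it was the worst of times"
model = train_bigram(book)
print(generate(model, "it", n=5))  # → "it was the best of times"
```

A model this small has nowhere to put "new concepts," so the training text falls straight back out; a large LLM generalizes across vastly more text, which is exactly what the dispute over "compressed copies" is about.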
Again, you should read the ruling. The judge explicitly addresses this. The Authors claim that this is how LLMs work, and the judge says “okay, let’s assume that their claim is true”:

Fourth, each fully trained LLM itself retained “compressed” copies of the works it had trained upon, or so Authors contend and this order takes for granted.
Even on that basis he still finds that it’s not violating copyright to train an LLM.
And I don’t think the Authors’ claim would hold up if challenged, for that matter. Anthropic chose not to challenge it because it didn’t make a difference to their case, but in actuality an LLM doesn’t store the training data verbatim within itself. It’s physically impossible to compress text that much.
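A rough back-of-envelope calculation shows why verbatim storage is implausible. All the numbers here are illustrative assumptions (not Anthropic’s actual training figures), but the orders of magnitude are what matter:

```python
# Back-of-envelope check of the "compressed copies" claim.
# All numbers are illustrative assumptions, not actual training figures.

train_tokens = 10e12          # assume ~10 trillion training tokens
bytes_per_token = 4           # ~4 bytes of English text per token
corpus_bytes = train_tokens * bytes_per_token

params = 100e9                # assume a ~100-billion-parameter model
bytes_per_param = 2           # 16-bit weights
model_bytes = params * bytes_per_param

ratio = corpus_bytes / model_bytes
print(f"corpus ≈ {corpus_bytes/1e12:.0f} TB, model ≈ {model_bytes/1e9:.0f} GB")
print(f"implied 'compression' ratio ≈ {ratio:.0f}:1")
```

Under these assumptions the weights would need to losslessly compress the corpus about 200:1, while strong general-purpose text compressors manage something closer to 5–10:1. Whatever the model retains, it cannot be the full training set verbatim.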