The only exception to this is the UMG v. Anthropic case, because at least previously, earlier versions of Anthropic's models would reproduce the lyrics to songs in the output. That's a problem. The current status of that case is that Anthropic has put safeguards in place to try to prevent that from happening, and the parties have essentially agreed that, pending the resolution of the case, those safeguards are sufficient, so the plaintiffs are no longer seeking a preliminary injunction.
At the end of the day, the hardest question for AI companies is not "is it legal to engage in training?" It's "what do you do when your AI generates output that is too similar to a particular work?"
Do you expect the majority of these cases to go to trial, or do you see settlements on the horizon?
There may well be some settlements. Where I hope to see settlements is with the major players who either have large swaths of content or content that is particularly valuable. The New York Times could end up with a deal, a licensing agreement under which OpenAI pays money to use New York Times content.
There's enough money at stake that we're probably going to get at least some judgments that set the parameters. As for the class-action plaintiffs, my sense is that they have stars in their eyes. There are a lot of class actions, and my guess is that the defendants will resist those and hope to win on summary judgment. It is not obvious that they will go to trial. The Supreme Court's decision in Google v. Oracle nudged fair use law quite strongly toward being resolved on summary judgment rather than before a jury. I think the AI companies are going to try very hard to get these cases decided on summary judgment.
Why would it be better for them to win on summary judgment versus a jury verdict?
It's faster and cheaper than going to trial. AI companies are also worried that juries will not be sympathetic to them, that many people will simply think, "Oh, you made a copy of the work, that should be illegal," and not delve into the details of the fair use doctrine.
There have been many deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals seem to be more about search than about foundation models, or at least that's how it has been described to me. In your opinion, is licensing content to be used in AI search engines, where the answers are sourced through retrieval-augmented generation, or RAG, something that is legally required? Why do they do it that way?
If you use retrieval-augmented generation on specific targeted content, then your fair use argument becomes more challenging. It is much more likely that AI-generated search will produce text taken directly from a particular source in the output, and it is much less likely to be fair use. I mean, it could still be fair use, but the riskier part is that it is much more likely to be competing with the original source material. If, instead of directing people to a New York Times story, my AI answers their prompt by using RAG to pull the text directly from that New York Times story, that looks like a substitution that could harm the New York Times. The legal risk is greater for the AI company.
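To make the mechanics concrete, here is a minimal sketch of the kind of retrieval-augmented generation pipeline being described. Everything in it (the toy corpus and the retrieve and generate functions) is a hypothetical placeholder rather than any particular company's system; the point is simply that the retrieved source text is pasted into the prompt, so it can surface nearly verbatim in the answer.

```python
# Minimal, illustrative RAG sketch with hypothetical names (no real vendor API).
# Key point: retrieved source text is inserted into the prompt verbatim,
# so the generated answer can reproduce that text closely.

from dataclasses import dataclass


@dataclass
class Document:
    source: str
    text: str


# A toy "index" of source material (stand-in for licensed or scraped content).
CORPUS = [
    Document("example-news-site", "City council approves new transit plan after long debate."),
    Document("example-blog", "A sourdough recipe that takes three days to proof."),
]


def retrieve(query: str, corpus: list[Document], k: int = 1) -> list[Document]:
    """Score documents by naive word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def generate(prompt: str) -> str:
    """Stand-in for a language-model call; a real system would query an LLM here."""
    return f"[model answer conditioned on]\n{prompt}"


def rag_answer(query: str) -> str:
    # 1. Pull the most relevant source documents.
    docs = retrieve(query, CORPUS)
    # 2. Paste their text directly into the prompt as context.
    context = "\n".join(f"({d.source}) {d.text}" for d in docs)
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context above."
    # 3. Generate an answer grounded in (and often quoting) that context.
    return generate(prompt)


if __name__ == "__main__":
    print(rag_answer("What did the city council approve?"))
```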
What do you want people to know about generative AI copyright fights that they don’t already know, or might be misinformed about?
The thing I hear most often that is wrong as a technical matter is the idea that these are just plagiarism machines, that all they do is take my stuff and spit it back out in their responses. I've heard a lot of artists say that, and I've heard a lot of lay people say that, and it's not right as a technical matter. You can decide whether generative AI is good or bad. You can decide whether it is legal or illegal. But it really is something fundamentally new that we haven't experienced before. The fact that it needs to train on a lot of content to understand how sentences work, how arguments work, and to learn various facts about the world does not mean that it is just copying and pasting things or making a collage. It really does generate things that nobody could expect or predict, and it gives us a lot of new content. I think that is important and valuable.