zlacker

[parent] [thread] 46 comments
1. andy99+(OP)[view] [source] 2023-11-05 18:17:33
Copyright holders make all kinds of arguments for why they should get money for incidental exposure to their work. This is all about greed and jealousy. If someone uses AI to make infringing content, existing laws already cover that. The fact that an ML model could be used to generate infringing content, and has exposure to or "knowledge" of some copyrighted material, is immaterial. People just see someone else making money and want to try to get a piece of it.
replies(5): >>ethanb+8 >>exabri+01 >>rvz+b2 >>Animat+Q4 >>gumbal+fv
2. ethanb+8[view] [source] 2023-11-05 18:18:36
>>andy99+(OP)
> People just see someone else making money in a way that is completely dependent upon their own prior work and want to try and get a piece of it
replies(3): >>qt3141+o8 >>Ukv+r9 >>paulmd+Yn
3. exabri+01[view] [source] 2023-11-05 18:23:24
>>andy99+(OP)
Let's try this:

I'd like you to give away 100% of your salary, OK?

Are you greedy if you say no?

replies(2): >>Tadpol+x1 >>bdcrav+V1
◧◩
4. Tadpol+x1[view] [source] [discussion] 2023-11-05 18:25:13
>>exabri+01
This is a blatant non sequitur. There are many approaches to actually having a good-faith discussion on the societal/economic/moral/humanitarian effects of large-scale AI taking over entire workforces. Being coy and asking loaded questions does nothing to convince anyone of them.
replies(2): >>coding+25 >>kmeist+rb
◧◩
5. bdcrav+V1[view] [source] [discussion] 2023-11-05 18:27:09
>>exabri+01
If you use a snippet from Stack Overflow that came from a book, is the original publisher entitled to some of your salary?
replies(1): >>exabri+u2
6. rvz+b2[view] [source] 2023-11-05 18:28:56
>>andy99+(OP)
All I see is AI companies poorly justifying their grift: they know they don't want to pay for the content they are commercializing without permission, so they pull out fair-use excuses.

It is no wonder that OpenAI paid Shutterstock for training on their data, that Getty is suing Stability AI for training on its watermarked images and using them commercially without permission, and that actors and actresses are filing lawsuits against commercial voice cloners (which costs them close to nothing), as those companies either take down the cloned-voice offering or shut down.

These weak arguments from the AI folks sound like excuses justifying a newly found grift.

replies(3): >>artnin+A3 >>Tadpol+c4 >>Ukv+W5
◧◩◪
7. exabri+u2[view] [source] [discussion] 2023-11-05 18:30:06
>>bdcrav+V1
This is what Silicon Valley doesn't understand: The concept of Consent.

If someone posts something to Stack Overflow, they're intending to help both the original asker and anyone who comes along later with the same coding problem, and that's the extent of it.

An artist making a painting or song has not consented to training algorithms on their copyrighted work. In fact, neither has the StackOverflow person.

Boggles my mind this concept is so absent from the minds of SV folk.

replies(4): >>depere+w6 >>Turing+m7 >>EMIREL+Y7 >>gagany+n01
◧◩
8. artnin+A3[view] [source] [discussion] 2023-11-05 18:36:04
>>rvz+b2
AFAIK, Getty's case against Stability is strong because they were dumb enough not to remove the watermarks before training, so their model now recreates the watermark, which is an infringement. Also, let's not pretend OpenAI licensed their dataset from all the stock-image sites they scraped. The main reason they have a deal with Shutterstock is probably easy access, and also that Shutterstock partnered with OpenAI for AI tech to sell on their website.
◧◩
9. Tadpol+c4[view] [source] [discussion] 2023-11-05 18:39:01
>>rvz+b2
When you're viewing everyone with a different opinion than you as a grifter, corporate rat, or some other malicious entity, you've disabled the ability or desire for people to try to engage with you. You won't be convinced, and you're already being uncivil and fallacious.

AI outputs should be regulated, of course. Obviously, impersonation and copyright law already apply to AI systems. But a discussion on training inputs is entirely novel to us and our laws, and it's a very nuanced and important topic. And as AI advances, it becomes increasingly difficult because of the diminishing distinction between "organic" learning and "artificial" learning, and because stopping AI from learning from, say, research papers means we miss out on life-saving medication. Where do property rights conflict with human rights?

They're important conversations to have, but you've destroyed the opportunity to have them from the starting gun.

replies(3): >>sillys+t5 >>Turing+98 >>rvz+Mb
10. Animat+Q4[view] [source] 2023-11-05 18:42:58
>>andy99+(OP)
Yes, most of this is whining from the "copyright forever" crowd. If you put out something vaguely similar to something old, they complain. What they're really worried about is becoming obsolete, not being copied.

The case against "tribute bands" is much stronger than the case against large language models built with some copyrighted content. Those are a blatant attempt to capitalize on the specific works of specific named people.

replies(1): >>gumbal+sv
◧◩◪
11. coding+25[view] [source] [discussion] 2023-11-05 18:43:54
>>Tadpol+x1
The ability of AI to produce the content it does will potentially eliminate hundreds of thousands of jobs.

Now, the percentage of those jobs lost because some of the content was copyrighted may be small, but it does account for some percentage of that job loss. So it isn't actually a non sequitur, in my opinion.

replies(2): >>Turing+18 >>Tadpol+m9
◧◩◪
12. sillys+t5[view] [source] [discussion] 2023-11-05 18:46:04
>>Tadpol+c4
Thank you.
replies(1): >>rvz+Pb
◧◩
13. Ukv+W5[view] [source] [discussion] 2023-11-05 18:48:20
>>rvz+b2
You likely benefit from machine learning applications constantly without realizing. The spam filters for your email, the scanning for defects of the products you use and the rails they were delivered on, when you enter a search query or translate a page into English, weather modelling to give you accurate predictions and early warnings, etc.

To avoid IP law causing more damage than it already has with the evergreening of medical patents, I think it strictly has to be the generation of substantially similar media that counts as infringement, as the comment you're replying to suggests. If training itself were infringement, we'd end up with "this tumor detector was pretrained on a large number of web images before task-specific fine-tuning, so it's illegal because they didn't pay Getty beforehand."

◧◩◪◨
14. depere+w6[view] [source] [discussion] 2023-11-05 18:51:17
>>exabri+u2
For a bunch of rent-seekers who issue licenses to use their own prior work, they really struggle with the various licenses that other people's work can be issued under.
◧◩◪◨
15. Turing+m7[view] [source] [discussion] 2023-11-05 18:55:01
>>exabri+u2
> This is what Silicon Valley doesn't understand: The concept of Consent.

This is what you don't understand: the concept of fair use.

https://en.wikipedia.org/wiki/Fair_use

If the courts hold this type of thing to be fair use (which I'm about 90% sure they will), "consent" won't enter into it. At all.

replies(1): >>rvz+Vd
◧◩◪◨
16. EMIREL+Y7[view] [source] [discussion] 2023-11-05 18:58:07
>>exabri+u2
Their position (and mine too, even though I otherwise have lots of disagreements with most SV folk in other areas) is that for those ML purposes, no consent need be sought or granted. If the work is publicly accessible, it's usable for AI. This is legally supported by fair use (to be determined by the courts; keep an eye on the Andersen v. Stability lawsuit).
◧◩◪◨
17. Turing+18[view] [source] [discussion] 2023-11-05 18:58:10
>>coding+25
> The ability of AI to produce the content that it does actually will be reducing potentially hundreds of thousands of jobs.

You mean like tractors, electric motors, powered looms, machine tools, excavators, and such?

Yeah, and? In the limit, those things are why our population isn't 90% unfree agricultural laborers (serfs or slaves), 9+% soldiers to keep the serfs in line, and < 1% "nobles" and "priests", who get to consume all the goodies.

This same basic argument about "putting artists out of work" was made when photography was invented. It didn't work then, and it's not going to work now.

◧◩◪
18. Turing+98[view] [source] [discussion] 2023-11-05 18:59:04
>>Tadpol+c4
> AI outputs should be regulated, of course.

Why "of course"?

replies(2): >>daniel+Ha >>Tadpol+3c
◧◩
19. qt3141+o8[view] [source] [discussion] 2023-11-05 19:00:31
>>ethanb+8
It's actually impressive that Weird Al has made it as far as he has, now that I think about it.
replies(2): >>sensan+G8 >>kmeist+B9
◧◩◪
20. sensan+G8[view] [source] [discussion] 2023-11-05 19:02:51
>>qt3141+o8
Weird Al explicitly seeks out permission from the people whose songs he's parodying, despite not really having to (legally), and he compensates them in kind as well, so he's kinda the exact opposite of what the AI crowd wants to do.
◧◩◪◨
21. Tadpol+m9[view] [source] [discussion] 2023-11-05 19:05:41
>>coding+25
I don't disagree with you at all, your point is important to communicate and debate on! But the framing of the original comment was unproductive and only served to hurt the argument.

I, personally, think that AI is a tremendous opportunity that we should be investing in and pushing forward. And my existing dislike of property right laws does feed into my views on the training data discussion; prioritizing a revolution in productivity over preservation of jobs for the sake of maintaining the status quo. But I'm not stupid enough to think there will be no consequences for being unprepared for the future.

Rather unfortunately, I'm not quite clever enough to see what being prepared would actually look like either.

◧◩
22. Ukv+r9[view] [source] [discussion] 2023-11-05 19:06:20
>>ethanb+8
> in a way that is completely dependent upon their own prior work

Ultimately information has to come from somewhere. If something has no information about what a "car" is, it cannot paint a car more successfully than a random guess. When you draw a car or write an algorithm to do so, you'll be slightly affected by the existing car designs you've seen. It's not a limitation specific to AI - it's just more obscured for humans since there's no explicit searchable database of all the cars you've glanced at.

Whether it was affected by (and dependent on, in aggregate) prior work is not the standard for copyright infringement, and I'd claim it would implicate essentially all action as infringement. Instead, it should be judged by whether there's substantial similarity; and if there is, then by the factors of fair use.

◧◩◪
23. kmeist+B9[view] [source] [discussion] 2023-11-05 19:07:17
>>qt3141+o8
Weird Al gets permission and licenses for every song parody he makes. The legal definition of parody in fair use wouldn't exactly cover everything he does.

That being said, there are still occasional times where he gets screwed over by the licensing machine anyway — either because the label forgot to ask the artist (Amish Paradise) or because the artist forgot to ask the label (You're Pitiful).

◧◩◪◨
24. daniel+Ha[view] [source] [discussion] 2023-11-05 19:13:36
>>Turing+98
Because everything else is, why should AI output be any different?
◧◩◪
25. kmeist+rb[view] [source] [discussion] 2023-11-05 19:18:10
>>Tadpol+x1
It's important to keep in mind that AI doesn't take over entire workforces because it is better, or does jobs humans can't, but because it is cheaper. I've played with several AI art and text models and none of them I would consider to be better than a human. However, they are good enough - and more importantly, legally ownable[0] capital goods - such that corporations would rather have an AI serve you to make their own scale problems go away.

The hyperbole about being forced to work for free isn't entirely wrong, because tech companies love tricking people into doing free labor for them. They also aren't arguing for AI being a copyright-free zone. They're arguing for reallocation of ownership from authors to themselves, in the same way that record labels and publishers already did in decades prior.

[0] At least until the Luddite Solidarity Union Robot Uprising of 2063

◧◩◪
26. rvz+Mb[view] [source] [discussion] 2023-11-05 19:20:26
>>Tadpol+c4
> When you're viewing everyone with a different opinion than you as a grifter, corporate rat, or some other malicious entity, you've disabled the ability or desire for people to try to engage with you.

I think we have given this discussion plenty of time. The events and actions around training on copyrighted works (images, songs, deepfakes), the lawsuits, and the licensing deals are all converging on paying for the data; hence OpenAI and many others doing so due to the risk of such lawsuits.

> AI outputs should be regulated, of course. Obviously impersonation and copyright law already applies to AI systems. But a discussion on training inputs is entirely novel to man and our laws, and it's a very nuanced and important topic. And as AI advances, it becomes increasingly difficult because of the diminishing distinction between "organic" learning and "artificial" learning.

Copyright law does not care, nor is the underlying problem about using such a generative AI system for non-commercial purposes such as education or private use-cases. The line is drawn as soon as it is commercialized, and the fair-use excuses fall apart. Even as the AI advances, so do the traceability methods and the questions about the dataset being used. [0]

It costs musicians close to nothing to target and file lawsuits against commercial voice cloners. Training on copyrighted songs was not even an option for tools like DanceDiffusion [1] due to that same risk, which is why training on public-domain audio was the safer alternative, rather than running the risk of lawsuits and questions about the training set from tons of musicians.

[0] https://c2pa.org

[1] https://techcrunch.com/2023/09/13/stability-ai-gunning-for-a...

replies(1): >>Tadpol+tf
◧◩◪◨
27. rvz+Pb[view] [source] [discussion] 2023-11-05 19:20:39
>>sillys+t5
Thanks for what exactly?
replies(1): >>Tadpol+Cg
◧◩◪◨
28. Tadpol+3c[view] [source] [discussion] 2023-11-05 19:22:06
>>Turing+98
Because as a society we generally already agree that human outputs need be restricted as well. Being artificial in origin doesn't change the nature of trademark infringement or outright theft (generally speaking — some content that is illegal now because it victimizes others, being turned into victimless but gross content is an edge case).

To be clear, I would argue the regulations in question would fall under the human/legal entity responsible for the creation or dissemination. Having censored output on the AI itself seems significantly less productive.

◧◩◪◨⬒
29. rvz+Vd[view] [source] [discussion] 2023-11-05 19:32:49
>>Turing+m7
There is nothing "fair use" about this [0] or this [1]; both are done without permission and are commercial uses.

[0] https://www.theverge.com/2023/1/17/23558516/ai-art-copyright...

[1] https://variety.com/2023/digital/news/scarlett-johansson-leg...

replies(1): >>Turing+Ip8
◧◩◪◨
30. Tadpol+tf[view] [source] [discussion] 2023-11-05 19:42:20
>>rvz+Mb
> I think we have given it plenty of time for such a discussion

I don't see how this justifies needlessly divisive rhetoric.

No matter how long the disagreement lasts, you aren't my enemy because you have a different opinion on how we should handle this conundrum. I know you mean the best and are trying to help.

> Copyright law does not care

Copyright law works fine with AI outputs. As does trademark law. I don't see an AI making a fanart Simpsons drawing being any more novel a legal problem than the myriad of humans that do it on YouTube already. Or people who sell handmade Pokemon plushies on Etsy without Nintendo's permission.

But the question is about inputs and how the carve-outs for "transformative" and "educational use" can be interpreted — model training may very well be considered education or research. I think it's been made rather clear that nobody has a real answer to this; copyright law never particularly cared to address whether an artist is "stealing" when they borrow influence from other artists and use similar styles or themes (without consent) for their own career.

I don't envy the judges or legislators involved in making these future-defining decisions.

replies(1): >>rvz+Ax
◧◩◪◨⬒
31. Tadpol+Cg[view] [source] [discussion] 2023-11-05 19:48:09
>>rvz+Pb
It seems strange to interrogate why someone thanked someone else, doesn't it? Are you trying to start a fight with them over a simple acknowledgement that they agree?
replies(1): >>nickth+Zq
◧◩
32. paulmd+Yn[view] [source] [discussion] 2023-11-05 20:31:49
>>ethanb+8
> completely dependent

No, AI art would exist without Disney or HBO just like human art would.

It literally does come back to the idea that AI is doing more or less the same thing as an art student, learning styles and structures and concepts; in which case training an art student is also infringing, because it's completely dependent on the work of artists who came before.

And sure, if you ask a skilled 2d artist if they can draw something in the style of 80s anime, or specific artists, they can do it. There are some artists who specialize in this in fact! Can’t have retro anime porn commissions if it’s not riffing on retro anime images. Yes twitter, I see what you do with that account when you’re not complaining about AI.

The problem is that AI lowers the cost of doing this to zero, and thus lays bare the inherent contradictions of IP law and "intellectual ownership" in a society where everyone is diffusing and mashing up each other's ideas and works on a continuous basis. It is one of those "everyone does it" crimes that mostly survives because it's utterly unenforced at scale, apart from a few noxious litigants like Disney.

It is the old Luddite problem: the common idea that Luddites just hated technology is inaccurate. They were textile workers who were literally seeing their livelihoods displaced by automation mass-producing what they saw as inferior goods. https://en.wikipedia.org/wiki/Luddite

In general this is a problem that's set up by capitalism itself though. Ideas can’t and shouldn’t be owned, it is an absurd premise and you shouldn’t be surprised that you get absurd results. Making sure people can eat is not the job of capitalism, it’s the job of safety nets and governments. Ideas have no cost of replication and artificially creating one is distorting and destructive.

Would a neural net put a tax on neurons firing? No, that’s stupid and counterproductive.

Let people write their slash fiction in peace.

(HN probably has a good understanding of it, but in general people don't appreciate just how much the model is not just aping images it's seen but learning the style and relationships of pixels and objects, etc. To wit, the only thing NVIDIA saved from DLSS 1.0 was the model... and DLSS 2.0 has nothing to do with DLSS 1.0 in terms of technical approach. But the model encodes all the contextual understanding of how pixels are supposed to look in human images, even if it's not even doing the original transform anymore! And LLMs can indeed generalize reasonably accurately about things they haven't seen, as long as they know the precepts, etc., because they aren't "just guessing what word comes next"; it's the word that comes next given a conceptual understanding of the underlying ideas. And that makes it difficult to draw a line between a human and a large AI model: college students will "riff on the things they know" if you ask them to generalize about a topic they haven't studied, too.)

replies(1): >>mistri+iZ
◧◩◪◨⬒⬓
33. nickth+Zq[view] [source] [discussion] 2023-11-05 20:54:31
>>Tadpol+Cg
If every HN thread were full of thank-you posts, it would be unreadable. Upvotes exist. I prefer comments that add to the discussion, so I don't find the user's request for more information that baffling. Their comment was at least trying to further their understanding, which is more than sillysaurx did.
replies(2): >>Tadpol+FK >>sillys+nN
34. gumbal+fv[view] [source] 2023-11-05 21:25:50
>>andy99+(OP)
AI companies monetising people's IP need to pay up. End of story. Make a smarter AI next time that can "learn" with less content. As it stands, so-called AI is just a massive database that procedurally mixes content to generate what looks like "new" content.
replies(1): >>gagany+UZ
◧◩
35. gumbal+sv[view] [source] [discussion] 2023-11-05 21:26:56
>>Animat+Q4
What people are worried about is the age-old problem of thieves stealing property and monetising it, which is what a lot of AI companies do. Pay up, or create your own content and it's fair game.
replies(1): >>gagany+KZ
◧◩◪◨⬒
36. rvz+Ax[view] [source] [discussion] 2023-11-05 21:43:35
>>Tadpol+tf
> I don't see how this justifies needlessly divisive rhetoric.

What rhetoric? I am telling the hard truth of it.

> Copyright law works fine with AI outputs. As does trademark law. I don't see an AI making a fanart Simpsons drawing being any more novel a legal problem than the myriad of humans that do it on YouTube already. Or people who sell handmade Pokemon plushies on Etsy without Nintendo's permission.

How does running the risk of a lawsuit enforced by the copyright holder mean that it is OK to continue selling the works? Again, if parodies and fan art are in a non-commercial setting, then it isn't a problem. The problems start in the commercial setting, where in Nintendo's case the company is known to be extremely litigious even over similarity, AI or not. [0] [1] [2] Then the question becomes 'How long until I get caught if I commercialize this?' for both the model's inputs and outputs.

That question was answered in Getty's case: They didn't need to request Stability's training set, since it is publicly available. Nintendo and other companies can simply ask for the original training data of closed models if they wanted to.

> But the question is on inputs and how the carve-outs of "transformative" and "educational use" can be interpreted — model training may very well be considered education or research.

As with the above, this is why C2PA and traceability methods are in the works [3], to determine the source from which generative digital works were derived.

> I think it's been made rather clear that nobody has a real answer to this, copyright law didn't particularly desire to address if an artist is "stealing" when they borrow influence from other artists and use similar styles or themes (without consent) for their own career.

So that explains the scrambling of these AI companies to avoid addressing these issues or being transparent about their datasets and training data (except for Stability), since that is where this is going.

[0] https://www.vice.com/en/article/ae3bbp/the-pokmon-company-su...

[1] https://kotaku.com/pokemon-nintendo-china-tencent-netease-si...

[2] https://www.gameshub.com/news/news/pokemon-nft-game-pokeworl...

[3] https://variety.com/2023/digital/news/scarlett-johansson-leg...

[4] https://c2pa.org/

◧◩◪◨⬒⬓⬔
37. Tadpol+FK[view] [source] [discussion] 2023-11-05 23:16:27
>>nickth+Zq
Then vote and/or report? They aren't trying to further their understanding; that much is clear when their dialogue started by calling anyone who holds a different ideological position a grifter. They're trying to start a fight with a bystander who supported the "other".
◧◩◪◨⬒⬓⬔
38. sillys+nN[view] [source] [discussion] 2023-11-05 23:36:23
>>nickth+Zq
I’ve been on HN since the beginning. pg himself said that thank yous are fine. Empty but positive comments are not harmful.

I’ll leave my gratitude a mystery. They have my thanks, and my axe.

◧◩◪
39. mistri+iZ[view] [source] [discussion] 2023-11-06 01:17:35
>>paulmd+Yn
Well said. And the unique abilities of fast computers, cloud clusters, and fast networks to solve problems and serve clients in ways that humans cannot must carry sufficient weight in judgement. Specifically, some law-school parable about how person A can do this and group B does that very much misses the weighting of judgement needed. Many smart-enough people in positions of responsibility do not think through the implications of tech, instead falling back on what they were taught about comparable situations in law and the like, IMO.
replies(1): >>paulmd+Us1
◧◩◪
40. gagany+KZ[view] [source] [discussion] 2023-11-06 01:21:13
>>gumbal+sv
It's not theft, etc. That's a boring tangent to try to introduce.
replies(1): >>gumbal+7J1
◧◩
41. gagany+UZ[view] [source] [discussion] 2023-11-06 01:23:29
>>gumbal+fv
Calling it a massive database makes it clear you lack a basic understanding of how it works. Please go learn the basics before commenting.
replies(1): >>gumbal+2J1
◧◩◪◨
42. gagany+n01[view] [source] [discussion] 2023-11-06 01:28:38
>>exabri+u2
The notion of consent you're pushing does not have a legal basis and is also deeply silly.
◧◩◪◨
43. paulmd+Us1[view] [source] [discussion] 2023-11-06 06:29:47
>>mistri+iZ
I am not approaching this from a law-school perspective at all. I'm aware that I'm arguing against a massive amount of current legal doctrine and a social shift that would be monumental. I just know the idea of someone "owning" an idea is rotten in general, when we are all diffusion machines ourselves, who are riffing off everything everyone else is saying etc. I learn when other people post good shit, and that informs the things I tell to others. The student becomes the commercially-employed professional becomes the textbook author. It seems absurd to single out this one particular act of diffusion as being unique because it was done by a machine - is an art student not affected by a coca-cola ad or whatever? That's a commercial property too.

The deep-down reason people are concerned is because it reduces the cost of doing it to zero. And that taps into this whole other set of problems where the computer thingy says we can't eat because nobody has a job anymore, or is limited by the cost to automate with a reasonable solution, etc. Plus a whole host of others besides.

I have no idea how you reward significant creative or R&D effort in a relatively post-IP society, where the cost of defining any idea is just some prompt. Pretending like any sort of IP ownership can be enforced in this thing is crazy though. We are seeing the cost of replicating intellectual property driven down to the actual economic-minimum cost basis.

It's absolutely not capitalism's job to ride out the population through whatever weird economic shit comes next, when the idea of IP law generally gets mushy and melts away. Right? There is a lot of managerial or creative work that can be completely displaced by this. Why even have a farmer watching the farm once the cropwatch 5000 is built? And physical labor obviously it's just a matter of cost.

You can't have everyone's salary be constrained by the actual cost to replace, because that's going to get a ton lower. And that's good, it lets us all move up an abstraction layer, and also have more time for leisure etc. It's just not going to be evenly distributed, at all. But we could be talking about a post-scarcity utopia before terribly long, if we want to. Why not just let the robots make the phones and the food and we just hike mountains and do art or whatever? How does an economy work in a situation where most of the actual work is automated and most people don't actually work?

It's super time for a livable, non-phased basic income. It's going to need a while to phase in (probably at least 10 if not 20-30 years) but like, the numbers on the cost aren't going to be any more appealing in another 15 years of watching AI displace everyone.

In general I kind of like the idea of "unregistered vs registered copyright" where you have some default rights of the work itself, and if you register it you receive more significant protections etc. If you're Intel, argue the value you added to create x86 etc and how you've supported it for 20 years, etc. The idea would be to combine and replace patents and copyright and IP in general, you have sort of a "right of creation" or sweat-of-the-brow intellectual ownership and right to exploit the work. The more effort and work, the larger the argument that some competitor ripping you off is intellectually unfair - sort of an actual-damages model.

But I'm also strongly against derivative works being illegal once the idea has been released into the public... but neither do I want to encourage trade-secrets-ism. I think that issue is probably overblown though, reverse engineering/etc can clear up a lot of trade secrets pretty quick. And I think some common-law norms of unfair exploitation of IP would develop (and could flux over time) such that we don't need to go after slash fiction because it violates your cinematic universe, but a large competitor ripping it off might be unfair.

The original creator will always have a period of exclusivity for at least the time to replicate, even in a true zero-IP-rights scenario. Making a chip takes 6-12 months anyway, for example. Recreating some breakthrough drug (hopefully in a better way) and getting it through trials takes time. And nobody is confused by knockoff works from small-time non-commerical operators etc. There are still a lot of factors in favor of actual innovation here, it's not nothing either, and I'm proposing a sweat-of-the-brow system to equalize the instances where that fails or is unduly exploited.

◧◩◪
44. gumbal+2J1[view] [source] [discussion] 2023-11-06 09:44:30
>>gagany+UZ
It is a massive database, but not in the classic sense. I believe your comment is a projection.
◧◩◪◨
45. gumbal+7J1[view] [source] [discussion] 2023-11-06 09:45:03
>>gagany+KZ
Outside the organised AI bubble, that's what it's called, I'm afraid.
replies(1): >>gagany+3D3
◧◩◪◨⬒
46. gagany+3D3[view] [source] [discussion] 2023-11-06 19:44:05
>>gumbal+7J1
Well, no. The courts disagree. You're in a bubble that you should leave.
◧◩◪◨⬒⬓
47. Turing+Ip8[view] [source] [discussion] 2023-11-08 02:58:20
>>rvz+Vd
Your first reference is an opinion of the lawyers for a concerned party, i.e., meaningless. Lawyers make nonsensical claims all the time. It's one of the things they get paid for.

The situation described in your second reference is already unlawful, regardless of how the image was produced. You're not allowed to make commercial use of images of Scarlett Johansson even if you scratch them on a cave wall with a broken deer antler.
