zlacker

[parent] [thread] 22 comments
1. manuel+(OP)[view] [source] 2023-07-08 08:05:33
If you run a store on Main Street, should people be allowed to take pictures of your store, copy its contents, and put them up for sale in another store?

I see this argument made over and over again here on HN and it’s puzzling that people always stop at the first part.

Companies won’t stop at the “look at your content” phase. They will use the knowledge gathered by looking at your content to do something else. That’s the problematic part.

replies(3): >>safety+K >>konsch+V >>chii+j7
2. safety+K[view] [source] 2023-07-08 08:14:46
>>manuel+(OP)
...Yes?

Retail companies research what other retail companies are doing and copy them all the time... was the answer supposed to be no here?

replies(3): >>nxpnsv+U >>rat998+01 >>manuel+L2
◧◩
3. nxpnsv+U[view] [source] [discussion] 2023-07-08 08:17:07
>>safety+K
Often they have signs forbidding you from taking photos in stores… I guess that’s a bit like robots.txt.
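
(For reference, a rough sketch of what honouring that kind of "sign" looks like on the crawler side, using Python's stdlib robots.txt parser; the bot name and rules below are made-up examples:)

    from urllib.robotparser import RobotFileParser

    # A made-up robots.txt that tells one hypothetical bot to stay out entirely.
    robots_txt = """\
    User-agent: ExampleAIBot
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    print(parser.can_fetch("ExampleAIBot", "https://example.com/page"))   # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))   # True
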
replies(1): >>delfin+m91
4. konsch+V[view] [source] 2023-07-08 08:17:14
>>manuel+(OP)
I don't think that's problematic. That's how societies work. They learn.
replies(2): >>gumbal+82 >>manuel+V2
◧◩
5. rat998+01[view] [source] [discussion] 2023-07-08 08:18:25
>>safety+K
It is a no, depending on what they sell. If they sell original pictures, you cannot copy those. You are allowed to sell the same kinds of products, but not to copy theirs.
replies(1): >>safety+P2
◧◩
6. gumbal+82[view] [source] [discussion] 2023-07-08 08:30:53
>>konsch+V
AI is not “societies” or “people” and it most certainly doesn’t “learn” as the two would. Perhaps that’s what OpenAI’s effective marketing campaign taught gullible folks, but that’s not how it works at all. A”I” ingests massive amounts of people’s intellectual work, often without consent, mixes it, and resells it without royalties.
replies(1): >>chii+t7
◧◩
7. manuel+L2[view] [source] [discussion] 2023-07-08 08:39:11
>>safety+K
And is your point that that’s ok?
replies(1): >>safety+84
◧◩◪
8. safety+P2[view] [source] [discussion] 2023-07-08 08:39:29
>>rat998+01
You can take a photo of someone else's copyrighted picture (photo, art, whatever). Or any other merchandise they're selling. You can even do it while you're on their property, standing next to a sign that says no photos allowed. All legal.

The business has the right to ask you to leave if you violate their policies. In fact, they can ask you to leave for (almost) any reason at all. They may have some limited right to remove you using a reasonable amount of force, depending on the jurisdiction.

Once you've left or been removed from their property, you still have the legal right to take photos of it from the public place you're now standing in. If you can view the photos or art they're selling through their window, you can keep taking photos of it.

They don't have the right to confiscate your camera or the pictures you took. Your rights in terms of what you can do with those photos may have limitations (e.g. redistribution, reproduction), particularly if you photographed copyrighted works.

This is why the parent's comment confused me so much. In most of the world you live in a society where yeah you have the freedom to take photos of stuff, or copy it down on a clipboard or whatever, and use it as competitive intelligence to improve your own business. And thousands of businesses are doing it every day.

replies(1): >>manuel+X3
◧◩
9. manuel+V2[view] [source] [discussion] 2023-07-08 08:41:01
>>konsch+V
“How societies work” can be used to justify essentially everything and I do not think it’s a good argument.
◧◩◪◨
10. manuel+X3[view] [source] [discussion] 2023-07-08 08:54:36
>>safety+P2
Everything you wrote ignores the fact that the content taken from websites is not just parked there to be used as “competitive intelligence”.

It becomes an integral part of a business product. That is the problematic part.

You going into a store and taking pictures of some art to use as reference material is not an issue.

But if you take those pictures and use them to make a program that then spits out new art that is just a mix of those images patched together, then, imo, that’s an issue.

replies(1): >>safety+06
◧◩◪
11. safety+84[view] [source] [discussion] 2023-07-08 08:55:59
>>manuel+L2
Maybe I am not understanding your point?

Of course it's OK to take note of what stock is on a store's shelf, go back to your own business, and sell the same stock. It's also ubiquitous. It is de facto practiced globally by everyone, it's generally legal, and it's morally fine. Broadly speaking we call this competitive intelligence or market intelligence.

replies(1): >>manuel+u5
◧◩◪◨
12. manuel+u5[view] [source] [discussion] 2023-07-08 09:14:51
>>safety+84
My point is that these analogies fail to capture the actual reality of AI products and their relationship with source content.

The source content is part of the AI product. There is no AI product without the source content.

This is not you going to a store, seeing what they sell, and adjusting your offering. You have no offering without the original store’s content.

◧◩◪◨⬒
13. safety+06[view] [source] [discussion] 2023-07-08 09:20:04
>>manuel+X3
It sounds to me like we agree. With respect, people have a lot more rights than they realize when it comes to taking photos of stuff in public (or semi-public) places, which is the scenario in your analogy. But this has questionable bearing on whether an AI can scoop up Internet content and do something with it.

I think it's almost a guarantee that courts will start finding exact AI reproductions of copyrighted work to be infringement.

Where the analogy might come into play is that if you take a photo of a copyrighted work, there are limitations on what you can do with your photo without infringing on that copyright. I have no idea whether the courts will apply that stuff to AI; for instance, there's actually a fair bit of leeway if you take a photo which contains only a portion of a copyrighted work and then want to sell or redistribute that photo. One might argue that this legal principle applies to AI as well... lawyers are already having a field day with this stuff, I'm sure.

replies(1): >>Spivak+ID
14. chii+j7[view] [source] 2023-07-08 09:35:06
>>manuel+(OP)
> copy its content and put it up for sale on another store?

They aren't copying the content. They are learning from the content and producing more like it, but not a copy.

replies(1): >>denton+gf1
◧◩◪
15. chii+t7[view] [source] [discussion] 2023-07-08 09:36:57
>>gumbal+82
> ingests massive amounts of people’s intellectual work, often without consent, mixes it and resells it without royalties.

But when people do that, it is allowed, isn't it? So what is special about AI, other than the scale?

replies(1): >>gumbal+f8
◧◩◪◨
16. gumbal+f8[view] [source] [discussion] 2023-07-08 09:46:38
>>chii+t7
This debate is becoming tiring - yes, humans are allowed to, subject to terms and conditions. We could use the same argument to claim that a database is just human memory at scale, and thus it should be allowed to store any data it wants and then serve it, yet we don’t permit that. Similarly, a laptop can sing because, just like a human, it emits sound, yet you have to pay for what it emits.

AI is software; it doesn’t “learn” as a human does, and even if it did, it would still have to be bound by the same rules as any other piece of software and any human.

replies(1): >>chii+yb
◧◩◪◨⬒
17. chii+yb[view] [source] [discussion] 2023-07-08 10:32:26
>>gumbal+f8
> it would still have to be bound by the same rules as any other piece of software and human.

Exactly, so there's zero reason to prevent anyone from using a piece of software (which slurps a lot of information off the internet) to produce new works that do not infringe currently copyrighted content.

replies(1): >>gumbal+dz
◧◩◪◨⬒⬓
18. gumbal+dz[view] [source] [discussion] 2023-07-08 14:08:08
>>chii+yb
Well, that goes without saying. The issue is not the tool; the issue is how it’s created and used. There’s no problem in using publicly available, AI-friendly licensed content. The issue is using copyrighted content without consent and without honouring licensing terms.
replies(1): >>chii+Nc2
◧◩◪◨⬒⬓
19. Spivak+ID[view] [source] [discussion] 2023-07-08 14:38:42
>>safety+06
> I think it's almost a guarantee that courts will start finding exact AI reproductions of copyrighted work to be infringement.

That was never not true. The difference is that AI can't violate copyright, only humans can. The legal not-so-gray area is whether "spat out by an AI after prompting" is a performance of the work and if so, what human is responsible for the copying.

replies(1): >>Anthon+nU
◧◩◪◨⬒⬓⬔
20. Anthon+nU[view] [source] [discussion] 2023-07-08 16:22:35
>>Spivak+ID
Except that they almost never do exact reproductions of a work. If you were trying to do it on purpose you'd have to do some significant prompt engineering to get it to even come close. Because the nature of it is to smush together thousands of different things, not photocopy one in particular.

The exceptions will be like, pictures of a specific city's skyline. Not because it's copying a particular image, but because that's what that city's skyline looks like, so that's how it looks in an arbitrary picture of it. But those are the pictures that lack original creativity to begin with -- which is why the pictures in the training data are all the same and so is the output.

And people seem to make a lot of the fact that it will often reproduce watermarks, but the reason it does that isn't that it's copying a specific image. It's that there are a large number of images of that subject with that watermark. So even though it's not copying any of them in particular, it's been trained that pictures of that subject tend to have that watermark.

Obviously lawyers are going to have a field day with this, because this is at the center of an existing problem with copyright law. The traditional way you show copying is similarity (and access). Which no longer really means anything because you now have databases of billions of works, which are public (so everyone has access), and computers that can efficiently process them all to find the existing work which is most similar to any new one. And if you put those two works next to each other they're going to look similar to a human because it's the 99.9999999th percentile nearest match from a database of a billion images, regardless of whether the new one was actually generated from the existing one. It's the same reason YouTube Content ID has false positives -- except that its database only includes major Hollywood productions. A large image database would have orders of magnitude more.
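
A rough sketch of that kind of "find the most similar existing work" lookup, using a toy 64-bit average hash rather than the robust fingerprints or learned embeddings a real matching system would use (the image file names are placeholders):

    # Toy nearest-match search over an image "database" using an average hash.
    # The principle -- pick the closest existing work by distance -- is the same
    # one used at much larger scale by real content-matching systems.
    from PIL import Image

    def average_hash(path, size=8):
        # Downscale to size x size grayscale, threshold each pixel at the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    # Placeholder file names: the existing works, plus one new image to check.
    database = {p: average_hash(p) for p in ["work1.png", "work2.png", "work3.png"]}
    new_hash = average_hash("new_image.png")

    closest = min(database, key=lambda p: hamming(database[p], new_hash))
    print(closest, hamming(database[closest], new_hash))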

◧◩◪
21. delfin+m91[view] [source] [discussion] 2023-07-08 17:48:29
>>nxpnsv+U
Yes, inside the store, which is private property, they can legally enact such restrictions. Outside the store, not so much.
◧◩
22. denton+gf1[view] [source] [discussion] 2023-07-08 18:17:15
>>chii+j7
If you record 10 billion parameters from a 3-megapixel image, it's kinda disingenuous to pretend you haven't copied the image.
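
(For rough scale, assuming 3 bytes per pixel and 4 bytes per parameter: a 3-megapixel image is about 9 MB of raw data, while 10 billion parameters come to about 40 GB.)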
◧◩◪◨⬒⬓⬔
23. chii+Nc2[view] [source] [discussion] 2023-07-09 02:16:33
>>gumbal+dz
> ai friendly licensed content

> The issue is using copyrighted content without consent

The consent is given implicitly if the content is available to the public for viewing. The copyright isn't being violated by an AI training model, as the content isn't copied. The information contained within the works is not what's being copyrighted - it's the expression.

If the AI training algorithm is capable of extracting the information out of the works and using it in another environment as part of some other works, you cannot claim copyright over such information.

This applies to style, patterns and other abstract information that could be extracted from works. It's as if a chef, upon reading many recipe books, produces a new recipe book (that contains information extracted from them) - the original creators of those recipe books cannot claim said chef had violated any copyright.
