zlacker

[parent] [thread] 3 comments
1. AlienR+(OP)[view] [source] 2025-06-24 18:51:24
This is nonsense, in my opinion. You aren't "hearing" anything. You are literally creating a work, in this case, the model, derived from another work.

People need to stop anthropomorphizing neural networks. It's software, and software is a tool, and a tool is used by a human.

replies(2): >>tantal+t2 >>adinis+wn
2. tantal+t2[view] [source] 2025-06-24 19:01:13
>>AlienR+(OP)
It is easy to dismiss, but the burden of proof would be on the plaintiff to prove that training a model is substantially different from how the human mind learns. Good luck with that.
replies(1): >>AlienR+5H1
3. adinis+wn[view] [source] 2025-06-24 20:59:48
>>AlienR+(OP)
Humans are also created/derived from other works, trained, and used as a tool by humans.

It's interesting how polarizing the comparison of human and machine learning can be.

4. AlienR+5H1[view] [source] [discussion] 2025-06-25 11:31:28
>>tantal+t2
That makes no sense as a default assumption. It's like saying FSD is like a human driver. If it's a person, why doesn't it represent itself in court? What wages is it being paid? What are the labor rights of AI? How is it that the AI is only human-like when it's legally convenient?

What makes far more sense is saying that someone, a human being, took copyrighted data and fed it into a program that produces variations of the data it was fed. This is no different from a Photoshop filter, and nobody would ever need to argue in court that a Photoshop filter is not a human being.
