zlacker

1. unshav+(OP) 2022-12-15 19:40:11
But the ML model itself doesn't have intention. The author of the model does, and that, I would think, is no different from an artist who purposefully makes copied/derivative work.

TBH, given how derivative humans tend to be, with our far deeper "Human Learning" model and years and years of experience... I'm kinda shocked ML is even capable of appearing non-derivative. Throw a child in a room, starve it of any interaction, somehow (lol) feed it only select images, and then ask it to draw something... I'd expect it to perform similarly. A contrived example, but it illustrates the depth of our experience compared to ML's.

I half expect the "next generation" of ML will be fed a dataset many orders of magnitude larger, one that more closely matches our own: a video feed of years' worth of data, simulating the complex inputs that Human Learning gets to benefit from. If/when that day comes, I can't imagine we'll seem that much more unique than ML.

I should be clear, though: I am in no way defending how companies are using these products. I just don't agree that we're so unique in how we think or how we create, or that we're truly unique in any way, shape, or fashion. (Code, Input) => Output is all I think we are, I guess.

replies(1): >>wnkrsh+rI2
2. wnkrsh+rI2 2022-12-16 14:43:08
>>unshav+(OP)
Of course it's the intention of the user that matters here. I just see that these models make it easy to produce extremely derivative works from existing artists' work, and I feel that's an unethical use of unethically sourced models.

Anyone finding their own artistic voice with these tools, I respect that; those people are artists. But training with the aim of creating derivative models, that should be called out.
