zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. 4bpp+65 2022-12-15 12:25:25
>>dredmo+(OP)
Surely, if the next Stable Diffusion had to be trained on a dataset purged of images that were not under a permissive license, this would be at most a minor setback on AI's road to obsoleting painting that is more craft than art. Do artists not realise this (perhaps because they have some kind of conceit along the lines of "it can only produce good-looking images because it is rearranging pieces of some Real Artists' works it was trained on"), are they hoping to inspire overshoot legislation (perhaps something following the music industry model in several countries: AI-generated images presumed pirated until proven otherwise, with protection money paid to an artists' guild), or is this just a desperate rearguard action?
2. orbifo+V9 2022-12-15 12:49:38
>>4bpp+65
I think this drastically overestimates what current AI algorithms are actually capable of; there is little to no hint of genuine creativity in them. They are currently limited far more by the amount of high-quality training data than by model size. They are really mostly copying whatever they were trained on, but at a scale where the result appears indistinguishable from intelligent creation. As humans, we don't have to agree that our collective creative output can be harvested and used to train our replacements. The benefits of allowing this will go to a very small group of corporations and individuals, while everyone else will lose out if this continues as is. This can and will turn into an existential threat to humanity, so it is different from workers destroying mechanical looms during the industrial revolution. Our existence is at stake here.
3. idleha+Sh 2022-12-15 13:31:16
>>orbifo+V9
This has been a line of argument from every Luddite since the start of the industrial revolution. But it is not true. Almost all of the productivity gains of the last 250 years have been dispersed into the population. A few early movers have managed to capture some fraction of the value created by new technology; the vast majority has gone to improving people's quality of life, which is why we live longer and richer lives than any generation before us. Some will lose their jobs, and that is fine, because human demand for goods and services is infinite: there will always be jobs to do.

I really doubt that AI will somehow be our successor. Machines and AI need microprocessors so complex that it took 70 years of exponential growth and multiple trillion-dollar tech companies to train even these frankly quite unimpressive models. These AIs are entirely dependent on our globalized value chains, with capital costs so high that there are multiple points of failure.

A human needs just food, clean water, a warm environment and some books to carry civilization forward.

4. orbifo+9r 2022-12-15 14:16:26
>>idleha+Sh
There is a significant contingent of influential people who disagree: "Why the Future Doesn't Need Us" (https://www.wired.com/2000/04/joy-2/), Ray Kurzweil, etc. This is qualitatively different from what the Luddites faced; it concerns all of us and touches the essence of what makes us human. This isn't the kind of technology that has the potential to make our lives better in the long run; it will almost surely be used for more harm than good. Not only are these models trained on the collectively created output of humanity, their key application areas are subjugating, controlling and manipulating us. I agree with you that this will not happen immediately, because of the very real complexities of physical manufacturing, but if this part of the process isn't stopped in its tracks, the resulting progress is unlikely to be curtailed. At the very least, I fundamentally think that using all of our data and output to train these models is unethical, especially if the output is not freely shared and made available.