zlacker

[parent] [thread] 12 comments
1. orbifo+(OP)[view] [source] 2022-12-15 12:49:38
I think this drastically overestimates what current AI algorithms are actually capable of; there is little to no hint of genuine creativity in them. They are currently severely limited by the amount of high-quality training data, not by model size. They are really mostly copying whatever they were trained on, but at a scale where it appears indistinguishable from intelligent creation. As humans we don't have to agree that our collective creative output can be harvested and used to train our replacements. The benefits of allowing this will go to a very small group of corporations and individuals, while everyone else loses out if this continues as is. This can and will turn into an existential threat to humanity, which makes it different from workers destroying mechanical looms during the industrial revolution. Our existence is at stake here.
replies(4): >>idleha+X7 >>XorNot+ya >>nether+Dh >>rperez+7W1
2. idleha+X7[view] [source] 2022-12-15 13:31:16
>>orbifo+(OP)
This has been a line of argument from every Luddite since the start of the industrial revolution, but it is not true. Almost all the productivity gains of the last 250 years have been dispersed into the population. A few early movers have managed to capture some fraction of the value created by new technology, but the vast majority has gone to improving people's quality of life, which is why we live longer and richer lives than any generation before us. Some will lose their jobs, and that is fine, because human demand for goods and services is infinite; there will always be jobs to do.

I really doubt that AI will somehow be our successor. Machines and AI need microprocessors so complex that it took 70 years of exponential growth and multiple trillion-dollar tech companies to train even these frankly quite unimpressive models. These AIs are entirely dependent on our globalized value chains, with capital costs so high that there are multiple points of failure.

A human needs just food, clean water, a warm environment and some books to carry civilization forward.

replies(1): >>orbifo+eh
3. XorNot+ya[view] [source] 2022-12-15 13:46:53
>>orbifo+(OP)
> They are really mostly copying whatever they were trained on

People keep saying this without defining what exactly they mean. This is a technical topic, and it requires technical explanations. What do you think "mostly copying" means when you say it?

Because there isn't a shred of original pixel data reproduced from training data through to output data by any of the diffusion models. In fact there isn't enough data in the model weights to reproduce any images at all, without adding a random noise field.
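To make that concrete, here is a toy sketch of what diffusion sampling looks like. Everything here is a deliberately simplified stand-in (the "denoiser" is a hypothetical placeholder, not any real model), but the structure matches the point: generation starts from a random noise field and repeatedly applies a learned denoiser; no stored pixels are ever retrieved.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, t):
    # Stand-in for a neural net predicting the noise in x at step t.
    # A real model would also condition on the text prompt.
    return x * 0.1  # pretend this is the "predicted noise"

def sample(shape=(8, 8), steps=50):
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t)    # predict a noise component
        x = x - eps                 # remove a bit of it
        if t > 0:
            # small stochastic term, as in DDPM-style samplers
            x = x + 0.01 * rng.standard_normal(shape)
    return x

img = sample()
print(img.shape)  # (8, 8)
```

Change the seed and you get a different image; the "training data" never appears anywhere in the loop.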

> The benefits of allowing this will be had by a very small group of corporations and individuals

You are also grossly mistaken here. The benefits of heavily restricting this will go to a very small group of corporations and individuals. See, everyone currently converges on "you should be able to copyright a style" as the solution to the "problem".

Okay - let's game this out. US copyright lasts for the life of the author plus 70 years. No copyrighted work from today will enter the public domain until I am dead, my children are dead, and probably my grandchildren as well. But copyright can be traded and sold. And unlike individuals, who do die, corporations as legal entities do not. And corporations can own copyright.

What is the probability that any particular artistic "style" - however you might define that (a whole other topic, really) - is truly unique? People don't generally invent a style on their own - they build it up from studying other sources and come up with a mix. Whatever originality is in there is more a function of mutation in their ability to imitate styles than anything else - art students, for example, regularly do studies of famous artists and intentionally try to copy their style as best they can. A huge amount of content tagged "Van Gogh" in Stable Diffusion is actually Van Gogh look-alikes, or content literally labelled "X in the style of Van Gogh". It had nothing to do with the original man at all.

The answer, by my reckoning, is zero - there are no truly original art styles. Which means that in a world with copyrightable art styles, all art styles eventually end up part of corporate-owned styles. Or the opposite is also possible - maybe they all end up in the public domain. But in both cases the outcome is the same: if "style" becomes copyrightable, and AIs can reproduce it in some way you can prove, then literal "prior art" for any particular style will invariably be found in an existing AI dataset. Any new artist with a unique style will invariably be found to be simply a 95% blend of other known styles by an AI which has existed for centuries and been producing output constantly.

In the public-domain world, we wind up approximately where we are now: every few decades old styles get new words keyed into them as people want to keep up with some new rising artist who has captured a unique blend in the zeitgeist. In the corporate world, though - the more likely one - Disney turns up with its lawyers and says "we're taking 70%, or we're taking it all".

replies(2): >>orbifo+Dl >>alan-c+Qo1
4. orbifo+eh[view] [source] [discussion] 2022-12-15 14:16:26
>>idleha+X7
There is a significant contingent of influential people who disagree: "Why the Future Doesn't Need Us" (https://www.wired.com/2000/04/joy-2/), Ray Kurzweil, etc. This is qualitatively different from what the Luddites faced; it concerns all of us and touches the essence of what makes us human. This isn't the kind of technology that has the potential to make our lives better in the long run; it will almost surely be used for more harm than good. Not only are these models trained on the collectively created output of humanity, their key application areas are to subjugate, control and manipulate us. I agree with you that this will not happen immediately, because of the very real complexities of physical manufacturing, but if this part of the process isn't stopped in its tracks, the resulting progress is unlikely to be curtailed. I fundamentally think that using all of our data and output to train these models is unethical, especially if the output is not freely shared and made available.
replies(1): >>yeknod+2z
5. nether+Dh[view] [source] 2022-12-15 14:18:02
>>orbifo+(OP)
> They are really mostly copying whatever they were trained on, but on a scale that it appears indistinguishable from intelligent creation.

Which is what most humans do, and what most humans need.

replies(1): >>AuryGl+Ta2
6. orbifo+Dl[view] [source] [discussion] 2022-12-15 14:35:14
>>XorNot+ya
Ok, let me try to be technical. These models can fundamentally be understood as containing a parametrised model of an intractable probability distribution ("human-created images", "human-created text") which can be conditioned on a user-provided input ("show me three cats doing a tango", "give me a summary of the main achievements of Richard Feynman") and sampled from. They achieve their impressive performance by being exposed to as much human-created content as possible; once that has happened, they have limited to no means of self-improvement.
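In miniature, "a parametrised conditional distribution you sample from" looks like this toy word-level model. The probability table here is a hypothetical stand-in for billions of learned weights, and the vocabulary is made up for illustration; a real model replaces the table with a neural network conditioned on the whole prompt.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "tango", "."]

# p(next word | previous word) -- each row sums to 1.
table = {
    None:    [0.4, 0.4, 0.1, 0.1],
    "cat":   [0.1, 0.1, 0.5, 0.3],
    "dog":   [0.1, 0.1, 0.5, 0.3],
    "tango": [0.0, 0.0, 0.0, 1.0],
    ".":     [0.4, 0.4, 0.1, 0.1],
}

def sample_sequence(max_len=5):
    # Repeatedly sample from the conditional distribution,
    # feeding each output back in as the next condition.
    out, prev = [], None
    for _ in range(max_len):
        word = rng.choice(vocab, p=table[prev])
        out.append(word)
        if word == ".":
            break
        prev = word
    return out

print(sample_sequence())
```

The model can only emit recombinations of what its table encodes - which is the sense in which "exposure to training content" is the whole game.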

I disagree that there is no originality in art styles; human creativity amounts to more than just copying other people. There is no way a current-gen AI model could create truly original mathematics or physics; it is just able to reproduce facsimiles and convincing bullshit that looks like it. Before long the models will probably be able to do formal reasoning in a system like Lean 4, but that is a long way off from truly inventive mathematics or physics.

Art is more subtle, but what these models produce is mostly "kitsch". It is telling that their idea of "aesthetics" involves anime fan art and other commercial work. Anyway, I don't like the commercial aspects of copyright all that much, but what I do like is humans over machines. I believe in freely reusing and building on the work of others, but not in machines doing the same. Our interests are simply not aligned at this point.

7. yeknod+2z[view] [source] [discussion] 2022-12-15 15:23:50
>>orbifo+eh
It seems we are running out of ways to reinvent ourselves as machines and automation replace us. At some point, perhaps approaching, the stated goal of improving quality of life and reducing human suffering rings false. What is a human being if we have nothing to do? Where are the vast majority of people supposed to find meaning?
replies(3): >>ChadNa+Y31 >>snordg+Zl1 >>yeknod+UI1
8. ChadNa+Y31[view] [source] [discussion] 2022-12-15 17:28:47
>>yeknod+2z
I don't see why machines automatically producing art takes away the meaning of making art. There's already a million people much better at art than you or I will ever be producing it for free online. Now computers can do it too. Is that supposed to take away my desire to make art?
9. snordg+Zl1[view] [source] [discussion] 2022-12-15 18:54:42
>>yeknod+2z
Where do you find meaning in life today? What do you do on weekends and vacations?

Another place to look is the financially independent. What are they doing with their time?

10. alan-c+Qo1[view] [source] [discussion] 2022-12-15 19:08:33
>>XorNot+ya
Trying to be exact about "mostly copying", I want to contrast large language models (LLMs) with AlphaGo learning to play superhuman Go through self-play.

When AlphaGo adds one of its own self-vs-self games to its training database, it is adding a genuine game. The rules are followed. One side wins. The winning side did something right.

Perhaps the standard of play is low. One side makes some bad moves, the other side makes a fatal blunder, the first side pounces and wins. I was surprised that they got training through self-play to work; in the earlier stages the player who wins is only playing a little better than the player who loses, and it is hard to work out what to learn. But the truth of Go is present in the games and not diluted beyond recovery.

But an LLM is playing a post-modern game of intertextuality. It doesn't know that there is a world beyond language to which language sometimes refers. Is what an LLM writes true or false? It is unaware of either possibility. If its own output is added to the training data, that creates a fascinating dynamic. But where does it go? Without AlphaGo's crutch of the "truth" of which player won the game according to the hard-coded rules, I think the dynamics have no anchorage in reality and would drift, first into surrealism and then into psychosis.

One sees that AlphaGo is copying the moves it was trained on, and an LLM is also copying the moves it was trained on, and that these two things are not the same.
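The contrast can be sketched as two toy training loops. Everything below is a deliberately simplified stand-in (an assumption for illustration), not AlphaGo's or any LLM's actual pipeline: in the first loop every generated example carries a ground-truth outcome decided by hard-coded rules; in the second, output is fed back into the corpus with nothing external ever checking it against reality.

```python
import random

def play_game():
    # Self-play: the rules are hard-coded, so every finished game
    # carries a verified outcome the learner can anchor on.
    moves = [random.randint(0, 9) for _ in range(10)]
    winner = "A" if sum(moves) % 2 == 0 else "B"  # the rule decides truth
    return moves, winner

def generate_text(corpus):
    # LLM-style: the "model" just recombines its corpus. When its
    # output is appended, no rule ever says which sentence is true.
    return random.choice(corpus) + " " + random.choice(corpus)

random.seed(0)
games = [play_game() for _ in range(3)]        # each has a ground truth
corpus = ["the sky is blue", "the sky is green"]
for _ in range(3):
    corpus.append(generate_text(corpus))       # drifts, unanchored
print(games[0][1], len(corpus))
```

Run the second loop long enough and the corpus is dominated by its own recombinations - the drift the comment above describes.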

11. yeknod+UI1[view] [source] [discussion] 2022-12-15 20:36:47
>>yeknod+2z
I've been lucky enough to build and make things and to work in jobs where I can see the product of my work - real, tangible, creative, and extremely satisfying. I can only do this work as long as people want and need the work to be done.
12. rperez+7W1[view] [source] 2022-12-15 21:41:42
>>orbifo+(OP)
Exactly this, and it was clear from the backlash SD 2.0 got after they removed artist labels and became 'less creative'. Most people are not interested in the creative aspect, just looking for an easy way to copy art from people they admire.
13. AuryGl+Ta2[view] [source] [discussion] 2022-12-15 23:06:18
>>nether+Dh
And everything else is just copying with either small tweaks or combinations. There’s a reason art went through large jumps in understanding from cave paintings to where we are today.

I was the first photographer I knew of that combined astrophotography with wedding portraiture. That was new. Now lots of people do it - far better than me (I rarely get the chance)!

I’m a small fry so they almost assuredly didn’t get the idea from me, before anyone says I claim otherwise. There were probably a few photographers who thought to do it and now everybody has seen it and emulates it. The true artists put just a little spin on it, from which others will learn. So it goes.
