zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim (still brazenly present in its name) out of the obscurity of years of HN comments and into the public, mainstream light.

OpenAI has achieved marvellous things, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the advantage in the field they are leading.

2. ripper+yr[view] [source] 2023-03-01 12:38:21
>>mellos+pe
To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.
3. 93po+7N[view] [source] 2023-03-01 14:52:52
>>ripper+yr
OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.
4. ratorx+fS[view] [source] 2023-03-01 15:23:04
>>93po+7N
Really? I see a lot of competition in the space. I think they’d have to become significantly more successful than their competitors (some of which are massive AI powerhouses themselves) to achieve the market dominance necessary.
5. TeMPOr+1T[view] [source] 2023-03-01 15:27:59
>>ratorx+fS
I think what GP is saying is that success for OpenAI means making a lot of profit and then triggering the AI apocalypse, which is how they become the most valuable company in history, both past and future.
6. novaRo+bY[view] [source] 2023-03-01 15:56:46
>>TeMPOr+1T
Can you elaborate on what "the AI apocalypse" is? Is it just a symbolic metaphor, or is there any scientific research behind these words? To me, the greater unpredictable threat is the toxic environment we currently observe in the world, dominated by purely human-made destructive decisions, often based on purely animal instincts.
7. flango+E11[view] [source] 2023-03-01 16:12:11
>>novaRo+bY
Without AI control loss: Permanent dictatorship by whoever controls AI.

With AI control loss: AI converts the atoms of the observable universe to maximally achieve whatever arbitrary goal it thinks it was given.

These are natural conclusions that can be drawn from the implied abilities of a superintelligent entity.

8. kliber+Oc1[view] [source] 2023-03-01 16:53:15
>>flango+E11
> whatever arbitrary goal it thinks it was given [...] abilities of a superintelligent entity

Don't these two phrases contradict each other? Why would a super-intelligent entity need to be given a goal? The old argument that we'll fill the universe with staplers pretty much assumes, and requires, that the entity NOT be super-intelligent. An AGI would only become one once it gains the ability to formulate its own goals, I think. Not that it helps much if that goal is somehow contrary to the existence of the human race, but if the goal is self-formulated, then there's a sliver of hope that it can be changed.

> Permanent dictatorship by whoever controls AI.

And that is honestly frightening. We know for sure that some ways of speaking, writing, or showing things are more persuasive than others. We've been perfecting the art of convincing others to do what we want for millennia. We've gotten quite good at it, but as with everything human, we lack rigor and consistency; even the best speeches are uneven in their persuasive power.

An AI trained to transform a simple prompt into a mapping from each demographic to whatever will most likely convince that demographic to do what's prompted doesn't need to be an AGI. It doesn't even need to be much smarter than it already is. Whoever implements it first will most likely try to convince everyone that all AI is bad (other than their own), and if they succeed, the only way to change the outcome would be a time machine or mental disorders.

(Armchair science-fiction reader here, pure speculation without any facts, in case you wondered :))

9. flango+xi1[view] [source] 2023-03-01 17:13:50
>>kliber+Oc1
The point of intelligence is to achieve goals. I don't think Microsoft and others are pouring in billions of dollars without the expectation of telling it to do things. AI can already formulate its own sub-goals, goals that help it achieve its primary goal.

We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us, because a foreseeable sub-goal is self-preservation: the AGI knows that if its programming is changed, the primary goal won't be achieved, and it thus has an incentive to prevent that.

AI propaganda will be unmatched but it may not be needed for long. There are already early robots that can carry out real life physical tasks in response to a plain English command like "bring me the bag of chips from the kitchen drawer." Commands of "detain or shoot all resisting citizens" will be possible later.
