zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it takes the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of years of HN comments and into the public, mainstream light.

They've achieved marvellous things, OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially given the enormous ethical implications of holding the lead in this field.

2. ripper+yr[view] [source] 2023-03-01 12:38:21
>>mellos+pe
To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.
3. 93po+7N[view] [source] 2023-03-01 14:52:52
>>ripper+yr
OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.
4. ejb999+vS[view] [source] 2023-03-01 15:25:05
>>93po+7N
seriously doubt it - what they are doing, others can do - and if they start generating a lot of revenue, it will attract competition - lots of it.

They don't have a moat so big that many millions of dollars can't defeat it.

5. hypert+EV[view] [source] 2023-03-01 15:44:40
>>ejb999+vS
What if they have an internal ChatGPTzero, training and reprogramming itself, iterating at inhuman speed? A headstart in an exponential is a moat.

It will surely have huge blind spots (and people do too), but perhaps it will be good enough for self-improvement... or soon will be.
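Back-of-envelope sketch of the headstart claim (toy Python with invented numbers; not anyone's actual capability curve):

    # Toy model of "a headstart in an exponential is a moat".
    RATE = 1.05       # 5% capability gain per iteration (assumed)
    HEADSTART = 50    # iterations the leader compounds alone (assumed)

    leader = chaser = 1.0
    for _ in range(HEADSTART):
        leader *= RATE            # leader improves before the chaser starts

    for _ in range(100):          # then both improve at the same rate
        leader *= RATE
        chaser *= RATE

    print(f"lead ratio:   {leader / chaser:.1f}x")  # constant, ~11.5x
    print(f"absolute gap: {leader - chaser:.1f}")   # keeps widening

Equal rates leave the ratio fixed and the absolute gap compounding; the chaser only closes it by compounding faster.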

6. jasonj+sd1[view] [source] 2023-03-01 16:55:35
>>hypert+EV
It's a fundamentally different problem. AlphaZero (DeepMind) could be trained this way because it was set up with an explicit reward function and end condition. Competitive self-play needs a reward function.

It can't just "self-improve towards general intelligence".

What's the fitness function of intelligence?
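For contrast, here is roughly what an explicit reward and end condition buy you: a complete, runnable self-play loop over a solved toy game (illustrative Python, names and numbers all mine, nothing like DeepMind's actual code):

    import random

    # Toy competitive self-play: players alternate adding 1 or 2 to a
    # counter; whoever reaches exactly 10 wins. The explicit end
    # condition (total == TARGET) and explicit reward (+1 win, -1 loss)
    # are the two ingredients AlphaZero-style training depends on.
    TARGET = 10

    def play_episode(q, eps=0.1):
        total, player = 0, 0
        history = [[], []]                  # (state, move) pairs per player
        while total < TARGET:               # explicit end condition
            moves = [m for m in (1, 2) if total + m <= TARGET]
            if random.random() < eps:
                m = random.choice(moves)    # explore
            else:
                m = max(moves, key=lambda a: q[(total, a)])
            history[player].append((total, m))
            total += m
            player ^= 1
        return history, player ^ 1          # the player who just moved won

    def train(episodes=5000, lr=0.1):
        q = {(s, m): 0.0 for s in range(TARGET) for m in (1, 2)}
        for _ in range(episodes):
            history, winner = play_episode(q)
            for p in (0, 1):
                reward = 1.0 if p == winner else -1.0   # explicit reward
                for s, m in history[p]:
                    q[(s, m)] += lr * (reward - q[(s, m)])
        return q

    q = train()   # typically converges on the winning line: land on 1, 4, 7, 10

Delete the end condition and the +1/-1 and there's nothing left to regress toward; that's exactly what "self-improve towards general intelligence" is missing.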

7. pixl97+oI1[view] [source] 2023-03-01 18:45:22
>>jasonj+sd1
Actually, you're describing two different problems at once.

A lot of intelligence is built around "don't die; also, have babies". AI doesn't have that issue: as long as it produces good enough answers, we'll keep the power flowing to it.

The bigger issue, and likely the far more dangerous one, is "OK, what if it could learn like this?" Then you've created a superintelligence with access to huge amounts of global information, one that's pretty much immortal unless humans decide to pull the plug. The expected behaviour of such a machine is to engineer a scenario where we can't unplug it without suffering some loss (monetary is always a good one; rich people never like to lose money and would let an AI burn the earth first).

I don't believe alignment of a potential superintelligence is a solvable problem. Getting people not to be betraying bastards is hard enough at standard intelligence levels; something potentially way smarter and connected to way more data is not going to be controllable in any form or fashion.
