zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim (still brazenly present in the company's name) out of the obscurity of years-old HN comments and into the public, mainstream light.

They've achieved marvellous things, OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the advantage in a field they are leading.

2. ripper+yr[view] [source] 2023-03-01 12:38:21
>>mellos+pe
To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.
3. 93po+7N[view] [source] 2023-03-01 14:52:52
>>ripper+yr
OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.
4. ejb999+vS[view] [source] 2023-03-01 15:25:05
>>93po+7N
seriously doubt it - what they are doing, others can do - and if they start generating a lot of revenue, it will attract competition - lots of it.

They don't have a moat big enough that many millions of dollars can't defeat.

5. hypert+EV[view] [source] 2023-03-01 15:44:40
>>ejb999+vS
What if they have an internal ChatGPTzero, training and reprogramming itself, iterating at inhuman speed? A headstart in an exponential is a moat.

It surely will have huge blind spots (and people do too), but perhaps it will be good enough for self-improvement... or will be soon.

6. jasonj+sd1[view] [source] 2023-03-01 16:55:35
>>hypert+EV
It's a fundamentally different problem. AlphaZero (DeepMind) could be trained this way because it was set up with an explicit reward function and end condition. Competitive self-play needs a reward function.

It can't just "self-improve towards general intelligence".

What's the fitness function of intelligence?

7. hypert+iL1[view] [source] 2023-03-01 18:57:25
>>jasonj+sd1
Thanks, good point. Thinking aloud:

Can ChatGPT evaluate how good ChatGPT-generated output is? This seems prone to exaggerating blind spots, but OTOH, creating and criticising are different skills, and criticising is usually easier.
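(A toy sketch of that generate-then-critique idea, assuming the thread's premise that criticising is cheaper than creating. All names and the scoring task here are hypothetical stand-ins; a real system would use a language model for both roles. The point is just the loop shape: generate many candidates, let the critic keep the best.)

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    """Hypothetical generator: returns a noisy candidate answer."""
    words = prompt.split()
    rng.shuffle(words)
    return " ".join(words)

def critique(prompt: str, candidate: str) -> float:
    """Hypothetical critic: scores how much of the prompt's word order
    the candidate preserves. Much cheaper to run than generate()."""
    target = prompt.split()
    got = candidate.split()
    return sum(a == b for a, b in zip(target, got)) / len(target)

def best_of_n(prompt: str, n: int = 50, seed: int = 0) -> str:
    """Best-of-n sampling: the critic picks the generator's best attempt."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: critique(prompt, c))
```

Note the caveat from the comment above baked into this sketch: both roles share the same scoring function, so any blind spot the critic has is exactly the error the loop will amplify rather than correct.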

8. hypert+f73[view] [source] 2023-03-02 04:21:14
>>hypert+iL1
> What's the fitness function of intelligence?

Nothing general, but there are IQ tests and undergraduate examinations. You can also involve humans in the loop (though that doesn't iterate as fast): usage of ChatGPT, CAPTCHAs, votes on reddit/hackernews/stackexchange, even paying people to evaluate it.

Going back to the moat: even ordinary technology tends to improve, and a head start can be maintained, provided it's possible to improve it. So one question is whether ChatGPT is a kind of plateau that can't be improved very much, so others catch up while it stands still, or whether it's still on a curve.

A significant factor is whether being ahead helps you stay ahead. Crucially, you gather usage data that is unavailable to followers, and this data matters more for this type of technology than for any other (see above).
