zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it takes the betrayal of OpenAI's foundational claim, one still brazenly present in the company's name, out of the obscurity of years of HN comments and into the public, mainstream light.

OpenAI has achieved marvellous things, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the advantage in a field they are leading.

◧◩
2. ripper+yr[view] [source] 2023-03-01 12:38:21
>>mellos+pe
To quote Spaceballs, they're not doing it for money, they're doing it for a shitload of money.
◧◩◪
3. 93po+7N[view] [source] 2023-03-01 14:52:52
>>ripper+yr
OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.
◧◩◪◨
4. berkle+8T[view] [source] 2023-03-01 15:28:49
>>93po+7N
It’s just an autocomplete engine. Someone else will achieve AGI, and OpenAI will fall apart very quickly when that happens.
◧◩◪◨⬒
5. HarHar+bC1[view] [source] 2023-03-01 18:21:19
>>berkle+8T
No, it's not just an autocomplete engine. The underlying neural network architecture is a transformer. It certainly can do "autocomplete" (or riffs on autocomplete), but it can also do a lot more. It doesn't take much thought to realize that being REALLY good at autocomplete means that you need to learn how to do a lot of other things as well.

At the end of the day the "predict next word" training goal of LLMs is the ultimate intelligence test. If you could always answer that "correctly" (i.e. intelligently) you'd be a polymath genius. Focusing on the "next word" ("autocomplete") aspect of this, and ignoring the knowledge/intelligence needed to do WELL at it is rather misleading!

"The best way to combine quantum mechanics and general relativity into a single theory of everything is ..."

◧◩◪◨⬒⬓
6. goatlo+t23[view] [source] 2023-03-02 03:37:58
>>HarHar+bC1
Wouldn't the ultimate intelligence test involve manipulating the real world? That seems orders of magnitude harder than autocompletion. For a theory of everything, you would probably have to perform some experiments that don't currently exist.
◧◩◪◨⬒⬓⬔
7. HarHar+3O4[view] [source] 2023-03-02 17:08:01
>>goatlo+t23
> Wouldn't the ultimate intelligence test involve manipulating the real world?

Perhaps, although intelligence and knowledge are two separate things, so one can display intelligence over a given set of knowledge without knowing other things. Of course intelligence isn't a scalar quantity - to be super-intelligent you want to display intelligence across the widest variety/type of experience and knowledge sets - not just "book smart or street smart", but both and more.

Certainly for parity with humans you need to be able to interact with the real world, but I'm not sure it's much different or a whole lot more complex. Instead of "predict next word", the model/robot would be doing "predict next action", followed by "predict action response". Embeddings are a very powerful, general-purpose representation - you can embed words, but also perceptions etc. - so I don't think we're very far at all from having similar transformer-based models that act and perceive. I'd be somewhat surprised if people aren't already experimenting with this (a sketch of the idea below).
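
Here's a toy sketch of that "predict next action" / "predict action response" loop, assuming perceptions and actions are both just tokens in one shared embedding space. The environment and all the token ids are made up - this is the shape of the idea, not a real system:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical shared token space: ids 0..49 are perception tokens,
    # ids 50..99 are action tokens. Both go through the same embedding,
    # so "predict next action" is just next-token prediction on the trace.
    vocab, d_model = 100, 32
    emb = nn.Embedding(vocab, d_model)
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    head = nn.Linear(d_model, vocab)

    def fake_env(action_id):
        # Stand-in for the real world: responds with a random perception token.
        return torch.randint(0, 50, (1,)).item()

    trace = [fake_env(None)]  # start from an initial observation
    for _ in range(5):
        x = emb(torch.tensor([trace]))
        mask = nn.Transformer.generate_square_subsequent_mask(len(trace))
        logits = head(layer(x, src_mask=mask))[0, -1]
        action = 50 + logits[50:].argmax().item()  # "predict next action"
        trace += [action, fake_env(action)]        # then "predict action response"
    print(trace)

Same architecture, same objective - only the tokens now alternate between what the model perceives and what it does.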

◧◩◪◨⬒⬓⬔⧯
8. goatlo+p05[view] [source] 2023-03-02 17:56:17
>>HarHar+3O4
The biggest challenge might be the lack of training data when it comes to robotics and procedural tasks that aren't captured by language.