zlacker

[return to "Three senior researchers have resigned from OpenAI"]
1. quickt+U4[view] [source] 2023-11-18 07:51:26
>>convex+(OP)
Makes me wonder whether to keep building on OpenAI. They have an API, and it takes effort to build on that vs. something else. I am small fry, but maybe other people are wondering the same? Can they give reassurances about their products going forward?
◧◩
2. mebutn+S7[view] [source] 2023-11-18 08:17:18
>>quickt+U4
I’d recommend trying to build your systems to work across LLMs where you can. Create an interface layer and, for now, maybe use OpenAI and Vertex as a couple of options. Vertex is handy: while not always as good, you may find it works well for some tasks, and it can be a lot cheaper for those.

If you build out this way, then when the next greatest LLM comes out you can plug it into your interface and switch over the tasks it’s best at.
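A minimal sketch of what that interface layer could look like. The provider classes here are illustrative stubs (not real OpenAI or Vertex client code), and the task-routing table is an assumption about how you might split work between them:

```python
# Provider-agnostic interface layer: one abstract interface, one
# implementation per vendor, and a router that picks a provider per task.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its reply."""


class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Real code would call the OpenAI API here; stubbed for illustration.
        return f"[openai] {prompt}"


class VertexProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Real code would call Vertex AI here; stubbed for illustration.
        return f"[vertex] {prompt}"


class Router:
    """Route each task type to whichever provider handles it best/cheapest."""

    def __init__(self, routes: dict[str, LLMProvider], default: LLMProvider):
        self.routes = routes
        self.default = default

    def complete(self, task: str, prompt: str) -> str:
        return self.routes.get(task, self.default).complete(prompt)


router = Router(
    routes={"summarise": VertexProvider()},  # cheaper model for a simple task
    default=OpenAIProvider(),
)
```

When a new model comes along, you add one provider class and update the routing table; the rest of your code never changes.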

◧◩◪
3. quickt+aa[view] [source] 2023-11-18 08:38:40
>>mebutn+S7
The problem is that swapping LLMs can require reworking all your prompts, and you may be relying on features specific to OpenAI. If you avoid those features, you are at a disadvantage, or at least slowing down your work.
◧◩◪◨
4. bongob+xa[view] [source] 2023-11-18 08:42:37
>>quickt+aa
Just ask the LLM to rewrite your prompts for the new model.
◧◩◪◨⬒
5. worlds+Pb[view] [source] 2023-11-18 08:55:02
>>bongob+xa
Does it really have the kind of self-awareness needed to do that successfully? I feel very sceptical.
◧◩◪◨⬒⬓
6. Roark6+Ge[view] [source] 2023-11-18 09:17:48
>>worlds+Pb
I doubt self-awareness has anything to do with it.
◧◩◪◨⬒⬓⬔
7. worlds+9u[view] [source] 2023-11-18 11:27:31
>>Roark6+Ge
What else would you call the ability for it to adapt a task for its own capabilities?
◧◩◪◨⬒⬓⬔⧯
8. mkl+wF[view] [source] 2023-11-18 12:49:24
>>worlds+9u
Language modelling, token prediction. It's not much different from generating code in a particular programming language; given examples, learn the patterns and repeat them. There's no self-awareness or consciousness or understanding or even the concept of capabilities, just predicting text.
◧◩◪◨⬒⬓⬔⧯▣
9. worlds+pne[view] [source] 2023-11-21 22:50:53
>>mkl+wF
Sure but that kind of sounds like it is building a theory of mind of itself.

If its training data includes plenty of prompt-and-response pairs from people interacting with it, then I suppose it isn't that surprising.

That does sound like self-awareness, in the non-magical sense. It is aware of its own behaviour because it has been trained on it.

[go to top]