zlacker

1. bongob+(OP) 2023-11-18 08:42:37
Just ask the LLM to rewrite your prompts for the new model.
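Something like this, as a rough sketch (untested; assumes the openai Python client, and the model name and system instruction are just placeholders):

    # Rough sketch, untested: ask the new model to rewrite an old prompt
    # for itself. Model name and instruction are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def rewrite_prompt(old_prompt: str, model: str = "gpt-4") -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's prompt so it works well on "
                            "this model. Keep the intent and return only "
                            "the rewritten prompt."},
                {"role": "user", "content": old_prompt},
            ],
        )
        return resp.choices[0].message.content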
replies(1): >>worlds+i1
2. worlds+i1 2023-11-18 08:55:02
>>bongob+(OP)
Does it really have the kind of self-awareness needed to do that successfully? I'm very sceptical.
replies(2): >>Roark6+94 >>irthom+Eh
3. Roark6+94 2023-11-18 09:17:48
>>worlds+i1
I doubt self-awareness has anything to do with it.
replies(1): >>worlds+Cj
4. irthom+Eh 2023-11-18 11:14:28
>>worlds+i1
Just have it write 10 variants and benchmark them against your own.
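Roughly, as an untested sketch (rewrite_prompt and score here stand in for your LLM call and whatever metric fits your task):

    # Untested sketch: generate n candidate rewrites and keep whichever
    # scores best on a small eval set you trust. rewrite_prompt and score
    # are placeholders passed in by the caller.
    def pick_best_prompt(old_prompt, eval_cases, rewrite_prompt, score, n=10):
        # keep the hand-written prompt in the running as the baseline
        candidates = [old_prompt]
        candidates += [rewrite_prompt(old_prompt) for _ in range(n)]
        # highest mean score over the eval cases wins
        return max(candidates,
                   key=lambda p: sum(score(p, c) for c in eval_cases) / len(eval_cases))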
5. worlds+Cj 2023-11-18 11:27:31
>>Roark6+94
What else would you call its ability to adapt a task to its own capabilities?
replies(1): >>mkl+Zu
6. mkl+Zu 2023-11-18 12:49:24
>>worlds+Cj
Language modelling, token prediction. It's not much different from generating code in a particular programming language; given examples, learn the patterns and repeat them. There's no self-awareness or consciousness or understanding or even the concept of capabilities, just predicting text.
replies(1): >>worlds+Sce
7. worlds+Sce 2023-11-21 22:50:53
>>mkl+Zu
Sure, but that kind of sounds like it's building a theory of mind of itself.

If its training data includes a considerable number of prompt-and-response pairs from people interacting with it, then I suppose it isn't that surprising.

That does sound like self-awareness, in the non-magical sense: it is aware of its own behaviour because it has been trained on it.
