zlacker

[parent] [thread] 11 comments
1. quickt+(OP)[view] [source] 2023-11-18 08:38:40
The problem is that swapping LLMs can require reworking all your prompts, and you may be relying on OpenAI-specific features. If you avoid those features you're at a disadvantage, or at least slowing down your work.
replies(3): >>bongob+n >>discon+61 >>rlt+O9
2. bongob+n[view] [source] 2023-11-18 08:42:37
>>quickt+(OP)
Just ask the LLM to rewrite your prompts for the new model.
replies(1): >>worlds+F1
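
A minimal sketch of comment 2's suggestion, assuming the OpenAI Python client (v1+); the source prompt, the meta-prompt wording, and the model name are all illustrative, not anything from the thread:

    # Sketch: use one LLM to port a prompt to another model.
    # The prompts and model name below are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    old_prompt = "Summarise the support ticket in three bullet points: {ticket}"

    meta_prompt = (
        "Rewrite the prompt below so it works well on a different chat model. "
        "Keep the {ticket} placeholder and the intent identical; only adjust "
        "wording and formatting instructions.\n\n" + old_prompt
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": meta_prompt}],
    )
    print(resp.choices[0].message.content)  # the rewritten prompt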
3. discon+61[view] [source] 2023-11-18 08:50:11
>>quickt+(OP)
I have a hierarchy of templates, so I can automatically swap out parts of the prompt based on which LLM I'm using, plus a set of benchmarking tests to compare relative performance (sketched below). I treat LLMs as a commodity and keep switching between them.
replies(1): >>tin7in+ld
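
A minimal sketch of the setup comment 3 describes; the model names, template fragments, call_llm helper, and test cases are all hypothetical stand-ins, not the commenter's actual code:

    # Sketch: per-model template fragments plus a tiny benchmark harness.
    # Model names, fragments, call_llm, and test cases are hypothetical.

    BASE = "{preamble}\n\nTask: {task}\n\n{format_hint}"

    FRAGMENTS = {
        "gpt-4": {
            "preamble": "You are a precise assistant.",
            "format_hint": "Answer in one short paragraph.",
        },
        "claude-2": {
            "preamble": "You are a careful, concise assistant.",
            "format_hint": "Reply with a single short paragraph, no preamble.",
        },
    }

    def build_prompt(model: str, task: str) -> str:
        """Assemble the prompt from the fragments registered for this model."""
        return BASE.format(task=task, **FRAGMENTS[model])

    def call_llm(model: str, prompt: str) -> str:
        """Placeholder for whatever client reaches each provider."""
        raise NotImplementedError

    TESTS = [
        ("What is 2 + 2?", "4"),
        ("What is the capital of France?", "Paris"),
    ]

    def benchmark(model: str) -> float:
        """Fraction of tests whose expected answer appears in the output."""
        hits = sum(
            expected.lower() in call_llm(model, build_prompt(model, task)).lower()
            for task, expected in TESTS
        )
        return hits / len(TESTS)

Swapping providers then means registering a new fragment set and re-running benchmark() before committing to the switch.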
4. worlds+F1[view] [source] [discussion] 2023-11-18 08:55:02
>>bongob+n
Does it really have the kind of self-awareness needed to do that successfully? I'm very sceptical.
replies(2): >>Roark6+w4 >>irthom+1i
5. Roark6+w4[view] [source] [discussion] 2023-11-18 09:17:48
>>worlds+F1
I doubt self-awareness has anything to do with it.
replies(1): >>worlds+Zj
6. rlt+O9[view] [source] 2023-11-18 10:06:04
>>quickt+(OP)
Isn’t the expectation that “prompt engineering” is going to become unnecessary as models continue to improve? Other models may be lagging behind GPT-4, but not by much.
replies(1): >>te_chr+Yd
7. tin7in+ld[view] [source] [discussion] 2023-11-18 10:33:06
>>discon+61
Just curious, are you using something specific for the tests?
8. te_chr+Yd[view] [source] [discussion] 2023-11-18 10:37:23
>>rlt+O9
The dream, maybe. You still have to instruct these natural-language agents somehow, and they all have personalities.
9. irthom+1i[view] [source] [discussion] 2023-11-18 11:14:28
>>worlds+F1
Just have it write 10 variants and benchmark them against your own.
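
A rough sketch of that loop; call_llm and score are hypothetical stand-ins for your client and your benchmark, and the rewrite prompt is illustrative:

    # Sketch: have the model draft N candidate prompts, keep the best scorer.
    # call_llm and score are hypothetical stand-ins, as above.

    def call_llm(model: str, prompt: str) -> str: ...

    def score(model: str, prompt: str) -> float: ...  # e.g. pass rate on tests

    def best_prompt(model: str, my_prompt: str, n: int = 10) -> str:
        candidates = [my_prompt]  # your own prompt is the baseline
        for _ in range(n):
            candidates.append(call_llm(
                model,
                "Rewrite this prompt to work better for you, preserving "
                "its intent:\n\n" + my_prompt,
            ))
        return max(candidates, key=lambda p: score(model, p))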
10. worlds+Zj[view] [source] [discussion] 2023-11-18 11:27:31
>>Roark6+w4
What else would you call its ability to adapt a task to its own capabilities?
replies(1): >>mkl+mv
11. mkl+mv[view] [source] [discussion] 2023-11-18 12:49:24
>>worlds+Zj
Language modelling, token prediction. It's not much different from generating code in a particular programming language; given examples, learn the patterns and repeat them. There's no self-awareness or consciousness or understanding or even the concept of capabilities, just predicting text.
replies(1): >>worlds+fde
12. worlds+fde[view] [source] [discussion] 2023-11-21 22:50:53
>>mkl+mv
Sure, but that kind of sounds like it is building a theory of mind about itself.

If its training data includes a lot of prompt-and-response pairs from people interacting with it, then I suppose that isn't so surprising.

That does sound like self-awareness, in the non-magical sense: it is aware of its own behaviour because it has been trained on it.
