zlacker

[parent] [thread] 15 comments
1. mebutn+(OP)[view] [source] 2023-11-18 08:17:18
I’d recommend trying to build out your systems to work across LLMs where you can. Create an interface layer and, for now, maybe use OpenAI and Vertex as a couple of options. Vertex is handy: while it’s not always as good, you may find it works well for some tasks, and it can be a lot cheaper for those.

If you build out this way, then when the next greatest LLM comes out you can plug it into your interface and switch over the tasks it’s best at.
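The interface layer the parent describes could be sketched roughly like this. Everything here is hypothetical naming (`LLMProvider`, `Router`, the task routes), not any particular library's API; real subclasses would wrap the OpenAI and Vertex SDKs, and the stand-in provider below exists only so the sketch runs without network calls.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Vendor-neutral interface; one concrete subclass per vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoProvider(LLMProvider):
    """Stand-in provider so the sketch runs without any vendor SDK."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class Router:
    """Sends each named task to whichever provider is best (or cheapest) for it."""

    def __init__(self, providers: dict[str, LLMProvider], routes: dict[str, str]):
        self.providers = providers
        self.routes = routes  # task name -> provider name

    def run(self, task: str, prompt: str) -> str:
        return self.providers[self.routes[task]].complete(prompt)
```

Swapping a task to a newer model then means changing one entry in `routes`, not touching call sites.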

replies(3): >>quickt+i2 >>pjmlp+y2 >>ramraj+oe
2. quickt+i2[view] [source] 2023-11-18 08:38:40
>>mebutn+(OP)
The problem is that swapping LLMs can require reworking all your prompts, and you may be relying on features specific to OpenAI. If you avoid those features, you are at a disadvantage, or at least slowing down your work.
replies(3): >>bongob+F2 >>discon+o3 >>rlt+6c
3. pjmlp+y2[view] [source] 2023-11-18 08:41:25
>>mebutn+(OP)
Definitely. Just as with game development, the key is to master how things work, not specific APIs.

AI tools will need a similar plugin-like approach.

replies(1): >>quickt+Sd
4. bongob+F2[view] [source] [discussion] 2023-11-18 08:42:37
>>quickt+i2
Just ask the LLM to rewrite your prompts for the new model.
replies(1): >>worlds+X3
5. discon+o3[view] [source] [discussion] 2023-11-18 08:50:11
>>quickt+i2
I have a hierarchy of templates, where I can automatically swap out parts of the prompt based on which LLM I am using. I also have a set of benchmarking tests to compare relative performance. I treat LLMs as a commodity and keep switching between them to compare performance.
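A minimal sketch of that kind of template hierarchy, assuming a registry where a per-model override falls back to a shared default (the task and model names here are made up for illustration):

```python
DEFAULT = "default"

# task name -> {model name or DEFAULT -> prompt template}
TEMPLATES = {
    "summarize": {
        DEFAULT: "Summarize the following text:\n{text}",
        "model-b": "TL;DR of the text below, in two sentences:\n{text}",
    },
}


def render(task: str, model: str, **fields) -> str:
    """Use the model-specific template if one exists, else the shared default."""
    variants = TEMPLATES[task]
    template = variants.get(model, variants[DEFAULT])
    return template.format(**fields)
```

Switching to a new LLM then means adding an override only for the tasks where the default template underperforms, which is what the benchmark suite would tell you.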
replies(1): >>tin7in+Df
6. worlds+X3[view] [source] [discussion] 2023-11-18 08:55:02
>>bongob+F2
Does it really have that kind of self-awareness, to be able to do that successfully? I feel very sceptical.
replies(2): >>Roark6+O6 >>irthom+jk
7. Roark6+O6[view] [source] [discussion] 2023-11-18 09:17:48
>>worlds+X3
I doubt self-awareness has anything to do with it.
replies(1): >>worlds+hm
8. rlt+6c[view] [source] [discussion] 2023-11-18 10:06:04
>>quickt+i2
Isn’t the expectation that “prompt engineering” will become unnecessary as models continue to improve? Other models may be lagging behind GPT-4, but not by much.
replies(1): >>te_chr+gg
9. quickt+Sd[view] [source] [discussion] 2023-11-18 10:19:12
>>pjmlp+y2
I have a good idea of how transformers work, and I’ve written Python code and trained toy ones, but at the end of the day, nothing I can build right now beats calling OpenAI.
10. ramraj+oe[view] [source] 2023-11-18 10:23:14
>>mebutn+(OP)
That would go about as well as trying to write a universal Android/iOS app, or writing ANSI SQL to work across database platforms. A bad idea in every dimension.
11. tin7in+Df[view] [source] [discussion] 2023-11-18 10:33:06
>>discon+o3
Just curious: are you using something specific for the tests?
12. te_chr+gg[view] [source] [discussion] 2023-11-18 10:37:23
>>rlt+6c
The dream, maybe. You still have to instruct these natural-language agents somehow, and they all have personalities.
13. irthom+jk[view] [source] [discussion] 2023-11-18 11:14:28
>>worlds+X3
Just have it write 10 candidates and bench them against your own.
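The generate-and-bench idea could be sketched as below, assuming you already have some scoring function from your own evaluation suite (the `score` callable and the toy length-based scorer here are placeholders, not a real benchmark):

```python
def best_prompt(candidates: list[str], score) -> str:
    """Return the candidate prompt with the highest benchmark score.

    `score` is whatever evaluation you already run against your own prompt,
    e.g. exact-match accuracy over a small test set.
    """
    return max(candidates, key=score)


# Toy example: prefer the shortest prompt (a stand-in for a real benchmark).
candidates = ["Summarize this:", "Please provide a summary of the following text:"]
print(best_prompt(candidates, score=lambda p: -len(p)))  # -> "Summarize this:"
```

The model-generated candidates are only kept if they actually score at least as well as the hand-written prompt, which sidesteps the self-awareness question entirely.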
14. worlds+hm[view] [source] [discussion] 2023-11-18 11:27:31
>>Roark6+O6
What else would you call the ability for it to adapt a task for its own capabilities?
replies(1): >>mkl+Ex
15. mkl+Ex[view] [source] [discussion] 2023-11-18 12:49:24
>>worlds+hm
Language modelling, token prediction. It's not much different from generating code in a particular programming language; given examples, learn the patterns and repeat them. There's no self-awareness or consciousness or understanding or even the concept of capabilities, just predicting text.
replies(1): >>worlds+xfe
16. worlds+xfe[view] [source] [discussion] 2023-11-21 22:50:53
>>mkl+Ex
Sure, but that kind of sounds like it’s building a theory of mind of itself.

If its training data includes a considerable amount of prompt-and-response logs of people interacting with it, then I suppose it isn’t that surprising.

That does sound like self-awareness, in the non-magical sense: it is aware of its own behaviour because it has been trained on it.
