1. int_19+ (OP) 2023-11-18 04:44:18
They really aren't nearly as good as GPT-4, though. The best hobbyist stuff we have right now is 70B LLaMA finetunes, which you can run on a high-end MacBook, but I'd say they're only marginally better than GPT-3.5. As soon as they get a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.
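
For anyone curious what "run on a high-end MacBook" looks like in practice, here's a rough sketch using llama-cpp-python with a quantized GGUF file. The model path is just a placeholder for whichever finetune you download, and the settings are illustrative, not tuned:

    # Minimal sketch: running a quantized LLaMA finetune locally with
    # llama-cpp-python. The model path below is a placeholder; swap in
    # a GGUF quantization of whatever 70B finetune you want to try.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-2-70b-finetune.Q4_K_M.gguf",  # placeholder
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to Metal/GPU if available
    )

    out = llm(
        "Q: Briefly explain retrieval-augmented generation. A:",
        max_tokens=256,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])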
replies(1): >>Paul-C+xi
2. Paul-C+xi 2023-11-18 07:19:08
>>int_19+(OP)
No, by themselves they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close.
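
To sketch what I mean by RAG: embed your documents, retrieve the ones most relevant to the query, and stuff them into the prompt before generation. Something like this, using sentence-transformers for embeddings with toy documents (not a production setup, and the prompt template is just one way to do it):

    # Minimal RAG sketch: embed documents, retrieve the top matches for
    # a query by cosine similarity, and prepend them to the prompt for
    # whatever local model you're running.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "GPT-4 was released by OpenAI in March 2023.",
        "LLaMA models can be run locally with llama.cpp.",
        "RAG injects retrieved context into the prompt at inference time.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(query, k=2):
        """Return the k documents most similar to the query."""
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # dot product == cosine sim on unit vectors
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    query = "How do I run LLaMA on my own machine?"
    context = "\n".join(retrieve(query))
    prompt = f"Use the context to answer.\n\nContext:\n{context}\n\nQ: {query}\nA:"
    # feed `prompt` to your local model (e.g. the llama.cpp setup above)
    print(prompt)

The retrieval step is the whole trick: the model doesn't have to recall facts from its weights, it just has to read them out of the context, which is where smaller open models close much of the gap.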