zlacker

[parent] [thread] 2 comments
1. Paul-C+(OP)[view] [source] 2023-11-18 00:13:57
I don't think so. LLMs are absolutely not a scam. There are LLMs out there that I can and do run on my laptop that are nearly as good as GPT-4. Replacing GPT-4 with another LLM is not the hardest thing in the world. I predict that, besides Microsoft, this won't be felt in the broader tech sector.
replies(1): >>int_19+tF
2. int_19+tF[view] [source] 2023-11-18 04:44:18
>>Paul-C+(OP)
They really aren't nearly as good as GPT-4, though. The best hobbyist stuff that we have right now is 70B LLaMA finetunes, which you can run on a high-end MacBook, but I would say it's only marginally better than GPT-3.5. As soon as it gets a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.
replies(1): >>Paul-C+0Y
3. Paul-C+0Y[view] [source] [discussion] 2023-11-18 07:19:08
>>int_19+tF
No, by themselves they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's just a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close.
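To make the RAG point concrete, here's a minimal sketch of the idea: retrieve the documents most relevant to a query and prepend them to the prompt, so a smaller local model answers from supplied context rather than from its weights alone. The retriever below is a toy bag-of-words cosine similarity; a real setup would use an embedding model and a vector store, and the final prompt would go to a local LLM (e.g. via a llama.cpp binding). All names and the prompt format here are illustrative assumptions, not any specific library's API.

```python
# Toy RAG pipeline: embed -> retrieve top-k -> build grounded prompt.
# The "embedding" is a bag-of-words Counter purely for illustration.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase term counts, punctuation stripped."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from it."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "LLaMA 70B finetunes can run on a high-end MacBook.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "GPT-4 is a proprietary model served over an API.",
]
prompt = build_prompt("What does RAG add to the prompt?", docs)
# `prompt` would then be sent to the local model's completion endpoint.
```

The win is that the model only has to read and summarize the retrieved context, which is a much easier task than recalling the facts unaided; that's how a weaker open model can close some of the gap on knowledge-heavy queries.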