zlacker

[return to "Fine-tune your own Llama 2 to replace GPT-3.5/4"]
1. ronyfa+wk[view] [source] 2023-09-12 18:29:55
>>kcorbi+(OP)
For translation jobs, I've experimented with Llama 2 70B (running on Replicate) vs. GPT-3.5.

For about 1,000 input tokens (and roughly 1,000 resulting output tokens), to my surprise, GPT-3.5 Turbo was about 100x cheaper than Llama 2.

Llama 2 7B wasn't up to the task, FYI; it produced very poor translations.

I believe OpenAI priced GPT-3.5 aggressively cheap to make it a no-brainer to rely on them rather than on other vendors (or even open-source models).

I'm curious whether others have gotten different results.

◧◩
2. Muffin+vs[view] [source] 2023-09-12 18:50:41
>>ronyfa+wk
I thought Llama was open source/free and you could run it yourself?
◧◩◪
3. loudma+yv[view] [source] 2023-09-12 19:00:08
>>Muffin+vs
You can run the smaller Llama variants on consumer-grade hardware, but people typically rent cloud GPUs to run the larger ones. It is possible to run even the largest variants on a beefy workstation or gaming rig, but throughput on consumer hardware usually makes that impractical.

So the comparison would be the cost of renting a cloud GPU to run Llama vs querying ChatGPT.

◧◩◪◨
4. ramesh+FE[view] [source] 2023-09-12 19:30:14
>>loudma+yv
>So the comparison would be the cost of renting a cloud GPU to run Llama vs querying ChatGPT.

Yes, and it doesn't even come close. Llama 2 70B can run inference at 300+ tokens/s on a single V100 instance at ~$0.50/hr. Anyone who can should be switching away from OpenAI right now.
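Those two figures imply a per-token cost you can sanity-check in a couple of lines (a back-of-envelope sketch; the 300 tokens/s and $0.50/hr numbers are the ones quoted above, not independent measurements):

```python
# Back-of-envelope: cost per million output tokens at the quoted figures
# (300 tokens/s sustained on a V100 instance renting at ~$0.50/hr).
tokens_per_second = 300
gpu_dollars_per_hour = 0.50

tokens_per_hour = tokens_per_second * 3600  # 1,080,000 tokens/hr
dollars_per_million_tokens = gpu_dollars_per_hour / tokens_per_hour * 1_000_000
print(f"${dollars_per_million_tokens:.2f} per 1M tokens")  # ≈ $0.46
```

Note this assumes the GPU is kept saturated; idle time between requests raises the effective per-token cost.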

◧◩◪◨⬒
5. thewat+YS[view] [source] 2023-09-12 20:16:10
>>ramesh+FE
What's the best way to use Llama 2 70B without existing infrastructure for orchestrating it?
◧◩◪◨⬒⬓
6. mjirv+rg1[view] [source] 2023-09-12 21:50:45
>>thewat+YS
I stumbled upon OpenRouter[0] a few days ago. It's the easiest I've seen by far (if you want SaaS rather than hosting it yourself).

[0] https://openrouter.ai
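For anyone curious what that looks like in practice, here's a minimal sketch of hitting OpenRouter's OpenAI-compatible chat endpoint with just the stdlib. The endpoint path and the model slug (`meta-llama/llama-2-70b-chat`) are assumptions based on OpenRouter's docs at the time; check openrouter.ai for current values.

```python
# Minimal OpenRouter client sketch (stdlib only, no SDK).
# Assumed: OpenAI-compatible endpoint and the "meta-llama/llama-2-70b-chat"
# model slug; verify both against OpenRouter's current documentation.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(prompt: str, model: str = "meta-llama/llama-2-70b-chat") -> dict:
    """OpenAI-style chat-completion payload understood by OpenRouter."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def complete(prompt: str) -> str:
    """POST one chat completion and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

You'd set `OPENROUTER_API_KEY` in the environment and call `complete("Translate to French: Hello, world.")`; swapping the model slug is all it takes to compare providers.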
