For a request of about 1000 input tokens (and roughly 1000 output tokens), to my surprise, GPT-3.5 Turbo was about 100x cheaper than running Llama 2 myself (rough math sketched below).
Llama 7B wasn't up to the task, FYI; it produced very poor translations.
I believe OpenAI priced GPT-3.5 aggressively cheap in order to make it a no-brainer to rely on them rather than on other vendors (or even open-source models).
I'm curious whether others have gotten different results.
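For concreteness, here's roughly how I'm framing the math. Every number below is a placeholder assumption (the GPT-3.5 Turbo prices, the GPU rental rate, and the self-hosted throughput are illustrative guesses, not quotes I'm standing behind), so plug in your own figures:

    # Back-of-envelope comparison; every number here is a placeholder assumption.
    GPT35_INPUT_PER_1K = 0.0015    # assumed API price, $ per 1K input tokens
    GPT35_OUTPUT_PER_1K = 0.0020   # assumed API price, $ per 1K output tokens
    GPU_HOURLY_RATE = 1.00         # assumed cloud GPU rental, $ per hour
    SELF_HOSTED_TOK_PER_SEC = 20   # assumed end-to-end Llama 2 throughput

    def api_cost(input_tokens, output_tokens):
        """Cost of one GPT-3.5 Turbo call at the assumed per-1K prices."""
        return (input_tokens / 1000) * GPT35_INPUT_PER_1K + (output_tokens / 1000) * GPT35_OUTPUT_PER_1K

    def self_hosted_cost(total_tokens):
        """Cost of generating the same tokens on a rented GPU, assuming you
        only pay for the seconds the GPU is actually busy."""
        seconds = total_tokens / SELF_HOSTED_TOK_PER_SEC
        return (seconds / 3600) * GPU_HOURLY_RATE

    api = api_cost(1000, 1000)        # the 1000-in / 1000-out case above
    hosted = self_hosted_cost(2000)
    print(f"API: ${api:.4f}  self-hosted: ${hosted:.4f}  ratio: {hosted / api:.0f}x")

With these particular placeholders the gap is well under 100x; in practice the ratio is dominated by throughput and by how busy you can keep the GPU, since a rented GPU bills by the hour even while idle.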
So the comparison would be the cost of renting a cloud GPU to run Llama vs querying ChatGPT.
Yes, and it doesn't even come close. Llama2-70b can run inference at 300+ tokens/s on a single V100 instance at ~$0.50/hr. Anyone who can should be switching away from OpenAI right now.
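Taking those figures at face value (300 tokens/s and $0.50/hr are the numbers quoted above, not something I've benchmarked), the implied price works out to a small fraction of a cent per 1K tokens:

    # Implied cost per 1K tokens from the quoted throughput and hourly rate.
    tokens_per_sec = 300    # quoted Llama2-70b throughput (not independently verified)
    hourly_rate = 0.50      # quoted V100 instance price, $ per hour

    tokens_per_hour = tokens_per_sec * 3600               # 1,080,000 tokens per hour
    cost_per_1k = hourly_rate / (tokens_per_hour / 1000)  # ~$0.00046 per 1K tokens
    print(f"~${cost_per_1k:.5f} per 1K tokens at full utilization")

That figure only holds if you can keep the GPU saturated; at low utilization the effective per-token price rises accordingly.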