moonch (OP) | 2023-09-12 21:13:20
I don't understand what you're trying to say?

From what I've read, the 4090 should blow the A100 away if you can fit within 22GB of VRAM, which a 7B model should do comfortably.
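
A rough sketch of the arithmetic behind that claim, assuming fp16/bf16 weights at 2 bytes per parameter (this ignores KV cache and activation overhead, which add more on top):

    # Back-of-envelope VRAM estimate for a 7B-parameter model.
    # Assumption: fp16/bf16 weights at 2 bytes per parameter;
    # KV cache and activations consume extra memory beyond this.
    params = 7e9            # 7B parameters
    bytes_per_param = 2     # fp16/bf16
    weights_gb = params * bytes_per_param / 1e9
    print(f"fp16 weights: ~{weights_gb:.0f} GB")  # ~14 GB, well under 22 GB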

And the latency (along with its variability and availability) on the OpenAI API is terrible because of the load they're getting.
