
[return to "Memory and new controls for ChatGPT"]
1. markab+df[view] [source] 2024-02-13 19:25:39
>>Josely+(OP)
I've found myself using local models more and more instead of ChatGPT; it was pretty trivial to set up Ollama + Ollama-WebUI, which is shockingly good.
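For anyone who wants to skip the WebUI entirely: here's a minimal sketch of hitting the local Ollama server directly, assuming a stock install (daemon listening on localhost:11434) and a model already pulled with "ollama pull mistral" (the model choice is just illustrative):

    import requests

    # Ask the local Ollama daemon for a single completion.
    # stream=False returns one JSON object instead of a token stream.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": "Write a Python function that reverses a string.",
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])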

I'm so tired of arguing with ChatGPT (or what was Bard) just to get simple things done. SOLAR-10.7B or Mistral works just fine for my use cases, and I've wired up a direct connection to Fireworks/OpenRouter/Together for the occasions when I need more than what will run on my local hardware (Mixtral MoE, 70B code/chat models).
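The "direct connection" part is less work than it sounds, since OpenRouter, Fireworks, and Together all speak the OpenAI API (and newer Ollama builds expose an OpenAI-compatible /v1 endpoint too), so one client covers everything. Rough sketch of local-first with a hosted fallback; the base URLs are real, but the model names and env var are just what I happen to use:

    import os
    from openai import OpenAI

    # (base_url, model, api_key) tried in order: local first, hosted second.
    BACKENDS = [
        ("http://localhost:11434/v1", "mistral", "ollama"),  # local; key is ignored
        ("https://openrouter.ai/api/v1",
         "mistralai/mixtral-8x7b-instruct",
         os.environ.get("OPENROUTER_API_KEY", "")),
    ]

    def complete(prompt):
        last_err = None
        for base_url, model, key in BACKENDS:
            try:
                client = OpenAI(base_url=base_url, api_key=key)
                out = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                )
                return out.choices[0].message.content
            except Exception as err:  # fall through to the next backend
                last_err = err
        raise RuntimeError(f"all backends failed: {last_err}")

    print(complete("Summarize what a Makefile does in two sentences."))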

2. chrisa+nU[view] [source] 2024-02-13 23:16:23
>>markab+df
Same here. I've found that I currently only want to use an LLM to solve relatively "dumb" problems (boilerplate generation, rubber-ducking, etc.), and the locally-hosted stuff works great for that.
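For the rubber-ducking case specifically, a multi-turn chat loop against the local daemon is all it takes. Toy sketch, same assumptions as above (stock Ollama on localhost:11434, mistral pulled):

    import requests

    history = []  # accumulated chat messages, so the model keeps context

    def ask(text):
        history.append({"role": "user", "content": text})
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={"model": "mistral", "messages": history, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        reply = resp.json()["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply

    print(ask("I think my caching bug is in how I build the key. Ask me questions."))
    print(ask("Oh. The key doesn't include the user id."))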

Also, I've found that GPT has become much less useful as it has gotten "safer." So often I'd ask "How do I do X?" only to be told "You shouldn't do X." That's a frustrating waste of time, so I cancelled my GPT-4 subscription and went fully self-hosted.
