zlacker

1. haiku2+(OP) 2025-06-03 00:35:02
> How does that work exactly? Do you have a link?

https://ollama.com lets you run models on your own hardware and serve them over a network. Then you point your editor at that server, e.g. https://zed.dev/docs/ai/configuration#ollama
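
For a quick sanity check that the server is actually reachable before wiring up your editor, you can hit the HTTP API directly. A minimal sketch in Python, assuming Ollama's default port (11434) and a model named "llama3" that you've already pulled with "ollama pull llama3" (swap in whatever model you run):

    # Query a local Ollama server over its HTTP API.
    # Assumes the default port 11434 and a pulled model named "llama3".
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Say hello.",
            "stream": False,  # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])

If that prints a completion, pointing the editor at http://localhost:11434 should work the same way.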

2. Diablo+3M 2025-06-03 09:09:19
>>haiku2+(OP)
Don't use Ollama; use llama.cpp instead.