Claude, Gemini, Copilot, and ChatGPT are non-starters for privacy-minded folks.
So far, local experiments with agents have left me underwhelmed. I've tried everything on ollama that can run on my dedicated Ryzen 8700G with 96GB DDR5. I'm ready to blow ~10-15k USD on a better rig if I see value in it, but extrapolating from current results, I believe it'll be another CPU generation before I can expect a net productivity gain from properly, securely run local models once you factor in the setup and maintenance overhead.
Most providers promise not to train on inputs if you use them via an API (though they typically retain data for a limited window for other reasons).
I'm not sure the privacy concern is greater than using pretty much any cloud provider for anything. Storing your data on AWS: Privacy concern?
The single biggest productivity boost you can get in LLM world is believing them when they make those promises to you!
I'm having a hard time interpreting what you mean here. It sounds like something straight out of a cult.
I'd be looking for something that can run offline and receive system updates from an internal mirror on the airgapped network. Needing to tie an Apple ID to the machine and allow it internet access for OS updates is a hard sell. Am I wrong in thinking that keeping an airgapped macOS installation up to date would require additional infrastructure, along with some enterprise contract with Apple?
1. Believe them. Use their products and benefit from them.
2. Disbelieve them. Refuse to use their products. Miss out on benefiting from them.
I pick option 1. I think that's the better option if you want to benefit from what this technology can do for you.
Personally I think "these people are lying about everything" is a stronger indication of a cult mindset. Not everyone is your enemy.
Fool me twice, you can't get fooled again.
Devstral (a Mistral Small fine-tune for agentic coding) with cline has been above expectations for me.
Those policies are worth the paper they're printed on.
I also note that if you're a USian, you've almost certainly been required to surrender your right to air grievances in court and submit to mandatory binding arbitration for any dispute that would otherwise have gone before a judge.
I find this lack of trust quite baffling. Companies like money! They like having customers.
And those who pay attention notice that the fines and penalties for big companies that screw the little guys are often next-to-nothing compared with that big company's revenue. In other words, these punishments are often "cost of doing business" expenses rather than actual deterrents.
So, yeah. Add to all that a healthy dose of "How would anyone but the customers with the deepest pockets ever get enough money to prove such a contract violation in court?", and you end up with a profound lack of trust.
This space is fiercely competitive. If OpenAI turns out to be training on private data in breach of contract, their customers can switch to Anthropic.