zlacker

[parent] [thread] 4 comments
1. hospit+(OP)[view] [source] 2023-12-20 21:04:50
Some things to note about gpt4:

>Sometimes it will spit out terrible horrid answers. I believe this might be due to time of the day/too many users. They limit tokens.

>Sometimes it will lie because it has alignment

>Sometimes I feel like it tests things on me

So, yes you are right, gpt4 is overall better, but I find myself using local models because I stopped trusting gpt4.

replies(2): >>moffka+W1 >>crooke+42
2. moffka+W1[view] [source] 2023-12-20 21:15:21
>>hospit+(OP)
How are local models better in terms of trust? GPT 4 is the only model I've seen actually tuned to say no when it doesn't have the information being asked for. Though I do agree it used to run better earlier this year.

The best open source has to offer is Mixtral, which will confidently make up a biography of a person it's never heard of before or write a script with nonexistent libraries.

replies(2): >>mattke+Z6 >>hospit+hM1
3. crooke+42[view] [source] 2023-12-20 21:15:52
>>hospit+(OP)
Don't forget that ChatGPT 4 also has seasonal depression [1].

[1]: https://twitter.com/RobLynch99/status/1734278713762549970

(Though with that said, the seasonal issue might be common to any LLM with training data annotated by time of year.)

4. mattke+Z6[view] [source] [discussion] 2023-12-20 21:46:38
>>moffka+W1
I once asked Llama whether it’d heard of me. It came back with such a startlingly detailed and convincing biography of someone almost but not quite entirely unlike me that I began to wonder if there was some kind of Sliding Doors alternate reality thing going on.

Some of the things it said I’d done were genuinely good ideas, and I might actually go and do them at some point.

ChatGPT just said no.

5. hospit+hM1[view] [source] [discussion] 2023-12-21 14:00:59
>>moffka+W1
To be clear, the comparison was originally with GPT3 and ChatGPT3. ChatGPT3 would lie about anti-vaxx books never existing. GPT3 would answer facts.