zlacker

[parent] [thread] 7 comments
1. ben_w+(OP)[view] [source] 2023-11-18 09:19:16
It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is that it keeps explicitly telling you it's an AI.

replies(3): >>rezona+a1 >>NoOn3+K1 >>peigno+zn
2. rezona+a1[view] [source] 2023-11-18 09:28:55
>>ben_w+(OP)
This isn't the same thing. This is a commanded recital of a lack of capability, not an admission that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.
3. NoOn3+K1[view] [source] 2023-11-18 09:34:19
>>ben_w+(OP)
I just noticed that when I ask really difficult technical questions that do have an exact answer, it often answers plausibly but incorrectly instead of saying "I don't know". But over time it becomes smarter, and there are fewer and fewer such questions...
replies(2): >>ben_w+32 >>davegu+3g2
4. ben_w+32[view] [source] [discussion] 2023-11-18 09:37:17
>>NoOn3+K1
Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.
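In API terms, a custom instruction is roughly a standing system message that precedes every chat. A minimal sketch, assuming the openai Python package (model name and prompt wording are illustrative, not a recipe):

    import openai  # pip install openai; reads OPENAI_API_KEY from the environment

    # The "system" message plays the role of a custom instruction:
    # it applies before the user's prompt on every turn.
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "If you are not certain of an answer, say 'I don't know' "
                        "rather than guessing."},
            {"role": "user",
             "content": "What does POSIX say about readdir_r?"},
        ],
    )
    print(response.choices[0].message.content)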
replies(1): >>NoOn3+dc
5. NoOn3+dc[view] [source] [discussion] 2023-11-18 11:03:56
>>ben_w+32
It's not a problem for me. It's good that I can detect ChatGPT by this tell.
6. peigno+zn[view] [source] 2023-11-18 12:25:06
>>ben_w+(OP)
I read an article where they ran a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.
replies(1): >>ben_w+Dp
7. ben_w+Dp[view] [source] [discussion] 2023-11-18 12:40:03
>>peigno+zn
I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.
8. davegu+3g2[view] [source] [discussion] 2023-11-18 23:24:03
>>NoOn3+K1
It doesn't become smarter except when new models are released. It's an inference engine.