> The @huggingface GPT detector works very well on ChatGPT-created text. I ran 5 student essays and 5 ChatGPT essays for the same prompt through it, and it was correct every time with >99.9% confidence.
How about adding a %human/%GPT statistic to posts and comments?
```r
> binom.test(5, 5, 0.5)

	Exact binomial test

data:  5 and 5
number of successes = 5, number of trials = 5, p-value = 0.0625
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
 0.4781762 1.0000000
```
In other words, that sample is too small to reject the possibility that the model is only 50% accurate, much less to establish that it's 99.9% accurate.

See the app: https://huggingface.co/openai-detector/ - it gives a response as the % chance the text is genuine or chatbot-written.
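The arithmetic behind that R output is simple enough to check by hand. For 5 successes in 5 trials under H0: p = 0.5, the two-sided exact p-value counts both all-correct and all-wrong outcomes, and the Clopper-Pearson lower bound is the p where seeing 5/5 has probability exactly alpha/2. A minimal Python sketch of that math (no detector involved, just the binomial test):

```python
# Reproduce R's binom.test(5, 5, 0.5) by hand for k = n = 5 successes.
k, n, p0, alpha = 5, 5, 0.5, 0.05

# Two-sided exact p-value: under p0 = 0.5 the outcomes at least as
# extreme as 5/5 are 0/5 and 5/5, each with probability 0.5**5.
p_value = 2 * p0 ** n                    # 0.0625, as R reports

# Clopper-Pearson 95% CI when every trial succeeded: the lower bound
# solves p**n = alpha/2; the upper bound is 1.
ci_low = (alpha / 2) ** (1 / n)          # ~0.478, as R reports
ci_high = 1.0

print(f"p-value = {p_value}")
print(f"95% CI  = ({ci_low:.3f}, {ci_high})")
```

Since the interval stretches from roughly 0.478 to 1.0, five-for-five is consistent with anything from a coin flip to a perfect classifier.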
Maybe I was just unlucky with the comment I tried (I took the longest one I saw in my history), but I don't think I would have liked seeing it either removed or sneered at for being flagged as "AI generated"...
The detector also thinks this comment is fake. It seems influenced by the flavor of the mistakes: idiomatic ones, spelling, grammar. Non-native speakers will easily get flagged. It does not look spot-on for now. I checked all of these claims by live-typing into the demo: 0.09% real.