zlacker

1. permo-+ (OP) 2023-03-22 19:47:04
this isn't a good explanation. these LLMs are essentially statistical models. when they "hallucinate", they're not "imagining" or "dreaming"; they're simply producing a string of tokens that the prompt, combined with the training corpus, makes likely. likely is not the same thing as true.
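the likelihood point can be sketched with a toy next-token sampler. this is a minimal illustration, not how a real LLM is built: the "model" below is a hand-written bigram table with made-up probabilities standing in for what training would estimate from a corpus.

```python
import random

# toy "language model": made-up next-token probabilities keyed on the
# last two tokens (a stand-in for corpus statistics, not real data)
model = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"france": 0.6, "australia": 0.4},
    ("of", "france"): {"is": 1.0},
    ("of", "australia"): {"is": 1.0},
    ("france", "is"): {"paris": 0.9, "lyon": 0.1},
    ("australia", "is"): {"sydney": 0.7, "canberra": 0.3},
}

def generate(prompt, steps, rng):
    """extend the prompt by sampling each next token in proportion
    to its probability under the toy model."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = model.get(tuple(tokens[-2:]))
        if dist is None:
            break
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)
```

under this toy corpus, `generate("the capital", 4, ...)` will sometimes emit "the capital of australia is sydney": fluent and high-probability given the statistics, but false. nothing in the sampler knows or cares about truth, which is all a "hallucination" is.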