zlacker

[parent] [thread] 0 comments
1. ruthie+(OP)[view] [source] 2023-12-20 03:17:48
I think they’re drawing the right conclusion:

LLMs are still in their infancy and easily misled with the right prompting, and they remain far too prone to hallucination to be applicable in the way some people are trying to deploy them.
