jibal (OP) | 2023-11-21 07:35:50
They aren't just-so theories ... this is how LLMs work. We understand exactly how they process information internally, but since their very nature is to extract statistical patterns from the training data, and that training data is massive, we can't anticipate which patterns have been extracted. We just know that whatever patterns are there to be extracted (e.g., users tending to identify issues in someone else's output rather than their own) will be reflected in the output.
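
This isn't how a transformer is actually implemented, but here's a minimal toy sketch of the pattern-extraction point: even a trivial bigram model, "trained" only on co-occurrence counts, reproduces whatever skew its corpus contains. The corpus and its 9:1 skew are made up for illustration.

    import random
    from collections import Counter, defaultdict

    # Hypothetical corpus in which blame lands on "your" code nine
    # times as often as on "my" code, the kind of skew described above.
    corpus = ("the bug is in your code . " * 9 +
              "the bug is in my code . ").split()

    # "Training": extract the statistical patterns (bigram counts).
    transitions = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        transitions[a][b] += 1

    def sample_next(word):
        # Sample the next token in proportion to its training frequency.
        counts = transitions[word]
        return random.choices(list(counts), weights=counts.values())[0]

    # The skew in the training data reappears in the output, roughly 9:1.
    print(Counter(sample_next("in") for _ in range(1000)))

Nobody programmed the bias into the model; it falls out of the counting. Scale the corpus up to the whole internet and you can no longer enumerate the patterns, but they're still in there.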