zlacker

1. skywho (OP) 2024-02-13 22:11:50
This sucks, but it's unlikely to be fixable, given that LLMs don't actually have any comprehension or reasoning capability. Get too far into fine-tuning responses and you're back to "classic" AI problems.