zlacker

[parent] [thread] 2 comments
1. ImHere+(OP)[view] [source] 2025-07-07 15:32:15
LLMs are also not understood. I mean, we built and trained them, but some of their abilities are still surprising to researchers. We have yet to map these machines.
replies(1): >>seadan+3ac
2. seadan+3ac[view] [source] 2025-07-12 00:23:38
>>ImHere+(OP)
I do partially agree. Though, it is at least tractable to understand why an LLM gave a specific output - just perhaps not practical. Understanding how a human arrives at a certain decision (by, say, simply looking at brain waves) OTOH is not even tractable as of yet.
[go to top]