>>HDThor+(OP)
I think the point was that, on a purely technical level, LLMs as currently deployed can’t do anything on their own. They only continue a prompt when one is given. An LLM couldn’t “decide” to hack the NSA and publish the data tomorrow because it determined that this would help humanity. The only thing it can do is try to get people to do something when they read its responses.
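To make the point concrete, here’s a minimal sketch of that framing: an LLM call is just a function from prompt text to completion text, with no behavior between calls. The `generate()` function below is a hypothetical stub standing in for a real model, not any actual API.

```python
# Sketch: an LLM as a pure prompt -> completion function.
# generate() is a hypothetical stand-in; a real model would
# sample tokens, but the shape of the interface is the same.

def generate(prompt: str) -> str:
    """Return a continuation of the given prompt (stubbed)."""
    return prompt + " [...continuation...]"

# The model only produces output when this call happens; between
# calls it holds no state and takes no actions. Any real-world
# effect depends on what the caller does with the returned string.
reply = generate("Should I publish this data?")
print(type(reply).__name__)  # the only artifact is a string
```

Everything beyond producing that string (running commands, sending requests, publishing anything) has to be wired up by whoever calls the model.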