>>staran+(OP)
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.
>>transc+32
Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibility at work, where reading about fictional violence is equated with violence? The more people raise safety concerns in the abstract, the more I ignore it.
>>epups+Gc
They invented a whole theory of how, if we had something called "AGI", it would kill everyone, and now they think LLMs can kill everyone because LLMs are being called "AGI", even though they don't work anything like their theory assumed.
This isn't about political correctness. It's far less reasonable than that.
>>astran+ij
Based on the downvotes I am getting and the links posted in the other comment, I think you are absolutely right. People are acting as if ChatGPT is AGI, or very close to it, and that we therefore have to solve all these catastrophic scenarios now.