zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. transc+32 2023-11-22 06:15:37
>>staran+(OP)
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.

Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126

2. epups+Gc 2023-11-22 07:24:30
>>transc+32
Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibility at work, where reading about fictional violence is equated with violence? The more people raise safety concerns in the abstract, the more I ignore them.
3. dragon+Qg 2023-11-22 07:55:38
>>epups+Gc
> Why can't these safety advocates just say what they are afraid of?

They have. At length. E.g.,

https://ai100.stanford.edu/gathering-strength-gathering-stor...

https://arxiv.org/pdf/2307.03718.pdf

https://eber.uek.krakow.pl/index.php/eber/article/view/2113

https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...

https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...

These are just a handful of examples from the vast literature published in this area.

4. epups+5B 2023-11-22 10:46:18
>>dragon+Qg
I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of these risks concern you, or the safety advocates, with regard to a product like ChatGPT?
5. FartyM+uO1 2023-11-22 17:31:22
>>epups+5B
It's not only about ChatGPT. OpenAI will probably make other things in the future.