1. dragon+(OP) 2023-11-22 07:55:38
> Why can't these safety advocates just say what they are afraid of?

They have. At length. E.g.,

https://ai100.stanford.edu/gathering-strength-gathering-stor...

https://arxiv.org/pdf/2307.03718.pdf

https://eber.uek.krakow.pl/index.php/eber/article/view/2113

https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...

https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...

These are just a handful of examples from the vast literature published in this area.

replies(1): >>epups+fk
2. epups+fk 2023-11-22 10:46:18
>>dragon+(OP)
I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of these risks concern you, or the safety advocates, with regard to a product like ChatGPT?
replies(1): >>FartyM+Ex1
3. FartyM+Ex1 2023-11-22 17:31:22
>>epups+fk
It's not only about ChatGPT. OpenAI will probably make other things in the future.