zlacker

[parent] [thread] 2 comments
1. sineno+(OP)[view] [source] 2022-05-24 00:55:35
> The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

Again, I think we should consider "The Human Alignment Problem" more in this context. The transformers in question are large, computationally heavy, and not really prone to "recursive self-improvement".

If ML-based AGI works out in a few years, who gets to enter the prompts?

replies(2): >>voz_+Ie >>NickNa+Tg
2. voz_+Ie[view] [source] 2022-05-24 03:27:26
>>sineno+(OP)
Me.

... ... ...

Obviously "/s", obviously joking, but it's meant to highlight that more than a few parties would answer "me" and truly mean it, often not in a positive way.

3. NickNa+Tg[view] [source] 2022-05-24 03:55:33
>>sineno+(OP)
A DAO.