zlacker

[parent] [thread] 7 comments
1. axg11+(OP)[view] [source] 2022-05-24 00:52:06
I firmly believe that ~20-40% of the machine learning community will say that all ML models are dumb statistical interpolators all the way until a few years after we achieve AGI. Roughly the same group will also claim that human intelligence is special magic that cannot be recreated using current technology.

I think it’s to everyone’s benefit if we start planning for a world where a significant portion of the experts are stubbornly wrong about AGI. As a technology, generally intelligent ML has the potential to change so many aspects of our world. The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

replies(3): >>sineno+k >>woeiru+za >>Daishi+Fd
2. sineno+k[view] [source] 2022-05-24 00:55:35
>>axg11+(OP)
> The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

Again, I think we should consider "The Human Alignment Problem" more in this context. The transformers in question are large, heavy, and not really prone to "recursive self-improvement".

If the ML-AGI works out in a few years, who gets to enter the prompts?

replies(2): >>voz_+2f >>NickNa+dh
3. woeiru+za[view] [source] 2022-05-24 02:33:48
>>axg11+(OP)
You should be much more concerned about the prospect of nuclear war right now than the sudden emergence of an AGI.
replies(2): >>random+uo >>Poigna+EP
4. Daishi+Fd[view] [source] 2022-05-24 03:08:02
>>axg11+(OP)
These ML models aren't capable of generating novel thinking. They allow knowledge to be extracted from an existing network, but they cannot propose new ideas, identify how to validate them, gather data, and reach conclusions.
5. voz_+2f[view] [source] [discussion] 2022-05-24 03:27:26
>>sineno+k
Me.

... ... ...

Obviously "/s", obviously joking, but it's meant to highlight that there are a few parties who would all answer "me" and truly mean it, often not in a positive way.

6. NickNa+dh[view] [source] [discussion] 2022-05-24 03:55:33
>>sineno+k
A DAO.
7. random+uo[view] [source] [discussion] 2022-05-24 05:16:54
>>woeiru+za
100 times this. There’s very little sign of AGI, but nuclear weapons exist, can already destroy the planet, are designed to do so, have nearly done so in the past, and we’re at the most dangerous point in decades.
8. Poigna+EP[view] [source] [discussion] 2022-05-24 09:43:56
>>woeiru+za
Is it really that simple?

We can worry about two things at once. We can be especially worried that at some point (maybe decades away, potentially only years away), we'll have both nuclear weapons and rampant AGI.
