Ilya
Jan Leike
William Saunders
Leopold Aschenbrenner
All gone
“I think AGI will probably be here by 2029, and could indeed arrive this year”
Kokotajlo too.
We are so fucked
I really, really doubt that transformers will become AGI. Maybe I am wrong; I am no expert in this field. But I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X
edit: maybe the term has changed again. AGI to me means true understanding, maybe even some kind of consciousness, not just probability... when I explain something, I have understood it. It's not that I have soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.
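To be concrete about what I mean by "probabilistic function": a language model essentially samples the next token from a learned distribution conditioned on the context. A toy sketch of that idea (the vocabulary and probabilities below are made up purely for illustration; a real model computes the distribution with a trained network over the whole context):

    import random

    # Toy next-token sampler: given a context, draw the next word from a
    # probability distribution. The table here is invented for illustration;
    # an actual LLM computes these probabilities with a neural network.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    }

    def sample_next(context):
        probs = next_word_probs[context]
        words = list(probs)
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next(("the", "cat")))  # e.g. "sat"

Whether that kind of sampling can ever amount to "understanding" is exactly the question I'm asking about.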
1. Alan Turing on why we should never ever perform a Turing test: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
2. Marvin Minsky on the “Frame Problem” that led to one or two previous AI winters, and what an intuitive algorithm might look like: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
Can you cite specifically what in the paper you're basing that on? I skimmed it as well as the Wikipedia summary but I didn't see anywhere that Turing said that the imitation game should not be played.