zlacker

[return to "Ilya Sutskever to leave OpenAI"]
1. ascorb+6C[view] [source] 2024-05-15 05:45:41
>>wavela+(OP)
Jan Leike has said he's leaving too https://twitter.com/janleike/status/1790603862132596961
2. DalasN+BC[view] [source] 2024-05-15 05:51:45
>>ascorb+6C
There goes the so-called superalignment team:

- Ilya

- Jan Leike

- William Saunders

- Leopold Aschenbrenner

All gone

3. reduce+sF[view] [source] 2024-05-15 06:23:14
>>DalasN+BC
Daniel Kokotajlo too: “Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI”

“I think AGI will probably be here by 2029, and could indeed arrive this year”

We are so fucked

4. Otomot+hG[view] [source] 2024-05-15 06:32:48
>>reduce+sF
I'm sorry, but there must be some hidden tech, or some completely different sense in which people are talking about AGI.

I really, really doubt that transformers will become AGI. Maybe I'm wrong, I'm no expert in this field, but I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X

edit: maybe the term has changed again. AGI to me means true understanding, maybe even some kind of consciousness, not just probability... when I explain something, I have understood it. It's not that I have soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.
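
To be concrete about what I mean by a "probabilistic function": here is a toy next-word guesser, just a bigram frequency table sampled at random. Real models obviously use neural networks over subword tokens rather than a lookup table, but the generate-by-guessing loop is the same idea.

    import random

    # Tiny corpus; count which word follows which.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        following.setdefault(prev, []).append(nxt)

    def next_word(word):
        candidates = following.get(word)
        if not candidates:
            return None
        # random.choice over the observed followers samples them
        # in proportion to how often they appeared.
        return random.choice(candidates)

    word, output = "the", ["the"]
    for _ in range(5):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))  # e.g. "the cat sat on the mat"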

5. _nalpl+ZG[view] [source] 2024-05-15 06:40:04
>>Otomot+hG
I think what's missing:

- A way to fact-check the text, for example via the Wolfram math engine or by giving it internet access (rough sketch after this list)

- Something like an instinct to fight for its own survival (seems dangerous)

- Some more subsystems: look at the brain: there's the amygdala, the cerebellum, the hippocampus, and so on, and there must be some evolutionary need for these parts
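
For the first point, a rough sketch of what I mean by post-hoc fact-checking: plain Python arithmetic stands in here for an external checker like the Wolfram engine or a web search (their real APIs aren't shown), and the regex only catches toy "a op b = c" claims.

    import re

    # Scan a model's output for simple arithmetic claims and verify them
    # with an external check (here: Python itself, standing in for a
    # real math engine or search backend).
    def check_arithmetic_claims(text):
        results = []
        for a, op, b, c in re.findall(r"(\d+)\s*([+*-])\s*(\d+)\s*=\s*(\d+)", text):
            a, b, c = int(a), int(b), int(c)
            actual = {"+": a + b, "-": a - b, "*": a * b}[op]
            results.append((f"{a} {op} {b} = {c}", actual == c))
        return results

    model_output = "Paris is the capital of France, and 17 * 23 = 431."
    for claim, ok in check_arithmetic_claims(model_output):
        print(claim, "->", "checks out" if ok else "wrong")  # 17 * 23 = 431 -> wrong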
