zlacker

[return to "Ilya Sutskever to leave OpenAI"]
1. ascorb+6C[view] [source] 2024-05-15 05:45:41
>>wavela+(OP)
Jan Leike has said he's leaving too https://twitter.com/janleike/status/1790603862132596961
◧◩
2. DalasN+BC[view] [source] 2024-05-15 05:51:45
>>ascorb+6C
There goes the so-called superalignment team:

Ilya Sutskever
Jan Leike
William Saunders
Leopold Aschenbrenner

All gone

◧◩◪
3. reduce+sF[view] [source] 2024-05-15 06:23:14
>>DalasN+BC
Daniel Kokotajlo too, who said he “Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.”

“I think AGI will probably be here by 2029, and could indeed arrive this year”

We are so fucked

◧◩◪◨
4. Otomot+hG[view] [source] 2024-05-15 06:32:48
>>reduce+sF
I am sorry, but there must be some hidden tech, some completely different approach, for people to talk about AGI like this.

I really, really doubt that transformers will become AGI. Maybe I am wrong, I am no expert in this field, but I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X

edit: maybe the term has changed again. To me, AGI means true understanding, maybe even some kind of consciousness, not just probability... when I explain something, I have understood it. It's not that I have soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.
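
[The "probabilistic function" part is, at least mechanically, an accurate description of what an LLM does at each step: the model assigns a score to every candidate next token and the next word is sampled from the resulting distribution. A minimal Python sketch of just that sampling step; the words and scores below are made up for illustration, and a real LLM would produce the scores with a transformer over a vocabulary of tens of thousands of tokens:

    import math
    import random

    def softmax(logits):
        # turn raw scores into a probability distribution
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # hypothetical candidate next words and model scores
    # for the context "the cat sat on the"
    vocab = ["mat", "roof", "keyboard", "moon"]
    logits = [3.2, 1.1, 0.7, -2.0]

    probs = softmax(logits)
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(next_word)  # "mat", with probability ~0.83

Whether this mechanism can or cannot amount to "truly understanding" is exactly what the disagreement in this thread is about.]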

◧◩◪◨⬒
5. Miralt+SY[view] [source] 2024-05-15 09:53:31
>>Otomot+hG
This paper and other similar work changed my opinion on that quite a bit. It shows that, in order to perform text prediction, LLMs build complex internal models.

>>38893456
