zlacker

2 comments
1. akamak+(OP) 2023-11-18 03:15:26
[deleted]
replies(2): >>swatco+M >>aleph_+C1
2. swatco+M 2023-11-18 03:20:40
>>akamak+(OP)
Alternate read: He says that to The Street because it's the board's vision, yet he focuses more of the company on commercializing ChatGPT than he led the board to believe. He was repeating their message to the press while playing his own game with their company.

If you truly believed that OpenAI had an ethical duty to pioneer AGI in order to ensure its safety, and felt that Altman was lying to the board and jeopardizing that mission by sending the company chasing market opportunities and other venture-capital games, you might fire him to get it back on track.

3. aleph_+C1 2023-11-18 03:25:29
>>akamak+(OP)
I don't think I entirely get your point:

"We can still push on large language models quite a lot, and we will do that": this sounds like continuing working on scaling LLMs.

"We need another breakthrough. [...] pushing hard with language models won't result in AGI.": this sounds like Sam Altman wants to do additional research into different directions, which in my opinion does make sense.

So, altogether, your quotes suggest that Sam Altman wants to continue scaling LLMs in the short and medium term while, in parallel, researching different approaches that might lead to another step towards AGI. I cannot see how this plan could infuriate Ilya Sutskever.
