zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. gordon+LA1[view] [source] 2023-11-18 05:28:57
>>davidb+(OP)
From NYT article [1] and Greg's tweet [2]

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594

◧◩
2. cedws+xC1[view] [source] 2023-11-18 05:44:26
>>gordon+LA1
So they didn't even give Altman a chance to defend himself against the accusation of lying (inconsistent candour, as they put it). Wow.
◧◩◪
3. somena+HF1[view] [source] 2023-11-18 06:09:32
>>cedws+xC1
Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...

◧◩◪◨
4. krzyk+oO1[view] [source] 2023-11-18 07:35:08
>>somena+HF1
So it looks like they did something good.
◧◩◪◨⬒
5. konsch+x42[view] [source] 2023-11-18 10:01:12
>>krzyk+oO1
If you want AI to fail, then yes.
◧◩◪◨⬒⬓
6. killer+On2[view] [source] 2023-11-18 12:27:32
>>konsch+x42
Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!

◧◩◪◨⬒⬓⬔
7. criley+jD2[view] [source] 2023-11-18 14:07:28
>>killer+On2
All AI and all humans hallucinate, and an AI that doesn't hallucinate would functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".
◧◩◪◨⬒⬓⬔⧯
8. killer+TS2[view] [source] 2023-11-18 15:39:39
>>criley+jD2
GPT is better than an average human at coding. GPT is worse than an average human at recognizing the bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, the AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess The Pile.
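
To be concrete about "trained to guess The Pile": next-token prediction only rewards plausible continuations. A minimal sketch of that objective, assuming PyTorch and a toy stand-in model (the vocab size, shapes, and the model itself are illustrative, not anyone's actual training code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-in for a causal LM: embedding + linear head.
    # A real GPT is a transformer, but the objective below is the same.
    vocab = 50257
    model = nn.Sequential(nn.Embedding(vocab, 64), nn.Linear(64, vocab))

    # Toy batch of token ids standing in for a web-scrape corpus like The Pile.
    tokens = torch.randint(0, vocab, (8, 129))       # (batch, seq_len + 1)
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # objective: guess the next token

    logits = model(inputs)                           # (batch, seq_len, vocab)
    loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    loss.backward()  # gradient favours plausible continuations, not factual accuracy

Nothing in that loss cares whether a continuation is true, only whether it matches the corpus.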

Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on a high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?

◧◩◪◨⬒⬓⬔⧯▣
9. fennec+oO9[view] [source] 2023-11-20 12:59:15
>>killer+TS2
Because people are into tech? That's pretty much the whole point of this site?

Just imagine if we all only used proven products, never trying out cool experimental or incomplete stuff.

[go to top]