"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...
What do you mean by this? It looks like you're just throwing out a diss at the doomer position (most doomers don't think near-future LLMs are concerning).
We have ample empirical grounds to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around the use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, and so on. Those are demonstrably real, but also obviously very different in kind from a generalized AI doomsday.