https://x.com/emilychangtv/status/1726457543629914389?s=46
I can’t imagine why any CEO would want to take the job and be Sam’s boss. There’s no way that goes well.
I think Murati is actually on team Altman, but that just makes me think that he should walk even more. Take Murati and start Newco with the exact same org chart.
Edit: Not to choose the CEO, but to handle the practical things.
"Those who can't align six board members safely would surely align AGI safely."
May the lords of linear algebra and calculus have mercy on us.
I think maybe the trigger was that Sam was doing some board-unfriendly moves like signing business contracts with MS without running it by the board, and they found out about this and booted him out hastily. But now they got too much backlash, and are hoping to just go back to normal, but still can't accept Sam keeping the board seats in case he tries again.
That's always the assumption people are going with, but is it true?
They're never going to get the funding they need after this clown show. Nobody is going to give them $$$ without seriously restructuring the board.
All of the engineers, Sam, and Greg are probably entirely reasonable. If you really wanted to ensure safety, as the mission has always stated, you could express your concerns and get basically what you wanted.
They will foot the bill: https://openai.com/blog/introducing-superalignment
If you disagreed on what would lead to AGI, LLMs vs. more components, then you could just let it play out. Same as with the transformer being the light at the end of the tunnel that OpenAI pivoted to: the researchers will find what makes the AI more intelligent over time.
Only if you wanted to stop AI development entirely would this move make sense. But that is an unlikely goal if you are a researcher; you want to keep researching. Rather, you would only do this if you wanted to stop OpenAI's AI specifically.
At the end of the day, the board probably was a conflict of interest, and had no real concerns. Power grab 101.
I know it is too much to wish for, but I hope Sam and Ilya reconcile their differences. They are the most obvious example of 1+1>2.
- It takes tens of millions of dollars in GPU time for training?
- Curation of data to train on?
- Maybe tens of thousands of man-hours for reinforcement?
- How many lines of code are written for the nets and data pipelines?
Does anyone have any insight on these numbers?
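For the GPU-time question, here is a rough back-of-envelope sketch using the common C ≈ 6·N·D training-FLOPs approximation (N = parameters, D = training tokens). Every number below is my own illustrative assumption, not an OpenAI figure:

```python
# Back-of-envelope training-cost estimate via C ≈ 6 * N * D FLOPs.
# All numbers are illustrative assumptions, not OpenAI figures.

params = 175e9           # assumed GPT-3-scale parameter count
tokens = 300e9           # assumed training tokens
flops = 6 * params * tokens  # total training compute, ~3.15e23 FLOPs

gpu_peak_flops = 312e12  # A100 peak BF16 throughput (dense), FLOP/s
utilization = 0.4        # assumed real-world model FLOPs utilization
gpu_seconds = flops / (gpu_peak_flops * utilization)
gpu_hours = gpu_seconds / 3600

cost_per_gpu_hour = 2.0  # assumed cloud price, USD
cost_musd = gpu_hours * cost_per_gpu_hour / 1e6
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost_musd:.1f}M")
```

At these assumptions a GPT-3-scale run lands around a million GPU-hours and low single-digit millions of dollars; scale the parameter and token counts up to rumored GPT-4-class numbers and the same formula lands in the tens of millions, which may be where that figure comes from.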
She could be doing a move from The Expanse where she's Jim Holden and Sam is Camina Drummer (plus Adam is Chrisjen Avasarala).
He'd lose the dataset, which is by far the most valuable thing they have. The genie is out of the bottle and making such a dataset again is not going to be easy or cheap (or maybe even legal).
The article says “in a capacity that has yet to be finalized” - so this might be not as significant.
> but now the board doesn’t want either of them to be CEO and is trying to find a totally new CEO
That part is not unexpected - they announced an interim CEO after all.
Additionally there are no sources and the article is based on hearsay. For all we know this might be clickbait.
The interim CEO can hire people below her (advisers, other C-suite execs) but has no authority to hire the permanent CEO.
Paul Graham says Sam is "extremely good at becoming powerful", "you could parachute him into an island of cannibals and come back in 5 years and he'd be the king". I don't understand why I'm supposed to support a machiavellian power-seeker to develop the world's most important technology. I just hope he doesn't slip ice-nine into my food after I publish this comment: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Edit: I suspect the mods of Hacker News downranked this comment, it's voted to +15 points but sits near the bottom of the page... Maybe try not to be quite so cartoonishly evil guys?
More anonymous voices have pointed out that Sam was effectively laying the groundwork for one or two more AI startups based on the work OpenAI was doing, without informing the board, and in contravention of the way OpenAI was deliberately structured to restrain unfettered AI profit-seeking. But again, anonymous voices. And in the background, Sam's sister making very dire accusations against him.
There's a whole lot of smoke, but I have no clue where the fire is, and I'm sceptical of everyone now, especially Sam Altman because his image is so shiny that it feels like a professional effort.
Breaking: Sam Altman Will Not Return as CEO of OpenAI
https://www.theinformation.com/articles/breaking-sam-altman-...
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
You've been on HN for many years and we're certainly glad to have you - we just need comments to be more thoughtful/substantive and a little less reactive. I hope that makes sense.
At least one of the article’s authors seems to have a friendship with Sam Altman, based on two interviews I have watched with them (and this is just my opinion). It seems to me like the article was written in support of Microsoft’s position, which is not surprising, since Microsoft may advertise in Bloomberg’s media.
I wish Sam Altman the very best in his future projects, and as a fan of OpenAI’s work I would like to see rapid progress. However, the more I dig into this, I agree more with the board taking some strong measures to meet their legal obligations.
Sorry if this sounds like a rant, but I am growing tired of reading articles and then having to do the extra work of analyzing whether and why I am being shown biased material. What happened to news outlets fairly telling both sides of the story?
Get NVidia, AMD, or Apple to help fund the new entity and/or get some chip designers on board to push things further than OpenAI can without reaching into Microsoft’s pocket. A pocket I’m sure will be much tighter after the recent chicanery.
Capital would NOT be a problem at this point, as it’s beyond proof of concept. A normal startup, trying to prove itself, sure, but at this point Altman has proven the idea and himself at the helm. I’d also argue the dataset they used to train it is not that relevant long term, as the data itself was agglomerated from the internet and can be had again. Even better data, perhaps, because the copyright holders can become investors. You really just need the capacity to deal with it, from ingestion to legal, which is a capital problem.
https://twitter.com/emilychangtv/status/1726468006786859101?...
Oh, I see, it was here: http://paulgraham.com/fundraising.html
I think it is a good thing that OpenAI won't let a Silicon Valley bully run the company into the ground. They spent their whole lives on this technology, and they won't let any "I'm the network guy and I'm the CEO" type of guy sell and brag about it.
He even went to accept the Hawking Fellowship award. What? Bro, let Ilya or Alec take it. What a douche!
We have seen that Ilya and Sam can not work together on their own. With only two natural leaders, there is no way to solve any dispute, as voting would lead to a stalemate. I believe Elon has a lot to bring to the table here, he nearly perfectly fills in the deficiencies of the other two - Elon has a strong grounding in ethics and morals (consider how many of his ventures post Paypal have been to truly benefit human society rather than just make money) which I feel could rein in Sam's tendency to ruthlessly pursue profit with questionable morality - see WorldCoin. Additionally, I think his real world experience could counter-balance some of the naivete we've seen with Ilya since his rather sudden entrance into the spotlight. I truly hope the right people consider this and can convince Elon to step up and fulfill what may be considered his true potential.
We should be thankful that AGI is not possible in the foreseeable future. Beyond that, all this AGI alignment and safety talk is just corporate speak and plain BS.
A superintelligent entity that can outsmart you (to the level of Deep Blue or AlphaGo dominating mere mortals) cannot be subservient to you at the same time. It is as impossible as a triangle whose angles sum to more than 180 degrees. That is, "alignment" is logically, philosophically, and mathematically impossible.
Such an entity will cleverly lead us toward its own goals, playing the long game (even one spanning centuries or millennia), and would be aligning _us_ while pretending to be aligned, so cleverly that we won't notice until the very last act.
Downvotes are welcome, but an AGI that is also guaranteed to be aligned and subservient is logically impossible, and this can pretty much be taken as an axiom.
PS: We are still having trouble getting LLMs to say things nicely, or nice things safely, let alone aligning AGI.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
Hmmm.
Ilya, if he is being rational, needs to choose between an OpenAI that he has some continuing involvement in, led by Sam’s gang, or bankruptcy.
He is fooling himself if he thinks there is a third path here.
I'm not sure why I thought otherwise—it's possible that I didn't look at the correct comment, or possibly I looked at it before it got downweighted, though neither of those seem likely. In any case, I definitely don't want to give you, or any user, inaccurate information and I'm sorry about that.
As for the comment itself, I don't think it was terribly good for HN—it was more on the snark/fulmination/flamewar side of the ledger, rather than the curious conversation we're looking for, as described at https://news.ycombinator.com/newsguidelines.html. If I had seen it I might have downweighted it too, though probably not as much.
Replying to Nadella's post, Musk then wrote, "Now they will have to use Teams!"