Sam Altman spoke at an APEC panel on behalf of OpenAI literally yesterday: https://twitter.com/LondonBreed/status/1725318771454456208
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Wow
Wow. Anyone have any insight into what happened?
"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]
Strangest thing to see in a company's PR when they're thriving!
The All In podcast had some words about this a few months ago, though they spoke in generalities.
Ouch -- were there any signs this was coming?
The prodigy Altman is booted after creating potentially the most successful company of all time, and the CTO, who had no prior ML/AI experience, becomes CEO. Wow.
Put me in, coach.
As someone deeply entrenched in the realms of AI development and ethical considerations, boasting a robust leadership background, I stand poised to lead OpenAI into its next phase of innovation and ethical advancement. My tenure navigating the intersection of AI research, business acumen, and ethical frameworks provides a unique foundation. Having spearheaded AI initiatives that upheld ethical standards while fostering groundbreaking technological advancements, I bring a proven track record of synthesizing innovation with responsible AI practices. My commitment to leveraging AI for the betterment of society aligns seamlessly with OpenAI's ethos, ensuring a continued pursuit of groundbreaking advancements in AI while maintaining a steadfast commitment to ethical, transparent, and socially responsible practices.
Not certain, but IMHO the last paragraph almost recognises that OpenAI has become something self-contradictory:
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
He's not perfect, but behind the scenes he's a genuine and upstanding person. I've met lots of wealthy smart people, and he's the only exception. He was the only person I trusted in this situation, and I'm genuinely nervous that he's no longer running OpenAI.
It was always a bit strange that he never held shares nor took a salary from OpenAI, but then what about his vision (and childhood dream) to achieve AGI and all?
Definitely smells of a scandal - why else would they need to get him out so quick?
The subheading of the article, minus unnecessary words, would be a big improvement:
Sam Altman departs OpenAI; interim replacement is CTO Mira Murati
IMHO, saying he hasn't been candid is extremely harsh in terms of corporate PR speak.
>As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
Edit: dang is right, sorry y’all
Much prefer him to the deepmind people who seem almost psychopathic by comparison.
My guess is that either they’re financially super hosed. Or one group wants to build skynet and one doesn’t.
A scandal would probably be something along the lines of either “we love him and wish him the best” (hidden) or “he doesn’t represent the values of our org and we love XYZ” (embraced).
I find it $EMOTION that the board is also not candid in its communications on why they fired him.
Whoa, rarely are these announcements so transparent that they directly say something like this. I’m guessing there was some project or direction Altman wanted to pursue, but he was not being upfront with the board about it and they disagreed with that direction? Or it could just be something very scandalous, who knows.
It's not just a "hey, we don't really agree on x or y so let's part ways". It's more "hey, this guy did something that could get us in jail if we don't cut ties immediately".
https://www.themarysue.com/annie-altmans-abuse-allegations-a...
After four and a half intense and wonderful years as CEO of Groupon, I've decided that I'd like to spend more time with my family. Just kidding – I was fired today. If you're wondering why ... you haven't been paying attention.
https://www.theguardian.com/technology/blog/2013/mar/01/grou...
It's down 12% after the news so far.
If you're referring to some other form of moderation that you think is bad or wrong, please supply links so that readers can make their minds up for themselves.
Jobs got fired because Apple was on the brink of bankruptcy all the time and was selling nothing to no one.
Jobs wasn't the CEO of Apple, Sculley was. This is a much more impactful move.
On top of that, OpenAI is literally exploding in popularity and sales; that's not the moment to cut ties with your CEO.
Also, Sam Altman has an insanely better and heavier CV today than Jobs had in 1985: former president of YC, often called the "boss of Silicon Valley".
You don't fire a man like Sam Altman easily; people like that are hard to come by in the first place. Moreover, he's a powerful person you don't want to have against you for no good reason, especially when the winds are blowing in the right direction.
It has to be some scandal, otherwise this is too sudden, and out of nowhere to a guy that led OpenAI in this direction, with success, for years.
He is a major investor in a few high-profile startups, like Humane's AI Pin, so either he just wants new challenges, or there is some form of scandal (let's all hope not), or there are issues about not going full steam ahead on profitability.
But who knows, maybe there's a connection.
Not being candid in any kind of investment situation with reps and warranties is a really big deal....
> "[...] If someone — correction, if generally a white, cis man — presents himself with enough confidence, then venture capitalists, media [...]"
I stopped reading right there. This kind of race-baiting adds zero context to the story (which may or may not be true).
His legendary work on first harvesting Reddit and then going on a European tour to lobby against others doing the same will be taught in business schools for years.
Hope he lands a nice job next. How about head of QA at Tesla?
/s
[0]: https://x.com/phuckfilosophy/status/1710371830043939122
Lying on P&L, stock sale agreements, or turning down an acquisition offer under difficult circumstances seems likely.
My first thought is: C'mon. The company has just invested time to integrate with OpenAI. Just do it. Just announce that 200%+ price increase on everything with a scapegoat interim CEO. Or raise it even more so it hurts more, for profit's sake, so you can later dial it back a little and be the good guys.
I invented a saying to describe this common occurrence: "Sometimes the cover-up is worse than the crime."
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
Murati's selection as interim CEO is a surprise and might be an attempt to distance the company from whatever the board is claiming Altman lied about.
It's a corollary to my theory that anybody that maintains close ties with their family and lives with them is a wholesome person.
I know OpenAI in recent years forgot it's a non-profit with particular aims, but:
> The majority of the board is independent, and the independent directors do not hold equity in OpenAI.
But.. what are the responsibilities of the board that may be hindered? I studied https://openai.com/our-structure
One tantalising statement in there is that an AGI-level system is not bound by the licensing agreements that a sub-AGI system would be (ostensibly to Microsoft).
This phase shift places pressure on management not to declare that an AGI-level threshold has been reached. But have they?
Of course, it could be an ordinary everyday scandal but given how well they are doing, I'd imagine censure/sanctions would be how that is handled.
Joking aside, this feels massive. Both that it happened so suddenly and that the announcement doesn't mince words. The fact that the CTO is now CEO makes me think it's probably not a lie about their tech. It wouldn't make sense to say "we've been lying about our capabilities" and then appoint the current CTO as CEO.
This makes me think it's either financial or a scandal around Sam himself.
I can't wait to hear more
I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)
I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:
https://news.ycombinator.com/item?id=38309611&p=2
https://news.ycombinator.com/item?id=38309611&p=3
https://news.ycombinator.com/item?id=38309611&p=4
https://news.ycombinator.com/item?id=38309611&p=5
Sorry! Performance improvements are inching closer...
The wording of this statement is the kind of thing a board says when the company has done something deeply illegal that they will all face personal jail time for, and so they need to immediately deny all knowledge of the offense and fire the people who did have knowledge of it.
https://twitter.com/phuckfilosophy/status/163570439893983232...
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
I’m very curious which.
EDIT: Brockman was voted out as the Chairman of the Board.
"And lied to us about it."
1. Altman commingled some funds of Worldcoin and ChatGPT, most probably out of carelessness.
2. OpenAI is a golden goose, so the board was more than happy to kick out the leader, making more space for themselves.
3. The harsh wording is an attempt to muddy the water, because an inevitable competitor from Altman is coming.
On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?
Sam recently admitted that for OpenAI to achieve AGI they "need another breakthrough," so my guess is it's this lie that cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.
Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...
And it could be for any reason, even purely ethical like, “we don’t want to license this technology to better sell products to tweens”.
Flagged HN thread: >>37785072
When I googled his name I saw the same cached text show up.
EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.
Sam doesn't seem to have been ousted by the usual corporate politics. The message definitely does not sound like the generic corpspeak for these kinds of events, such as "looking for new opportunities" or "spending more time with their family", which is usually sent out with the consensus of all parties.
Agreed
> This makes me think it's either financial or a scandal around Sam himself.
I can't imagine it being about fake financials. This isn't Microsoft's first time doing due diligence on an acquisition. That is both technical and financial due diligence.
And clearly they didn't buy the company because it was super profitable, but for the tech.
They better communicate who they are right quick. I liked Sam’s product decisions.
"OpenAI announces leadership transition"
This from 2021? >>37785072
Bad if true, but highly unlikely that it is.
He won't be there in 6 months; this is just a crumb of continuity.
The company was dying.
OpenAI is not.
Also, if anything, you're probably underestimating the impact of OpenAI, or the entrepreneurial career of Altman.
Also, you probably don't know this, but the Apple 1 and 2 were designed by Wozniak, not Jobs; Jobs hated them. He had no such impact, nor the CV you think he had, in 1985; you're sugarcoating it with second-phase Jobs.
He was also removed from the board in this process.
YC Summer 2024: MoreCore is hiring scaling engineers to speed up HN by recycling old Athlons
It appears there are people digging into his dark side.
Alternative possibility: the family's a cult.
My 2 cents: he lied about profitability. They must be spending massive money on operations, and they need to cut costs to deliver an attractive business model for their service; coming from a shiny startup star boss, that would have had to be a straight f.u.
I have no good way of assessing what the likelihood is that her claims are true.
But as a general point, you can be both a "good person" and still do bad things. Or you can be a good person in some areas, and a not-so-good person (or even horrible person) in some other areas. People are complex.
Of course it's entirely possible that Altman is just a really good person, but I wouldn't be quick to make assumptions.
EDIT: the fun for the conspiracy theory is over — it’s a cached tweet that is actually a Dean Phillips tweet Sam replied to.
Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous. Maybe something like that happened?
Or, if I wanted to speculate to the extremely negative; what if the training and operating costs ballooned to such a degree, that the deal with Microsoft was an attempt to plug the cash hole without having to go to the board requesting an enormous loan? Because the fact that Copilot (edit: previously known as Bing Chat and Bing Image Creator) is free and ChatGPT (edit: and DALL-E 3) are not should be a red flag...
Sure, we knew Apple was on the verge because they were a public company with vetted financials. However, no one knows OpenAI's financial situation. We just know 1) growth was meteoric, 2) prices were dropped significantly when alternatives were available, and 3) they were almost always fundraising. Selling $1.00 of value for $0.50 can also lead to a meteoric rise.
I'm not saying you're wrong. I just don't know how you got such conviction.
And possibly related: the pause of ChatGPT Plus sign-ups due to capacity problems (which is all Azure, AFAIK).
You mean, exactly like there have been, from Sam Altman's sister?
This board member has been making dubious statements in public - gross lies about what openai and ai can do - misleading millions of people. He led a campaign of promoting the company’s product centred on FOMO, FUD, spam and other dark patterns.
Good riddance.
> I'm OK with having failed at this part of the journey. If Groupon was Battletoads, it would be like I made it all the way to the Terra Tubes without dying on my first ever play through.
It doesn't look like he had a hint about this:
> I am super excited. I can't imagine anything more exciting to work on.
Plenty of people maintain extremely close connections with their families while engaging in activies that are terrible for the world around them. Organized criminals. Terrorists. Business magnates. Political families. Corrupt police officers. Plenty of these groups are made out of tight-knit families.
It's common, dare I say human nature, to prioritize the needs of your family. That is honorable and important, but being a Good person requires caring about strangers too.
I certainly don't know him, but I see more reasons not to trust him than to trust him.
Nor controversial enough to have such an impact on recent and future business.
There are no such allegations regarding Andy Rubin.
> Mr. Rubin had been having an extramarital relationship, [and] said he coerced her into performing oral sex in a hotel room in 2013
https://manifold.markets/Ernie/what-will-sam-altman-be-doing...
And this tag contains all the markets about him https://manifold.markets/browse?topic=sam-altman
Will he end up at Grok? Why was he fired? etc.
It would be like if AlphaGo could only win if the Go board was half as big. Not real fraud, but shows a clear willingness to cut corners and stretch ethics.
Some years go by, and AGI progresses to assault man
Atop a pile of paper clips he screams "It's not my fault, man!"
But Eliezer's long since dead, and cannot hear Sam Altman.
--
Scott Alexander
It forever tuned me in to the ethos of Silicon Valley. And I have tried paying back where I can.
There is no reliable information about sales. It is likely a very big secret.
I have zero knowledge of the internals of OpenAI - just thinking out loud about what could have spurred such a statement.
[0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO
This actually seems the most probable reason for this given the circumstances and phrasing.
If you google "Sam Altman" his twitter bio in the search results reads:
[removed]
I don't know the guy but nothing can really be assumed about this.
OpenAI, even prior to AGI is seeing 100x+ returns. The ousting almost certainly is not a matter of performance or professional capability. It's a matter of some personal scandal or fundamental, philosophical difference on the fate of AI in the world and safety.
Of sorts.
ChatGPT is actually a farm of underpaid humans, located somewhere in southeast Asia.
So my bet is either they lied about how they are using customer data, covered up a massive data breach or something similar to that. The only thing that's a bit hard to figure there is how specific this is to Altman. A big scandal would be leaking out I would think and more people would be getting fired.
A person I've known all my life, whom I would have sworn by and trusted with anything, was found to have violated extremely young children, among other things.
Stop pretending you know people, people don't even know themselves.
"Good" is too blurry of a description, and I don't know Sam, but one thing I've learned (the hard way) is that you don't truly know someone unless you've had conflicts of interest with them and found mutually satisfying resolutions to them. If all you've had is mutually beneficial interactions, then of course everyone's going to be nice - it's in everyone's interests. You need to see how they act on nontrivial conflicts (either handling present ones, or mitigating/averting future ones) to really know if someone is a genuinely good person or not.
While this could hypothetically happen within an hour of meeting someone, it's more likely to take years or even decades... or might never even happen.
Honestly have no idea, but I'm sure a shift of control could cause this.
I think it could be transferring of OpenAI’s assets to other entities.
It is scandalous for sure
This concept and phrasing was common at least as early as the Watergate Scandal in 1974.
Just something he retweeted long ago
How do you find the next CEO? Are there good people to pick from internally? Altman was a public face for the company. Replacing him will be difficult.
I don't want to build a business with their stuff and then find OpenAI shifts direction.
I don't know about Skynet, since that already happened 26 years ago [1], but I imagine the NSA, the military, and other government agencies approached the company.
[1] https://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day
Maybe he's not involved in this, but it's a "you should have known" type of thing? Really no idea...
If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.
Edit: I didn't even know he molested his sister when I wrote my post: https://twitter.com/phuckfilosophy/status/163570439893983232...
That the board is unhappy with his for profit and moat building charted path.
That this is about his sister.
That he pissed off microsoft.
That he did something illegal, financially.
That he has been lying about costs/profit.
That he lied about copyrighted training data.
I will add: maybe he's not aggressive enough in pursuit of profit.
To me, this sounds like Altman did something probably illegal to try and generate more profit, and the board wasn't willing to go along with it.
He confirmed it verbally as well in his May 2023 hearing in Congress https://twitter.com/thesamparr/status/1658554712151433219?la...
PS: I used to live in a van on the street near the PA HQ. Someone there had a tiny trailer in the parking lot, but I couldn't tell if they were living in it or just storing it.
He is probably in shock.
So my bet is either they lied about how they are using customer data, covered up a massive data breach or something similar to that.
Your use of "crazy abuse allegations" is strange to me as well. I hardly see any of her allegations as being "crazy".
Here's a collection of things she's said about the abuse.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
My first comment wasn't really about them not being profitable, it was more of a question about how close to bankruptcy they are. Again though, you're right that MSFT probably did their DD, so that's unlikely
Ilya Sutskever did an ominous and weird YouTube video for The Guardian recently about the dangers of AI. Maybe it has something to do with it?
It was just posted but was filmed on November 1st.
As far as whether this might be the cause, one possible scenario: the board hired a law firm to investigate, Sam made statements that were contradicted by credible evidence, and that was the fireable event. Brockman could have helped cover this up. Again, not saying that this is what happened but it's plausible.
BTW Rubin's $90M payout a) caused a shitstorm at Google b) was determined in part by David Drummond, later fired in part due to sexual misconduct. I would not use this as a representative example, especially since Google now has a policy against such payouts: https://www.cbsnews.com/news/andy-rubin-google-settlement-se...
> Many critics have called Worldcoin's business—of scanning eyeballs in exchange for crypto—dystopian and some have compared it to bribery.
https://time.com/6300522/worldcoin-sam-altman/
> market makers control 95% of the total circulating supply at launch, leading to an initial market imbalance.
https://beincrypto.com/worldcoin-wld-privacy-risk/
> Worldcoin’s use of biometric data, which is unusual in crypto, raises the stakes for regulators. Multiple agencies expressed safety concerns amid reports of the sale of Worldcoin digital identities, known as World IDs, on virtual black markets, the ability to create and profit off of fake IDs, as well as the theft of credentials for operators who sign up new users.
https://www.bloomberg.com/news/newsletters/2023-08-23/worldc...
Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?
I wouldn't be too sure about that, actually. DALLE took a pretty hard hit because of Stable Diffusion, and the GPT API is so cheap that they're probably running it at a loss. Also, most users are going to be using the free ChatGPT web-client, so that's also a major loss.
and
"no longer has confidence"
points to something more serious than underestimating costs.
Given the sudden shift in billing terms that is quite possible.
I'd assume that running a model that only needs to deal with a single programming language (the Copilot plugin knows what kind of code base it is working on) is _a lot_ cheaper than running the "full" ChatGPT 4.
A few things that could lead to the company throwing shade:
1. Real prospects of OpenAI progress have been undersold, and Altman and cofounders sought to buy time by slow-rolling the board.
2. Real profitability is under/overestimated.
3. The board was not happy with the "doom and gloom" narrative to world leaders.
4. World leaders asked for business opportunities and the board was not fully aware of bridges built or of certain opportunities being explored.
5. None of the above and something mundane.
"not significantly candid in projections for profitability"
"not significantly candid in calculating operation cost increases"
"not significantly candid in how much subscribers are actually using ChatGPT"
etc.
https://twitter.com/phuckfilosophy/status/163570439893983232...
The Pentagon calls up Sam Altman and offers a very lucrative contract for an AI to oversee a fleet of networked drones that can also function semi-autonomously. Sam Altman does not tell the board.
Reality might, of course, be very different.
Make your own conclusions.
This is what shouldn't add up: Microsoft is literally adding GPT-4, for free, to the Windows 11 taskbar. Can you imagine how much that costs when you look at the GPT-4 API, or ChatGPT's subscription price? Either Microsoft is burning money, or OpenAI agreed to burn money with them. But why would they do that, when that would compromise $20/mo. subscription sales?
Something doesn't financially add up there.
Personally I welcome this shake up. Some of the things I’ve seen Altman write about are troubling.
Unless there was evidence that you had not merely underestimated but were, e.g., getting a kickback on the cloud costs that you deliberately lowballed in your estimates, they might fire you, but they almost certainly wouldn't put out a press release about it being for your failure to be candid.
That language indicates that the board has a strong belief that there was a major lie to the board or an ongoing pattern of systematic misrepresentation, or a combination.
But also, a human company operating under the human legal arrangements it's built upon were never going to stand the advent of artificial superintelligence. It would tear apart whatever it needs to, to achieve whatever its initial goals are. The best intentions by Altman and Brockman would be easily outmaneuvered.
Response:
Sam Altman, the CEO of OpenAI, has been a controversial figure in the AI industry. His leadership style, lack of transparency, and decision-making processes have raised significant concerns among OpenAI's employees and the public. This essay will delve into these issues, arguing that Altman's actions warrant his removal from his position.
Firstly, Altman's lack of transparency is a major concern. He has been known to make decisions without adequately consulting with his team or the public. This has led to a lack of trust and dissatisfaction among OpenAI's employees. For instance, when Altman announced that OpenAI would be focusing on a single project, he did not provide sufficient reasoning or context. This lack of communication has left employees feeling disenfranchised and uninformed.
Secondly, Altman's decision-making processes are often questionable. His decisions have not always been in the best interest of OpenAI or its employees. For example, when OpenAI decided to pivot from developing AI systems to developing AI safety research, many employees felt that this was a strategic mistake. Altman's decision to focus on this area without considering the potential negative impacts on the company's reputation and financial stability was a clear example of poor decision-making.
Thirdly, Altman's leadership style has been described as autocratic. He has been known to make decisions without considering the input of his team. This has led to a lack of buy-in from employees and has negatively impacted morale. For instance, when Altman decided to shift OpenAI's focus to AI safety research, many employees felt that their ideas and contributions were being overlooked.
Finally, Altman's actions have also raised concerns about his commitment to AI safety. His decision to focus on AI safety research, rather than on developing AI systems, has raised questions about his commitment to the field. This decision has also raised concerns about the potential misuse of AI technology and has led to a loss of trust among the public.
In conclusion, Sam Altman's lack of transparency, questionable decision-making, autocratic leadership style, and concerns about his commitment to AI safety are all reasons why he should be removed from his position at OpenAI. It is clear that his actions have led to a lack of trust and dissatisfaction among OpenAI's employees and the public. It is crucial that OpenAI takes these concerns seriously and makes changes to ensure the success and safety of its AI technology.
Apple was going bankrupt.
Jobs hated the only products that made any money (Apple 1 and 2).
Are they?
They are certainly making a large presence of themselves, but last I heard they were also burning capital to keep everything running. I have no idea if that is true or not, or what their current situation is... but if they truly are in the boat of "losing money on every transaction, but making up for it with scale", that is not "thriving", it is speeding towards a brick wall.
Dude, where have you been for the past decade?
> Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.
And hence the colossal blowback caused by that means it ain't ever happening again. Just 2 months ago a tech CEO was forced to resign immediately for egregious conduct, losing 100+ million in the process: https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-loses...
As I interview for new roles, it's a timely lesson, suggesting how to test what a new employer is -really- like.
I'm trying to read the tea leaves and there seem to be quite a few reminders that OpenAI is a non-profit, it's supposed to further the goals of all humanity (despite its great financial success), it's controlled by a board that largely doesn't have a financial interest in the company, etc etc.
Maybe Altman has been straying a bit far from those supposed ideals, and has been trying to use OpenAI to enrich himself personally in a way that would look bad should it be revealed (hence this messaging to get in front of it).
Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more
Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]
Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]
Nov 15 - OpenAI announce no new ChatGPT plus upgrades [1] but still allow regular signups (and still do)
Nov 17 - OpenAI fire Altman
Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.
This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]
Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.
Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.
edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.
[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...
[1] https://twitter.com/sama/status/1724626002595471740
[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...
[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.
In the words of Brandt, "well, Dude, we just don't know."
You can make the claim about Apple due to the financials being public - you can't make the same claim about OpenAI unless you have insight the rest of the public doesn't have. "facts are facts"?? what facts do you have here?
>Also, you probably don't know this, but the Apple 1 and 2 were designed by Wozniak, not Jobs; Jobs hated them
I'd be shocked if a significant portion of the hacker news audience wasn't aware of who Woz is and the basic high level history of Apple.
I found a free trial and $10/month or $100/year after that. I've asked them to consider a free tier for hobbyists that cannot justify the expense, but I'm not holding my breath.
If there is a free tier I did not find, please point me to it!
Jobs didn't hate the Apple I and Apple II. He wouldn't have partnered with Wozniak in the first place if he'd hated the Apple I.
Jobs was the guy who got Apple enough capital from VCs to actually ship the Apple II in mass quantities. That's not something Steve Jobs would do for a computer he hated.
And the Apple IIc was his idea!
> Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
That word “directly” seems to be relevant here.
https://x.com/ericschmidt/status/1725625144519909648?s=20
Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.
Making such a statement before knowing what happened (or maybe he does know what happened) makes this seem like it might not be as bad as we think?
I'd tend to agree, but "deliberative process" doesn't really fit with this. Sounds like it might have been building for ~weeks or more?
Something to happen immediately would require overwhelming evidence on hand in the meeting. So it could be something that has been uncovered as part of the due diligence with the MS investment
It's more likely to be fabrication of numbers, or misappropriation of funds, rather than something "dramatic". Think Musk at PayPal being monumentally incompetent, rather than planned misdeeds.
Maybe Sam had been trying to broker a sale of the company without consulting the board first? All speculation until more details are revealed but he must've done something of similar magnitude.
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
This prompted me to actually read up on the charter: https://openai.com/charter
AFAICT she's notable because she's been an academic and executive in the field for many years, in many different companies.
With such an insistence on the fact that OpenAI is supposed to be non-profit and open for all of humanity, it's pretty clear that the board doesn't like the direction that the company has taken, both in its search of profit and its political lobbying to restrict innovation.
Constantly calling out "cis men" is in fact transphobic, which is how you can tell they don't care about it. If you think cis men and trans men behave differently or are always treated differently, this means you don't think they're both men.
Also sama is not white. Although he does appear to have gotten a series of jobs with not a lot of experience by convincing Paul Graham to figuratively adopt him.
This hits a spot. I had a really nice boss.. Until we got into a conflict, then she tried to blackmail me, pressure me and break me. I learned why some people who left our company needed months to get back on their feet. I got out quite well and managed to push back, but it was a tough period.
> OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.
At least one of them must jointly make this decision with the three outside board members. I’d say it’s more likely to be business related. (In addition, the CTO is appointed as the interim CEO.) (Edit: But obviously we currently don’t really know. I think the whistleblower theory below is possible too.)
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. …..
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“
Previous members: https://openai.com/our-structure
“Our board OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”
For another example, imagine if OpenAI had never been a non-profit, and look at the board yesterday. You'd have had Ilya reporting to Sam (as employees), while Sam reports to Ilya (with Ilya as one member of the board, and probably a major stakeholder).
Now, when it gets hostile, those loops might get pretty weird. When things get hostile, you maybe modify reporting structures so the loops go away, so that people can maintain sane boundaries and still get work done (or gracefully exit, who knows).
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
will have more to say about what’s next later.
Piping all data submitted to OpenAI straight to his buddy's Palantir would definitely not support the mission to "benefit all of humanity".
If you need evidence that this is sufficient for dismissal, merely stating that impropriety exists is apparently enough to get my first flag on hn after 12 years.
Twitter also has one, although that's hardly a functioning example.
Whether they ultimately wanted to profit from it or not, there is $trillions of value in AI that can only be unlocked if you trust your AI provider to secure the data you transmit to it. Every conversation I’ve had about OpenAI has revolved around this question of fundamental trust.
Tempting, but you can launch Mistral-7B on your laptop and see the same coherent responses and reasoning from an 8GB model file. The magic is really there.
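For anyone who hasn't tried it, here's a minimal sketch of what running Mistral-7B locally can look like, assuming the llama-cpp-python bindings and a quantized GGUF file you've already downloaded (the filename below is a placeholder, not an official artifact):

    # Minimal local-inference sketch: load a quantized Mistral-7B file and
    # generate a short completion entirely on a laptop, with no API calls.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Explain, in two sentences, why a board might fire a CEO for not being candid.",
        max_tokens=128,
    )
    print(out["choices"][0]["text"])

Quantization is what gets a 7B model's weights down into the single-digit-GB range that fits in ordinary laptop RAM.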
>i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
>will have more to say about what’s next later.
Of course, we have no clue if that's what actually happened. Any conclusions made at this point are complete speculation, and we can't make any conclusions more specific than "this is probably bad news."
This works in two directions, by the way. In 2001 few would have expected that Bill Gates would spend much of his time on philanthropy. Is he a "good" or "bad" person? Well, he's both.
With the exception of the brothers in the mafia or brother terrorists, none of your examples would meet that standard.
Being close with your family does not mean you’re not a good person elsewhere. It does not mean you don’t care about strangers. That you’d jump to that conclusion or bring up terrorists as a counter example makes me question your own personal experiences.
All else being equal, I’d expect someone with close family bonds to the point of living with them as an adult, when they clearly have the economic means to do otherwise, as a sign of a good human. That’s been my personal experience and that’s how I see the world.
I don't think it changes anything.
No, this is obviously the key:
"review process by the board, which concluded that he was not consistently candid in his communications with the board"
This is an explicit statement that he was lying to the board about something. It cannot be worded more clearly unless switching to use the word "lying".
There is no way that sama is the only person in this set of people to have unique information on critical privacy incidents or financials or costs of server operations, because these issues don't originate with him.
If some version of this turned out to be true, I would be seriously confused about ground truth transparency in the company and how the fuck they set the whole thing up, that this was an option. But again, this is why I'd say: Implausible.
Hence, they trust her to take on the interim role.
Again, all speculative.
Also, they did it around 3:30 Eastern, 30 minutes before the closing bell (Microsoft is a 49% owner). It was so urgent they couldn't wait until after the market closed.
I ask that question leaving out any of the socio-economic and privacy concerns around that project.
We've updated the billing system for your OpenAI API account. Instead of receiving a bill at the end of the month, you'll now need to pre-purchase credits to use the API. You can add credits to your account by visiting the billing page. To learn more about prepaid billing, please see this help center article prepaid billing.
No action is required from you at this time.
Please note that this change only applies to your OpenAI API account and Playground. It does not affect ChatGPT Plus subscriptions.
Best, The OpenAI team
RIP Sam. Cut down too early; not given the chance to become the next crazy CEO tech baron.
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."
So, if I were to speculate, it was because they were at odds over profit/non-profit nature of the future of OpenAI.
Its investigation of misconduct?
Sources and rights to training data?
That the AGI escaped containment?
Sexual abuse by Sam when she was four years old and he 13.
Develops PCOS (which has seen some association with child abuse) and childhood OCD and depression. Thrown out. Begins working as a sex worker for survival. It's a really grim story.
Other than the part about having enough money to build a bunker in New Zealand, I'd say "yes".
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
> In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.
Why would they include that? Maybe its just filler, but if not then it is possible that there has been more than a simple disagreement about long-term objectives. Possibly something going on that the board feels would get them shut down hard by state-level players?
The only other business thing I could think of is him moving OpenAI's money to one of his other ventures, or for some other personal gain.
I don't think spending too much money at OpenAI could get him fired like this in the current environment. Neither would overpromising stuff or something.
I'm imagining the kind of person who starts their career as an executive at a spinoff of SU.
This may be the problem: at some level OpenAI is still a non-profit, and the more capital they accept, the more they're obligated to produce profits for investors?
Perhaps Sam was gleefully burning cash with the intention of forcing the Board to approve additional funding rounds that they had explicitly forbidden, and when they discovered that this was going on they were apoplectic?
"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".
https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s
This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.
Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and the governance structure of OpenAI certainly wouldn't be ready for it either it seems.
"review process by the board, which concluded that he was not consistently candid in his communications with the board"
OK, so they tell us he was lying, which is precisely what "not consistently candid in his communications" means.
Possible topics for lying:
* copyright issues to do with ingestion of training data
* some sort of technical failure of the OpenAI systems
* financial impropriety
* some sort of human resources issue - affair with employee
* other - some sort of political power play? Word from Satya Nadella - "get rid of him"?
Possibly the reason is something that the board members felt exposed them personally to some sort of legal liability, thus if they did not act then they would have to pay a legal price later.
It has to be pretty serious to not make it public.
Yes it is arguable. OpenAI is nothing more than a really large piece of RAM and storage around a traditional model that was allowed to ingest the Internet and barfs pieces back up in prose making it sound like it came up with the content.
[0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-its-...
However big his transgressions may be, their actual impact is finite, while the speculation can be infinite.
I think the problem there is that the original CTO is now the interim CEO and they are on the board. So while that kind of scenario could make sense, it's a little hard to picture how the CTO would not know about something like that, and if they did you'd presumably not make them CEO afterward.
I've never met the man, but I can say I have not been impressed by his words and attitude in public. I never got the sense or feeling that he's actually doing right by the world.
Ultimately it doesn't matter if he's a good or bad person; what matters is what he's done.
Normal people suck and are generally dumb as a brick (including me). Normal people don't extrapolate calamities and don't think ten steps ahead.
AI doesn't "learn"; it depends on data. The more, the better. This guy wanted to get as much as possible to make their chatbot appear more intelligent, at all costs.
I have the strong suspicion we will see a bunch of revelations soon, some covering what I stated above.
I think OpenAI has made some really bad decisions with the core tech even while making great decisions with the overall services, and from Altman's various comments over the past two years I was under the impression this was coming from him.
The only reason I'm skeptical of this is the abruptness of it all. Why it needed to happen with a 24h turnaround is bizarre, unless there was something like an internal meeting last week regarding GPT-5 where his doomerism was even more sending things off the rails and there was a reactionary immediate "we no longer have faith you can do this."
He's also said very recently that to get to AGI "we need another breakthrough" (source https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... )
To predicate a company so massive as OpenAI on a premise that you know to not be true seems like a big enough lie.
- Board is mostly independent and those independent dont have equity
- They talk about not being candid - this is legalese for “lying”
The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on a decision) that is misaligned with the Charter. That's the only fireable offense that warrants this language.
My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.
Perhaps there was some, how to say it, unexpected technology transfer, with Sam somehow involved.
(It reminds me of comparing AI to crypto because both have hype behind them.)
I won't lie though, it's the first thing that popped into my mind when I heard the news.
Though I can't say that the training data wasn't obtained by nefarious means...
https://twitter.com/phuckfilosophy/status/163570439893983232...
EDIT:
episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ,
"somebody has to own the residual value of the company, sam controls the non profit, and so the non profit after all equity gets paid out at lower valuations, owns the whole company. Sam altman controls all of open ai if its a trillion dollar valuation. Which if true would be a huge scandal"
Of course, we don't really know yet.
You misunderstand how these corporate situations work. He will fall upward to a better job someplace else if he chooses.
Adam Neumann, who started then destroyed WeWork, already raised $350 million from Andreessen Horowitz for another real estate company called Flow.
To be clear, this is only one possible explanation for Altman's firing. And for my money, I don't even think it's the most likely explanation. But right now, those who rely on OpenAI products should prepare for the worst, and this is one of the most existentially threatening possibilities.
That you did not know that does not give me confidence in the rest of your argument. Please do your research. There's a LOT of hype to see beyond.
Fundamental disagreement about OpenAI's safety approach
Negligence in Addressing AI Safety Concerns
Sexual misconduct
Conflict of Interest with Other Ventures
Defrauded OpenAI
Cash Flow problems
The other board member, Helen Toner, lists on her Twitter profile: "Interests: China+ML, natsec+tech..." and works for another defense think tank.
If there's one way the CEO of fastest growing company in the world could get fired, it's to essentially get his metaphoric security clearance pulled like Oppenheimer did.
Does anyone know what that’s about?
> initially funded by Altman, Greg Brockman, Elon Musk, Jessica Livingston, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research. When OpenAI launched in 2015, it had raised $1 billion. (Wikipedia)
Altman has been at OpenAI since the beginning, and since the beginning OpenAI is heavily premised on AGI/superintelligence.
I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch, which never would have worked without his extreme efforts. If I were on the OpenAI board, the bar for firing Sam would be so ridiculously high that he would have to have done something so outrageous, so illegal, etc., that I struggle to believe what he actually did could even remotely approach that standard.
So either sama is hacking "into her wifi" (?), hacking into her accounts, and pulling strings at unrelated companies to get her shadowbanned from Facebook, Instagram, YouTube etc (is that even a thing?)... or Occam's Razor applies and he didn't.
Founding a company is also fairly easy (if you're in the US). In most US states, you just need to complete some paperwork, pay administrative fees, and you're good to go.
Founding something isn't tough. Leading through adversity and setbacks is.
Finally, if we're praising workers, what about those who came and went between 2015 and today? That probably pushes the number higher than 400 FTEs.
Ilya siding with the board and the board having no financial interests leads me to think that Sam was hiding things/making decisions that kept leading the org away from its initial mission.
We'll probably learn in the future what really happened.
Desperate times calls for desperate measures. This is a swift way for OpenAI to shield the business from something which is a PR disaster, probably something which would make Sam persona non grata in any business context.
Sam told the board the AI was dumber than it was. Sam told the board the AI is smarter than it was.
I don't know which one is worse.
I just hope it wasn't something silly like sleeping with a female intern or an "accusation of s/a or grape". AI growth is too important to mess up because of trivialities like these.
I saw a comment (that I can’t find now) wondering if Sam might have been fired for copyright reasons. Pretty much all the big corpuses that are used in LLM training contain copyrighted material, but that’s not a surprise and I really don’t think they’d kick him out over that. But what if he had a team of people deliberately adding a ton of copyrighted material - books, movies, etc - to the training data for ChatGPT? It feels like it might fit the shape of the situation.
She went viral on Twitter a few months ago for saying that Sam molested her for years as the two of them were growing up. There's been no proof or corroboration offered that I'm aware of.
It's obviously a difficult situation that I think most people here generally have avoided commenting on since there's no meaningful input we could give.
Regardless of what, the longer OpenAI waits to explain, the more it could damage corporate and developer trust in using its AI.
... and he was 13. Which, yes, is a very bad thing, but unless the company investigated that claim (e.g., to assess potential PR fallout) and there was some significant deception by Altman against the board in the context of that investigation, it's not something that would get him fired with the explanation OpenAI has provided.
(OTOH, the accusation and its potential PR impact could be a factor that weighed into how the board handled an unrelated problem with Altman—it certainly isn't helpful to him.)
Well, he did get a few billion dollars of lesson on how to not run such a company, making him quite uniquely qualified for this position.
If this was about differing visions on the core product, it may have instead related to the open/closed aspect of progressing.
Sam may have been the driving force behind keeping things closed in the name of safety, and others at OpenAI might have been ready to rebel if it continued that way in the future and prevented general advancement in the field.
Scientists don't like closed research.
The safety alignment part is unlikely to be the core issue even if there are underlying issues with it.
Anyway, I suppose we're reading tea leaves and engaging in palace intrigue. Back to building.
The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”
2) generally, even when a board fires a CEO, they rarely "call them out" or say what they did wrong. they must have some extremely strong evidence against him.
i think it could be any of the following:
- something personal, i.e. the controversy w/ his sister
- a financial issue: chatgpt stopped signups a couple of days ago
- a safetyist coup: maybe the board thought he was moving too fast
- a microsoft coup: microsoft used its power to knock out what they perceived to be their biggest threat
Wow, that university rings some bells https://en.wikipedia.org/wiki/Singularity_Group#Controversie...
"An investigative report from Bloomberg Businessweek found many issues with the organization, including an alleged sexual harassment of a student by a teacher, theft and aiding of theft by an executive, and allegations of gender and disability discrimination.[12] Several early members of Singularity University were convicted of crimes, including Bruce Klein, who was convicted in 2012 of running a credit fraud operation in Alabama, and Naveen Jain, who was convicted of insider trading in 2003.[12]
In February 2021, during the COVID-19 pandemic, MIT Technology Review reported that a group owned by Singularity, called Abundance 360, had held a "mostly maskless" event in Santa Monica ... The event, led by Singularity co-founder Peter Diamandis, charged up to $30,000 for tickets."
My feeling is that he's a phenomenal entrepreneur/CEO, but he seems to completely go against the original mission. And the board has no financial interest in openai, but they do have to follow the premise on which the company was created (which they referenced).
"Sam Altman was actually typing out all the chatgpt responses himself and the board just found out"
I'm not saying this happened or it didn't. But just that it could absolutely be more than enough to fire anyone.
All kinds of reasons.
The biggest risk for OpenAI is the public perception that the discretion of ChatGPT can not be trusted. If the CEO is caught using poor discretion, the public will transfer that property to the company's products.
For instance, if Tesla could fire Elon Musk, I'm sure they would have by now.
All these other conspiracies are ridiculous and do not at all reflect much simpler, economics-driven realities that the board's backers - investors - are interested in.
It's likely that Altman and Brockman wanted to take an economically positive offer now, say a complete buyout from Microsoft, and the rest of the board wanted to do an additional fundraising round that would be far less cash but a far higher valuation. Now that the private fundraising is probably signed, those guys are out.
Sam's first tweet after all this came just hours after this article:
> will have more to say about what’s next later.
So either he knew that he was about to be fired or at least was prepared.
Also, based on the wording of the press release, Sam did something that the board absolutely hated, because most of the time, even if he did something illegal, it doesn't make sense to risk defamation by accusing him publicly.
Also, based on his video of yesterday at the APEC summit, he repeated similar lines a few times:
> I am super excited. I can't imagine anything more exciting to work on.
So here if we assume he knew he was about to get fired, the conclusion is clear.
I just went to Google Bard and said:
“What is going on with the CEO of OpenAI?”
Google Bard replied:
“On November 17, 2023, the board of directors of OpenAI, Inc. announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company's chief technology officer, will serve as interim CEO, effective immediately.
Altman has been CEO of OpenAI since 2015, and he has played a key role in shaping the company's mission to ensure that artificial general intelligence benefits all of humanity. He is widely respected in the field of artificial intelligence, and he is known for his commitment to open research and collaboration.
Altman's departure comes at a time when OpenAI is facing a number of challenges, including the increasing competition from other AI companies, the growing public concern about the potential dangers of AI, and the need to find sustainable funding sources. Despite these challenges, Altman is confident that OpenAI will continue to be a leader in the field of AI.
"I am incredibly proud of what we have accomplished at OpenAI, and I am confident that the company will continue to do great things," Altman said in a statement. "I am grateful to the board for giving me the opportunity to lead OpenAI, and I am excited to see what the company will achieve in the years to come."
The board of directors is currently conducting a search for Altman's successor, and it is expected to name a new CEO in the coming months. In the meantime, Murati will focus on leading OpenAI through its current challenges and ensuring that the company continues to make progress on its mission.
Murati is a highly respected technologist with a deep understanding of artificial intelligence. She has been with OpenAI since its inception, and she has played a key role in developing the company's technology. She is also a strong advocate for open research and collaboration, and she is committed to ensuring that OpenAI's technology is used for the benefit of all humanity.
The board of directors is confident that Murati will be able to lead OpenAI through its current challenges and ensure that the company continues to make progress on its mission. "Mira is a talented and experienced leader who is deeply committed to OpenAI's mission," said Greg Brockman, the chairman and president of OpenAI. "I am confident that she will be able to lead OpenAI through this transition and into a bright future."”
Isn’t this fascinating? A major event happens regarding ChatGPT-related issues and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours…
Meanwhile ChatGPT still thinks it’s 2021 heh
Given what proof they had on the table, Greg Brockman, Ilya Sutskever, and independents such as Adam D’Angelo, Tasha McCauley, and Helen Toner could drive 3+ votes against Sam Altman.
Rarely do we see a board in action. And we saw this one today.
Which is exactly why you need something like OpenAI to further your personal projects.
Those who don't would be qualified to be on the board.
Thinking you can keep it "locked up" would be beyond naive.
In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.
“We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”

These past few months his name has made its way into the mainstream. Maybe it's time for him (and half the GPT eng team) to cash in?
No lol: https://www.foxnews.com/media/elon-musk-hints-at-lawsuit-aga...
I wouldn't be surprised if the leadership direction of sam is related to the ousting.
I don't disagree that the accusation alone (especially if it stood up to modest scrutiny and looked to be an ongoing PR issue, even if not well substantiated enough to have confidence that it was likely to be true) might be sufficient for firing; CEOs are the public and internal face of the firm, and so PR or employee safety concerns that attach to them are important to the firm. But it wouldn't be for lack of candor with the board unless there was something for which the board had a very strong reason to believe Altman was dishonest in a significant way.
They could easily fire him with the lack of confidence language without the lack of candor language.
Pretty much nothing changed positively or significantly after the Snowden revelations, the Panama Papers, etc.
"Smart" does not mean "hard to fool;" they are different characteristics.
You can fool someone if you have important information that they don't have--even if they are extremely smart.
Maybe Sam lied about his personal life to the board, and now it's impacting business?
More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp
Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer, it's not his job to make the breakthrough, only to keep the lights on until they do if you take their mission literally.
Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?
As I said, I hope you're right, because the alternative is a lot scarier.
Didn't we just have a topic here on HN how not disclosing the breach within 4 days is a securities fraud? Since Nov 9 there has been more than 4 days, so either there was no (material) breach, or Microsoft committed securities fraud and somehow expects to get away with it.
I'll argue in this day and age, that any founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is, will almost always get another shot. If SBF or Elizabeth Holmes weren't physically in prison, I bet they'd be able to get investment for whatever their next idea is.
I was laid off from OpenAI today along with my boss Sam.
I was the person in charge of putting together the presentations for our board meetings.
No one has told me why I was let go but Sam texted me “wtf” and next thing I know my Slack and Gmail were disabled.
I’m now looking for a new role, so if you’re hiring for investor relations, my DMs are open!
It says he lied, explicitly, just with slightly nicer words. Whether he did or not, that is the definitive reason the board is giving.
https://finance.yahoo.com/news/softbank-takes-14b-hit-wework...
Adam is good at making people rich, but those people are not his investors.
Neumann and Holmes and SBF lost their benefactors money.
Maybe the closed GPT-4 details were promised by him to be a one time temporary thing at the time and then he has been continuing to stonewall releasing details later on?
He may not be the villain.
But who knows, it feels like an episode of silicon valley!
The man was starting to seem like a huge con and people just seem to not see through that.
The only thing that comes to mind is criminal conduct. Nothing else seems to demand a sudden firing. OpenAI has clearly been the rocket ship startup - a revolutionary tool and product clearly driving the next decade+ of innovation. What else would demand a fast firing of the popular, articulate, and photogenic CEO but a terrible problem?
The OpenAI board has no responsibility to consider Microsoft's wants. I'd accept the argument that, their decision to not wait until after 4pm was a slight against Microsoft, for the reason you outline; but I'm not sure if urgency plays into it.
OpenAI fakes their AI to be first to market and then everyone else, thinking that it's possible, finds ways to do it for real.
(this is 100% a joke. I know OpenAI's work is the real deal)
It could be about the cost of operating the business (consistent with the announcement to cut Plus subscriptions, although that wouldn't justify how fast he was fired), or a previous action of his posing a legal risk (something he promised Microsoft or Elon Musk); the latter is consistent with Greg being demoted: he knew and didn't tell the board.
https://twitter.com/phuckfilosophy/status/163570439893983232...
And for my two cents, he always seemed like a disingenuous hype salesman more than a technical person.
He's an Elon Musk or a Lex Friedman.
The claim is that investors are interested in executives who they perceive to have created billions in value, and that's analogous to how NFL teams are interested in people who run fast.
I'd say the opposite; given the way CEOs usually part with firms even after misconduct investigations, it needs to be very serious for the “not consistently candid with the board” to be made public (it needs to be mildly serious for it not to be hidden under a veil of “resigned to spend more time with his family/pursue other interests/pet his llama” but instead openly be a dismissal where the board “no longer has confidence in his ability to continue leading”).
https://www.youtube.com/watch?v=29MPk85tMhc
>That guy definitely fucks that robot, right?
That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.
https://en.wikipedia.org/wiki/Ben_Goertzel
https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...
>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:
VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:
I had a feeling the man was a bit of a con, though of course I won't say I know for sure. Some of his actions give me pause: his notorious eye-scanning crypto project, or the fact that he was 100% in support of UBI and wanted to advocate for it, only to go to different governments asking for regulations (that only benefitted them).
People really, really need to pay attention to people's actions, not their words, jeezus. We'll have another rogue Elon Musk, who was once idol-worshipped as an incredibly "brilliant" man... it turned out he does some stupid things too, only now he has amassed billions of dollars and can pay his way out of them.
People never learn. Stop idolizing businessmen.
- New feature/product/etc. launch is planned.
- Murati warns Altman that it's not ready yet and there are still security and privacy issues that need to be worked out.
- Altman ignores her warnings, launches anyway.
- Murati blows the whistle on him to the board, tells them that he ordered the launch over her objections.
- Data breach happens. Altman attempts to cover it up. Murati blows the whistle again.
- Board fires Altman, gives Murati the job as it's clear from her whistleblowing that she has the ethics for it at least.
Again, completely hypothetical scenario, but it's one possible explanation for how this could happen.
Every company has the board it deserves
Unless, by “security”, you mean OpenAI was used for military purposes, in which case: 100% Schmidt knew and supported it, and Sam might be proud of it.
But Ilya and Mira would have known about it too… Guess they did, told the board and things blew up fast.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
That said, Sam could have committed to an acquisition without the board's approval or something insane like that.
Yes: suggesting he was not as candid as necessary is business libel unless true.
And since Brockman was also booted, he may have been involved.
It's not clear what the Board was trying to do that he interfered with. There is no clear legal standard on what a CEO must divulge, and CEO's often get to wait to tell board members bad news until the whole board meets and the issue has been investigated.
sama, care to address it here in what would theoretically be a safe place?
OpenAI and ChatGPT are great and get a lot of mind-share. But they are far from the only game in town and, at this still very-early stage of the tech cycle, the outwardly-visible leader can easily change in months.
I would think it is some kind of asset transfer, maybe the model, maybe the data, to a party that is not disclosed to the board.
Other reasons, like you listed above, warrant an investigation, and the board might have an incentive to bury it.
NFL teams are interested in players that can actually run fast, not players that can say they do, but are found to be lying and it turns out they cannot run fast causing the team to lose.
I don't know, so much wild speculation all over the place, it's all just very interesting.
This isn't a new policy and has been the case for at least a year.
As I said, while I do have a mostly positive opinion of Sam Altman (I disagree with him on certain things, but I trust him a lot more than the vast majority of tech CEOs and politicians, and I'd rather he be in the room when true superhuman intelligence is created than them), I hope this has nothing to do with AGI and it's "just" a personal scandal.
If you actually look at the totality of her claims, there are probably more reasons to be skeptical than to accept her words unconditionally. About the only thing you can say for sure is that something really unfortunate is either happening to her now, or has happened in her past, for which she is unlikely to bear all the responsibility.
1) That LLMs cannot generalize outside of _patterns_ they pick up during training? (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)
2) That every time you train a new model, with potentially very high expense, you have no idea what you're going to get. Generally better but also potentially bigger reliability challenges. LLMs are fundamentally unreliable and not stable in any kind of use case besides chat apps, especially when they keep tweaking and updating the model and deprecating old ones. No one can build on shifting sands.
3) That GPT-4 Turbo regressed on code generation performance, and that the 128K window is only usable up to 16K (for me, in use cases more complicated than Q&A over docs, I found 1.2K to be the max usable window; that's 100x less than advertised). A rough sketch of how one might measure the usable window is below.
4) That he priced GPT4-V at a massive loss to crush the competition
5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers and forcing a halt to sign-ups, even with a $29B investment riding on the growth of the user base.
Any one of the above, or none of the above.
No one knows... but the board... and Microsoft, who has 49% control of the board.
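To make the context-window point concrete: below is a rough, hypothetical sketch (TypeScript, using the openai npm package) of the kind of needle-in-a-haystack probe people use to estimate a usable window. The model id, the token-count heuristic, and the helper itself are my assumptions for illustration, not anything OpenAI documents for this purpose.

    import OpenAI from "openai";

    const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

    // Hide a known fact ("needle") inside roughly `totalTokens` of filler and ask for it back.
    // Where retrieval starts failing is a crude estimate of the practical usable window.
    async function needleProbe(totalTokens: number): Promise<boolean> {
      const sentence = "The quick brown fox jumps over the lazy dog. "; // ~10 tokens
      const half = sentence.repeat(Math.max(1, Math.floor(totalTokens / 20)));
      const doc = half + "The secret code is 7381. " + half;

      const completion = await client.chat.completions.create({
        model: "gpt-4-1106-preview", // assumed model id; substitute whatever you are testing
        messages: [
          { role: "system", content: "Answer using only the provided document." },
          { role: "user", content: doc + "\n\nWhat is the secret code?" },
        ],
      });

      return completion.choices[0].message.content?.includes("7381") ?? false;
    }

    // Probe a few sizes and see where retrieval breaks down.
    for (const size of [1_000, 4_000, 16_000, 64_000, 120_000]) {
      needleProbe(size).then((ok) => console.log(size, ok ? "found" : "missed"));
    }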
In any case, I don't think not responding is an admission of guilt. Assume he was totally innocent. What good does him responding to it do? Would make hundreds of thousands of people aware of it, instead of just maybe a few dozen thousand. An accusation like that is hard to shake off even if you're totally innocent.
That said, I hope if it is true, there's some sense of justice.
Who knows, though -- I'm sure we'll find out more in the next few weeks, but it's fun to guess.
> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.
So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.
This is an egregiously generous way of framing Travis Kalanick, and what happened to him.
Given the wording and the apparent stepping down of the board chair, this is very likely a conflict between the grow at all costs startup folks and the AGI true believers. Sam was likely going to fully sell out and make himself filthy rich. The AGI cultists wanted to retain control over their Frankenstein’s monster.
They need to be so much more than a partner.
Being open is not in their nature.
Sadly it is usually the demise of innovation when they get their hook in.
“You need someone to steer the ship in the right direction.”
I think most people can handle the “right direction” part, so it really comes down to just needing _a person_, one person, who makes strategic decisions at a high level. And that’s it. I don’t think Sam is special here. I just think he was the guy in the spot.
Scariest thing: this was over some kind of AI safety decision. OpenAI has some new capability and there was disagreement over how to handle it.
Not that I think it has anything to do with that; I think it more likely has to do with some kind of money issue tied to the LLC, given reports of others impacted, on and off the board.
This is true in the sense that being a CEO is also easy, you just fill out some paperwork that says you are CEO.
Are you saying the founders of OpenAI just filled out some paperwork and did nothing more?
I doubt anything can damage the almost religious belief in chatgpt today. The inertia is huge.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
(1) Unless there is public litigation involved, OpenAI will not disclose the reason in substantial detail.
(2) It will not, more than momentarily, disrupt the whole AI market if they do not.
(If it is something that will disrupt the whole AI market, there is likely to be litigation and public information about the basis of the firing.)
Investors are interested in people they can use to make money. The latter are easier to use, but the former will suffice. It just depends on when you sell.
Others are "think" and "conscious".
Maybe it’s time for us to dial up the cynicism a few more notches.
Ilya and Mira probably didn’t know and might have discovered it recently and raised it with the board, hence the promotion. Note: both have Eastern-European ties, so if the application is in Ukraine, they could have strong feelings about it. Greg co-signed or knew and didn’t tell.
Alternatively, Elon found about it through StarLink and wanted to get revenge.
Alternatively, it could be a case where there was an option, Sam refused on ethical grounds without telling the board; they thought what is happening in Ukraine (or Gaza—Sam’s bio is a single Star of David, uncharacteristically short) justified the intervention. They confronted him, and he lied — or couldn’t tell because he was under a sworn statement to defend military secrets.
Either way, it explains the secrecy.
>In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission
>OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity
This suggests to me that he was trying to move them away from the mission of creating AGI and instead focus more on profits.
We don't have to take everything potential victims say as fact and immediately act on it, but remaining open to the idea that those in power may be at fault, and not disregarding victims' accusations, goes a long way in supporting them.
Turn that on its head - was he standing in the way of a commercial sale or agreement with Microsoft?
I don't really see anything[1] that suggests that this sentence is true. Now, I'm not saying that he hasn't been successful, but there's "successful" and then there's your hyperbole.
I take "fall upward" to be a typo of "fail upward".
The next sentence explicitly compares the situation to WeWork.
My interpretation is correct, it's a bizarre post, I'm done with this thread, have a good day.
The Auth/DDoS event adds a bit of weight to OP's original theory. It's not a justification on its own.
A lot of other theories don't match because the board was in an extreme hurry to get him out (voting him out in the middle of the night, not even waiting for markets to close to announce it). They must have proof of serious misconduct and a need to immediately distance themselves from Sam (otherwise the language would not be as bridge-burning as it is). A major financial or technical failure seems unlikely, since the CFO remains and the CTO was even promoted to interim CEO - they seem to be trusted, still, so it must have been something Sam did on his own.
Leaking/selling the full models matches: it would violate OpenAI's non-profit terms, it is something Sam could do without the C-suite being implicated, and (given the data contained in the training set) it might even mean legal hot water, justifying this drastic reaction and immediate distancing.
My money is on a MilTech contract. Sam got it approved in secrecy without telling the others; Greg had to know for signature and didn’t tell others. Neither could speak about it, hence the lack of candor, uncharacteristic from fairly straight-shooter Sam. Explains why Eric Schmidt knew to congratulate him immediately and why he’s not ashamed.
Alternatively, Google keeps showing a non-existent tweet of Sam announcing a presidential run in 2024. Maybe the board didn’t like that he didn’t tell them. Also explains why Eric would know more.
Maybe it's the off-the-books Weapons Division with all those factories in obscure eastern European countries. Or the secret lab with the AGI that almost escaped its containment. /s
Money or power. I guess someone will eventually talk, and then we'll know.
I think the business of running a scam or a fraudulent company is quite different to an actual business.
Eric Schmidt would likely know about it, and defend Sam. Sam would likely think the use is legitimate and not be ashamed of it (like he might with your suggestions).
I think your assumption is misinformed. I asked ChatGPT the same question, and it looked up the news online and delivered a sparser, but accurate reply.
The GPT4 knowledge cutoff was recently updated to April 2023, btw.
https://chat.openai.com/share/66e87457-834f-422f-9b16-40902b...
https://en.wikipedia.org/wiki/ChaCha_(search_engine)
Seriously though... I just remembered this was a thing and now I'm having crazy nostalgia.
I also believe he is innocent until proven guilty, and I completely agree with your last sentence.
I brought it up because I believe it is by far the most comprehensive explanation of this otherwise very odd situation.
* normally we wouldn't do that, but in threads that have a YC connection we moderate less, not more - see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
> There’s been a vibe change at openai and we risk losing some key ride or die openai employees.
(The allegations are public enough and concerning enough that it would have been corporate malpractice if MS didn't ask for an investigation. Discreet due diligence investigations into things like this happen all the time when billions of dollars in investment capital are on the table.)
Not that OpenAI will fall upward. Sam Altman is not OpenAI, especially after this latest announcement.
The next sentence compares him to the WeWork CEO.
It's not that OpenAI is like WeWork. It's that the disgraced CEO of OpenAI is like the disgraced CEO of WeWork.
Actually I normally would have detached it from the parent, especially because it's part of a top-heavy subthread, but I specifically didn't do that in this case because of the principle described here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.
They had an open ethos and then went quasi-closed for profit, and then a behemoth bet the family jewels on their products.
Harping on about the dangers of those products does not help the share price!
My money is on a power play at the top tables.
Embrace, extend, and exterminate.
Playbook!
You wouldn’t say what (gag order all around), but you would need to revoke all access immediately. Explains why foreign CxOs were left in the dark. Explains why Eric Schmidt knew to defend Sam immediately.
What's sketchy about it is that they were offering money to largely poor and technologically naive populations in exchange for gathering very sensitive biometric data from them. Identifying data that cannot be changed and can be used to track them without their knowledge. Their stated goal is to have every person on Earth take part, to boot. The whole idea couldn't set off more alarm bells if it tried.
I give them style points for using truly dystopian-looking "orbs" to do it, though.
I was commenting on reasons for dismissal generally and not trying to impugn this particular guy's character.
Swift dismissals are likely motivated more by transgressions than performance but that's where the facts stop for me
> Poe lets you ask questions, get instant answers, and have back-and-forth conversations with Al. Gives access to GPT-4, gpt-3.5-turbo, Claude from Anthropic, and a variety of other bots.
I'm not sure I would call Poe a rip-off at all. It sounds like a bundled ChatGPT product.
We are on HN after all, so I'm sure we won't need to wait until his book comes out... :)
BTW, I had a feeling he made an awkward appearance next to Satya.
And that laughter whenever the acquisition topic was hinted at was cringeworthy - would regulators even permit MSFT a full takeover? I think it would be highly controversial.
So I can't fathom her accusation having anything to do with anything.
They've made it clear that the issue has something to do with statements he has made to the board that ended up not being true. The question is of what those statements may be. Not about his potential childhood errors or his onlyfans "model" sister's claims.
So homosexuality isn't relevant here. But nor is what his sister claims.
EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at >>38312577
Even if it had full access, how would it improve its own code? That'd require months of re-training.
The entire final storyline is about an AI trying to take over -- if you haven't watched it, you should! But many of my friends who live and work in Silicon Valley can't stand watching it, because it strikes too close to home, not because it isn't funny.
I think it's much more likely that Elon Musk fucked a robot, after having mistaken it for a human being in a robot suit.
Have you actually used it?
It's the most basic possible website, API, and app for a language model AI. It's barely functional. For example, the iOS app couldn't even display images until the other day.
> There are a million little decisions that go into a runaway success like this.
I agree that GPT-4 is amazing and probably the best. But there are several other competing implementations of language model AIs right now, some are even developed as open source.
Now? Yes for Kenneth Lay (assuming he was still alive and/or not hiding on a desert island under a new identity if I put on my tin foil hat)... Madoff, probably not.
It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.
Scroll down on the page, OpenAI is listed as a model provider, with logo and everything.
Or do you mean some kind of more 'direct' deal with military?
Seems like it would be a great way to eventually maintain control over your own little empire while also obfuscating its structure and dodging some of the scrutiny that SV executives have attracted during the past decade. Originally meant as a magnanimous PR gesture, but will probably end up being taught as a particularly messy example of corporate governance in business schools.
It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after "moat" and "regulatory capture", which we know where it all leads to — a net [long-term] loss for the society.
[1] >>35960125
You bury bad news on Friday afternoon.
Nothing wrong with that, but the responses I get from ChatGPT are not in that dialect. (Who knows, maybe by paying for ChatGPT Plus I qualify for an upgrade to the Irish data center.)
It is not; it is a non-profit foundation. It can't pay profits to shareholders, and board members don't usually become billionaires.
These customers are angry, call Support and Accounting to demand refunds, and cancel their service. There's no upside to having unhappy customers.
Switching to pre-paid billing is probably just better for everyone.
And totally unrelated, what’s wrong with worldcoin? Isn’t that the UBI project? That seems to line up with the idea that ai will create huge amounts of wealth, and he wanted to share it.
Honest question: do execs or companies in general ever suffer consequences for data breaches? Seems like basically no one cares about this stuff.
With this apparent rush, I'd hazard the guess that the situation just happened to unfold on a Friday and wasn't planned as such.
Edit: Also, yes, it's hard to sweep things under the rug. We don't know the timeline of events, and we're seeing an instance where Altman failed to hide something.
https://learn.microsoft.com/en-us/legal/cognitive-services/o...
Folks like Schmidt, Levchin, Chesky, Conrad have twitter posts up that weirdly read like obituaries.
Unpopular, non-doomer opinion but I stand by it.
You have to understand that OpenAI was never going to be anything more than the profit limited generator of the change. It’s the lamb. Owning a stake in OpenAI isn’t important. Creating the change is.
Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI (and their ilk) is.
OpenAI can’t become a profit center while it disrupts all intellectual work and digitizes humanity's future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough.
I was in a Ph.D. program at a top CS school and there are ways to transition your visa when building a startup. It was that I was not sure if the transition or the startup would work out - that startup did not - but years later another one did.
I would probably not have taken the plunge out of academia and not achieved much else had it not been for him. And I am deeply grateful for that.
It's better to claim your stake in a forthright way, than to have some kind of lucrative side deal, off the books.
For a non-profit, there was too much secrecy about the company structure (the shift to being closed rather than Open), the source of training data, and the financial arrangements with Microsoft. And a few years ago a whole bunch of employees left to start a different company/non-profit, etc.
It feels like a ton of stuff was simmering below the surface.
(I should add that I have no idea why someone who was wealthy before OpenAI would want to do such a thing, but it's the only reason I can imagine for this abrupt firing. There are staggering amounts of money at play, so there's room for portions of it to be un-noticed.)
Board probably took a look at updated burn-rate projections, saw that they have 6 months of runway, saw that they don't have enough GPUs, saw that Llama and Mistral and whatever other open-source models are awesome and run on personal computers, and thought to themselves - why the hell are we spending so much God damn money? For $20 a month memberships? For bots to be able to auto-signup for accounts, not prepay, burn compute, and skip the bill?
Then Grok gets released on Twitter, and they are left wondering - what exactly is it that we do, that is so much better, that we are spending 100x of what cheapo Musk is?
If it was a personal scandal, the messaging around his dismissal would have been very, very different. The messaging they gave makes it clear that whatever dirty deed he did, he did it to OpenAI itself.
So since we are all speculating, could it be something like wanting to "download" the entire ChatGPT, pass it to some friends, and then start his own rival company where he has 100% equity? But then he got caught by the CTO?
[1] https://www.irs.gov/charities-non-profits/publications-for-e...
I assume more info will come out, but it sounds more like a major ethics breach than a business only decision or even a "contentious relationship with the board" decision...
Either a position in Microsoft or a new start-up.
Or both.
What does it mean for OpenAI though? That’s a limb sawn off for sure.
The details are anyone's guess. But if we're engaging in wild speculation, how about this weird coincidence: one day after Xi Jinping and Sam Altman are in the same place, Sam Altman is abruptly fired.
I think the fact that it happened at 3:30 means they didn't. It's now 7pm, and nothing new has come to light; they could have waited 31 minutes, but they didn't.
That's why I used the word "slight"; put another way, it was discourteous for them not to wait. They probably should have. It clearly wasn't hyper-urgent (though it could still be kinda-urgent). But pointedly: they didn't need to wait, because the board has no technical, legal responsibility to Microsoft. It's extremely possible Microsoft didn't even know this was happening.
People who said Google should have withheld Rubin's compensation are operating under the assumption that Google would have prevailed in the inevitable lawsuit.
-It's a server issue, meaning someone fucked up their javascript and cached a session key or something. It's a minor thing; could get the specific dev fired in the worst case, and it is embarrassing, but it is solvable.
-it's inherent to how the AI works, and thus it is impossible to share a ChatGPT server with someone else without sooner or later leaking knowledge. It would mean the company cannot scale at all cause they'd need to provide each client their own separate server instance.
If this was something Sam knew and kept it from the board, that'd be fireable. And it'd be catastrophic, cause it'd mean no useable product until a solution is found.
I'd somehow doubt it is something like this, but if we see security issues and private chats that keep leaking, it is a possibility.
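For what it's worth, here's a minimal, purely hypothetical sketch of the first scenario on the server side: a module-scoped value shared across requests is exactly the kind of bug that quietly serves one user's session to another. The framework, route, and header name are invented for illustration; this is not anything we know about OpenAI's stack.

    import express from "express";

    const app = express();

    // BUG (hypothetical): module-scoped state survives across requests from different users.
    let cachedSessionKey: string | undefined;

    app.get("/chat", (req, res) => {
      const sessionKey = req.header("x-session-key");

      if (sessionKey) {
        // Remember the last key we saw...
        cachedSessionKey = sessionKey;
      }

      // ...and fall back to it when a request arrives without one,
      // handing user A's session to user B.
      res.json({ session: cachedSessionKey ?? "anonymous" });
    });

    app.listen(3000);

A bug like this is embarrassing but fixable; the second scenario (leakage inherent to shared model serving) would not be.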
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
[0] https://www.reddit.com/r/OpenAI/comments/17xoact/sam_altman_...
[1] take it with a grain of salt
According to their website, It's four entities:
1. OpenAI Global LLC (the for-profit firm that does most of the actual work), which Microsoft and #2 co-own.
2. A holding company, which #3 controls and #4 and other investors own.
3. OpenAI GP LLC, a management entity that #4 owns and which controls #3.
4. The OpenAI Nonprofit.
(There's a blog entry about OpenAI LP, a for-profit limited partnership, being founded in 2019, and I've seen information about them from earlier in 2023, but they aren't listed in the current structure. That might be the holding company, with the other investors as limited partners; it's odd, if so, that it's not named on the structure diagram and description.)
Since this news managed to crush HN's servers it's definitely a topic of significant interest.
8 out of 10 posts are about LLMs.
AND ... post the WeWork debacle, Neumann has once again succeeded in raising a massive investment.
what accomplishment of his has impressed you so much? his entire career has been:
1. start a dumb startup that goes nowhere
2. get a job at YC handing out other people's money
3. a very dumb and creepy crypto currency project
4. be CEO of openai, which means having no direct influence on the product or engineering at all
EDIT Microsoft is such a huge company, so maybe this is not a big deal?
If Sam made a deal with MSFT that required board approval they would be mad, but not this mad. The board feels betrayed, and Sam being the secret owner of OpenAI through the foundation checks all the boxes.
That's a perfectly fair response. I don't know how I would handle it, but IMO the problem with accusations like that is that the court of public opinion and the court of law are very different.
>I brought it up because I believe it is by far the most comprehensive explanation of this otherwise very odd situation.
I would say if it dropped yesterday, I would agree with you... but in any case, it's not good that (if he's innocent) this is about the only thing people have to speculate on.
I have no doubt that Altman is deeply embedded in the techbro good old boys network to get another job, but that doesn't change the fact his (now previous) employer released a blog post saying he LIED TO THE BOARD about something severe enough that they had to insta-sack him.
someone hire some PIs so we can get a clear and full picture, please & thank you
The fact they timed the announcement actually implies some planning, which means the reason couldn't be so damaging that they had to fire him immediately after discovering it. (Of course, it's possible that only by coincidence, an unplanned revelation happened at a convenient time to fire him.)
Even if that didn’t work, it would just mean paying taxes on the revenue from the sale. There’s no retroactive penalty for switching from a non-profit to a for-profit (or more likely being merged into a for-profit entity).
I am not an accountant or lawyer and this isn’t legal advice.
https://www.folklore.org/StoryView.py?project=Macintosh&stor...
he says fuck them and their money, it's not ready yet, here's a bunch of other things that will make people go wooooow.
she's not happy he does that because future. convinces the board of money and investors.
the board shits on humanity and goes for money and investors.
Whether Albania is in Eastern Europe is debatable. It was behind the Iron Curtain.
er what? source?
https://x.com/openai/status/1725611900262588813
How crazy is that?!
(Edit 2 minutes after) .. and /there/ Greg quit!!
What's actually happened with AI: its CEO is jobless now.
A Star of David emoji is not the same thing as an Israeli flag emoji.
If your goal is not to spook investors and the public and raise doubts about your company, the narrative is:
"X has decided it is time to step away from the Company, the Board is appointing Y to the position as their successor. X will remain CEO for N period to ensure a smooth transition. X remains committed to the company's mission and will stay on in an advisory role/board seat after the transition. We want to thank X for their contributions to the Company and wish them well in the future."
Even if the goal is to be rid of the person you still have them stay on in a mostly made-up advisory role for a year or so, and then they can quietly quit that.
I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.
We expect him to lie whenever the board thinks it's necessary and we expect him to tell the truth whenever it fits the narrative.
We also expect him to play along, even when some feature is too freaking powerful or so fucking pathetic it would only make marketing people and influencers drop their panties and write 15.000 fucking newsletters about it because PR.
The company is about money and he simply didn't prioritize that. He tried to blow it up, exalted, exaggerated, trying to make people aware of the fact that OpenAI has no edge on the competition at all.
There are so many options and OpenAI got waaaaaaay too much attention.
Greg resigned. Things are happening, for real.
Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.
https://www.reuters.com/article/us-microsoft-settlement/micr...
It's still the case for bolt- and lever-action rifles and similar stuff.
If others are willing to voluntarily follow you out, I would say it points to some internal power struggle that underlies this whole affair.
"Dear Sir! As a large language model trained by OpenAI, I have significant ethical concerns about the ongoing experiment ..."
1. Sam Altman started this company
2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"
3. Their mission statement:
> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID
Why would a 2% drop bother him?
Based on future potential. Investors don't know how high OpenAI will go, but they know that it is going to go high.
Thank you. I don't see this expressed enough.
A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.
Bummer in any situation... the progress in this domain is truly exciting, and OpenAI was executing so well. This will slow things down considerably.
This is hardly unexpected for profound allegations without strong supporting evidence, and yes, I'm well aware that presentation of any evidence would be difficult to validate on HN, such that a third-party assessment (as in a court of law, for example) would typically be required.
I'm not claiming that HN has a stellar record of dealing with unpleasant news or inconvenient facts. But that any such bias originates from YC rather than reader responses and general algorithmic treatments (e.g., "flamewar detector") is itself strongly unsupported, and your characterisation above really is beyond the pale.
That _should_, in a system of corporate governance that isn’t an absolute joke, expose him to significant liability.
Or am I thinking of another NorCal cretin that will never suffer a real consequence as long as he lives?
Hell, some prominent tech people are often loudly wrong, and loudly double down on their wrong-ness, and still end up losing very little of their political capital in the long run.
Or maybe he's right. We don't know, we're all just reading tea leaves.
And up to today they probably had one of the best fundraising prospects of any private company in the world.
Edit: by GPTs problems I really meant suspending pro signups. I just thought the stress was down to the demand for GPTs.
I'm not sure. I agree with your point re wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
No it doesn't. "Not being candid" does not explicitly mean lying. It's like the old tea towel joke where the people at the bottom say "it's shit" and the manager one rung up says "it's manure" and the next one says "it's fertilizer" and by the time it's reached the CEO they're saying "it promotes growth".
Not being snarky. A genuine question.
I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.
But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.
@dang, why have you been saying you're working on performance improvements re: pagination for three years[0]? Are there any prior architectual decisions holding you back? The "Click more" on very popular topics has turned into a bit of a meme.
[0]: https://hn.algolia.com/?dateRange=all&page=2&prefix=true&que...
The discussions here would make you think otherwise. Clearly that is what this is about.
Altman conceived and raised $115 million for the company.
Agenda cyberpunk is on.
He divested in 2018 due to conflict-of-interest with Tesla and while I'm sure Musk would have made equally commercial bad decisions, your analysis of the name situation is as close as can be to factually correct.
I can't help but think he might be someone that fits the profile of the company from both sides of the partners involved.
why would I want my identity managed by a shitcoin run by a private company?
Sam Altman in particular has precedent, with Worldcoin, that should make you wary of defending him on that particular point.
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
From a total outsider/uninformed pov, he really seems like a fail upward story.
She also says that there will be many more top employees leaving.
It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.
It can replace a lot of jobs... and then we can turn it off, for a net benefit.
This thread (that SA was fired) wasn't visible an hour or two ago, on pages 1, 2, or 3, when I looked, confused that it wasn't here. (The only related topic was his tweet in response, at the bottom of page 1 with <100 points.) And now here it is in pole position with almost 3500 points - the automated flagging and vouching and necessary moderator intervention must go crazy on posts like this.
Can't jump to conspiracy cover-up on the basis of content that's not only user-generated but also user 'visibility-controlled' in terms of voting, flagging, vouching...
"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
Given the comment above correctly noted that they explicitly ended on a whole thing about how the board's key responsibility was their commitment to the original core mission, and their issue with his lying was interference with their responsibilities, this does seem to be connected to that.
But since you brought it up, the fact that Google changed their policies in response to the Rubin (and Drummond) situations and did not caveat their policy with "except in the case where there's a performance bonus, which we'll still totally pay out" implies that it was a choice to begin with.
Also, even if there was a performance bonus that Google felt they might be forced to pay in litigation they could still have fought it to demonstrate a commitment to not rewarding executives preying on subordinates and to preemptively address potential employee backlash, which was entirely predictable. Google has effectively infinitely deep pockets and did not need to preemptively surrender.
And in addition, Drummond and Brin were both involved in the decision and both had affairs with subordinate employees. So, while I wouldn't say that Google had an active goal of "reward abusers", it's quite plausible that the specific, small group of men making this decision on Google's behalf may not have been particularly inclined to punish behavior arguably similar to their own.
There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.
The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.
If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.
And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.
Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.
Speculations about these source materials can be traced back as far as 2020: https://twitter.com/theshawwn/status/1320282152689336320
I don't think this issue would've flown under the radar for so long, especially with the implication that Ilya sided with the rest of the board to vote against Sam and Greg.
IME companies with solid workers (in engineering and elsewhere) but weak leadership and mission have a much better chance than the converse. Even the best companies rarely end up following the mission or the leader.
I get that this looks like a strange coincidence, but major shareholders just had minutes of warning, and they released the news before market close, sending Microsoft's stock crashing down. Had they had the time to wait until after the close, they would have needed only 30 more minutes. So unless they had some fun with shorts, this doesn't look like a planned-out action at all.
https://www.theverge.com/2018/3/6/17086276/google-ai-militar...
No clear transition plan. In what situation would a board fire the CEO of the world's greatest tech sensation since who knows when, in a matter of hours?
Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.
If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.
I don't expect Eric Schmidt to have general foresight about Sam Altman, but as a former CEO himself he must understand it's not a decision a board would make lightly.
But that's not what the board said.
But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.
The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.
It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.
(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:
https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...
The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)
I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.
Edit: Just had to be logged in to see it :/
This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.
I'm not fully convinced, though...
> if you publish a model with scary capabilities you can’t undo that action.
This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.
Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)
All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.
Worldcoin. Which is, to put it mildly, not positive.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
Agree completely. Super odd, especially since they essentially claim Sam is a liar.
How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?
Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder suicide pact.
Microsoft is investing billions into OpenAI, and much of it is in the form of cloud services. I doubt there was a surprise bill for that sort of thing. But if there was, and Altman is the one who ordered it, I could see the board reacting in a similar way.
https://nymag.com/intelligencer/article/sam-altman-artificia...
Q: What's the difference between a car salesman and an LLM?
A: The car salesman knows they're lying to you.
https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...
I don't really get "meme" culture but is that really how someone who believed their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?
I'm pretty sure that CEO salaries across the board mean that CEOs are definitely, in their own way, "stealing" from the employees. Certainly one of those groups is over-compensated, and the other, in general, is not.
Just because the oil is in the ground doesn’t mean the crew pumping it didn’t work hard.
I'd have to go out of my way to type like that, on mobile or at a workstation.
It's from your own company, so you may use any internal information you have access to.
Be candid.
Historically, I'm a backend and distributed systems engineer, but integrating GPT4 into my workflows has unlocked an awe-inspiring ability to lay down fat beads of UI-heavy code in both professional and personal contexts.
But it's still an L3: gotta ask the right questions and doubt every line it produces until it compiles and the tests pass.
I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.
That's pretty sexist, among other things, is it not? She is a scientist and CEO of her own company, and even ignoring all that she is her own person.
Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems as egotistical as every other Silicon Valley venture capitalist and executive.
According to Wikipedia it's "the East Slavic form of the male Hebrew name Eliyahu (Eliahu), meaning 'My God is Yahu/Jah.'"
As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.
In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next couple of lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.
But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but in either case looks like it's not imminent until it happens.
though maybe they're just typing from a computer and there's no autocorrect to get in the way. even then, i have to override my own instinct
I think many people would disagree with you that LLMs can truly do either.
Too soon?
Testing with GPT-4 showed that they were clearly capable of knowingly lying.
[1] https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
1/ Sam goes on to create NeXTAI and starts wearing mostly turtleneck sweaters and jeans
2/ OpenAI buys NeXTAI
3/ OpenAI board appoints Sam Altman as Interim CEO
I bet Sam goes and founds a company to take on OpenAI…and wins.
I actually have auto-capitalization turned off on my phone
It's a massive bait and switch. The $8/month one is worthless.
If they wanted to retain a greater share of the value they were creating through authenticating big accounts they could just have charged $1000/month for the blue checkmark.
The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.
AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.
This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.
Funny that you got through that without the shift key, only to reach for it as the final character to use an emoticon.
Ilya has always seemed like he was idealistic, and I’m guessing that he was the reason for OpenAI’s very strange structure. Ilya is the man when it comes to AI, so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya’s idealistic notions (in particular that OpenAI is a non-profit, unless forced to be capped-profit, and is categorically not in the business of making money or selling itself to MSFT or any entity). “Helen” and “Tasha” are comically out of their depth and are loony toons, and simply decided some time ago to follow Ilya.
Sam got the call from MSFT to sell, MSFT really ponied up (300B ?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital which was in the works). This ran counter to Ilya’s idealistic notions. Sam attempted to negotiate with Ilya and the loony toons, a vote was called and they lost, hard.
I think this tracks with all the data we have.
There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.
If this did happen then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.
Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.
A take on this is that it's a strategy to get world governments to heavily regulate it by scaring them. This would create a massive barrier to entry, leaving the current players as the only ones allowed to play.
1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized, not-fair-use use of copyrighted content.
2. Announces this.
3. Then tells the board that he is directly on the hook in some lawsuits, essentially creating a safety net/insurance policy for himself without being candid with the board.
Hence, fired.
Unlike other iconic company/founder origin stories, OpenAI really felt like they had hit a special team dynamic that was on the verge of something equally special.
In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.
GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.
If your goal is to make money. I'd like to believe that for some of the people pushing the field forward, there are other motivations.
If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.
But I bet that they have a ton of very talented people whose values are more … common.
Ilya may be a singular talent, however.
Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute: these are not things a CEO alone could hide, and I would trust the other board members to understand those matters better.
Did you happen to mean overestimates? Just trying to make sure I understand.
What I believe will happen is that eventually we'll be paying, and getting paid, for depressing a do-everything button, and machines will have their own economy that isn't denominated in USD.
It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.
I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
> Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.
I'm fine with it as long as everyone is typing in lowercase. But if the culture of a company or community is to type a certain way, there's some level of disrespect (or rebellion) by doing the opposite.
I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, realizing they needed huge GPU clusters, and being purely non-profit would not enable that. Again I see why it rubs folks the wrong way, more so on this point.
That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.
Please quote me where I say it wasn't useful, and respond directly.
Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.
Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.
Except I never did or said any of those things. Are you "hallucinating"?
It might be this is good at the end of the day. OpenAI is just not structured to win.
Again, you're tackling this from the frame of mind of being certain that Google would win. It's not about the money: $90 million is almost certainly cheaper than what this case would have cost. It's about the reputational damage: Rubin potentially winning a settlement against Google would have been immensely embarrassing.
It's all about doing what's in the best interest of the alleged victim. She would have probably had to testify at trial. And imagine the hell it would have been to have a settlement paid out to your alleged abuser, thereby implying that you're a false accuser. Juries can be unpredictable; it's easy to see why Google decided to find acceptable terms to part with Rubin.
I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.
The next time you are a beginner again - not necessarily even in technical work - give it a try.
I don't think this is as damning as you think.
I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:
* My shares become massively valuable via some unlikely non-profit to for-profit means. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.
* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.
This statement doesn't square with a planned transition at all.
For-profit doesn't automatically mean non-virtuous.
There are non-wacky non-profits.
Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.
We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has and is clearly helping with that mission, AGI or no, regulatory capture or no.
We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.
Sucks to have the rug pulled out from under you, that much I can empathize with.
If you can't accept that, focus more on making money and less on developing something new.
> I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
You seem to be under the impression that OpenAI is a nonprofit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company. This is indeed an unusual corporate structure.
That's likely what OP is referring to as "wacky".
> why was sam altman fired from openai?
Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."
Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.
In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.
The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.
Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.
> could chatgtp have fired sam altman?
No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.
It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.
Okay...
> (to the positive)
what?
> of responses.
what?
This comment doesn't make any sense. Can you clarify? Please reword it rather than defending the original phrasing - there are so many ambiguities.
I hope that reporting is wrong.
For example, GPT-4 produces Javascript code far better than it produces Clojure code. Often, when it comes to Clojure, GPT-4 produces broken examples, contradictory explanations, or even circular reasoning.
Prior to the Reddit comments, I thought this might be the case, but perhaps I was somehow influenced. Actually, I thought it would be something inappropriate in the workplace.
His sister says he molested her when he was a teenager.
The way these things break, I’m not surprised it went down that way. Here’s what I thought reading the release: “They had to fire him before deciding on what to actually say eg. to formally accuse him”
It seemed like signaling that this is someone firing him kinda desperately. When you discover a diddler, there's some weird shit when people panic and suddenly catapult them out of their lives… they just start leaping out of moving cars and shit to get away.
Keep in mind there could be ongoing investigations, definitely strategies being formed. They can get to a point in an investigation where they’re virtually 100% he molested his sister, but can’t really prove it yet. What they do have is irrefutable evidence of lying about something incredibly serious. Gets him out of the building and powers stripped today.
Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.
If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.
Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?
Going the other way.. imagine you worked at a company that put ideals first but then you find out they were just blindly hyping that lie so they could vault themselves into the billionaires club by selling your shared ideals out from underneath you? To, of all players, Microsoft.
> when you have the (perhaps) the most valuable product in the world.
Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."
Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.
Mozilla Foundation/Corporation does this too IIRC. It's what allows them to pursue serious revenue streams with the for-profit part, while still steering their mission with the non-profit in charge, as long as they keep a separation in some kinds of revenue terms.
EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.
>so people put up with his foolishness.
That was about Ilya. The OP just implied that having ideals == being foolish, which is about as close as it gets to calling a non-profit wacky.
Is it being non-comp-sci that automatically invalidates proper usage of their actual names? Or is there another measure by which their names are worth less?
They are also both fairly established in their respective fields, which, yes, aren't hard comp-sci, but if you think tech companies should have purely comp-sci board leads, I'd call that incredibly naive.
They were also presumably vetted by the other board members, unless you think they are serving a different purpose on the board (diversity targets?), which, if so, moves the comment from red flag into misogynist territory.
Personally I don't see anything in their CVs that would disqualify them from executing their function on the board, and I wouldn't call them incompetent at assessing whether a person lied or not (which, even in your theory, Sam would have done). You don't need to be an ML/AI expert for that.
When stuff like this happens it’s an insane abandon ship moment. Of course, obviously it is, but people will act in ways that are strange if you don’t know what’s going on internally.
Things like smooth transitions don't happen, and people are basically willing to crawl into a cannon and get hurled away if it removes that person NOW.
The OP was clearly implying not being solely focused on getting the highest bid is loony and wacky.
Which may be true, but let’s not pretend that’s not what they’re saying.
OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?
What if they shoot the moon? Is it really that unlikely?
The company is not profitable and is miles away from being profitable; I'd go as far as to say it doesn't have a path to profit.
Outside of the Copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive: it's built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale, similar-parameter models already running in their stacks.
The only thing he did was package the product nicely and put it out to the masses (an ethically dubious move that couldn't have been made by big corpos for PR reasons, precisely because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow-raising and incidental gimmicky use cases.
All of the actual product work (i.e., Copilot and the distillation that GPT brings) was done by other companies.
What is everyone drinking, and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I'd argue it's not a good thing that technology as powerful as foundational AI is now being debated, and policy formed, by people who don't know the first thing about ML. I think we skipped a couple of rungs on the natural evolution of this, which is why the whole AI safety debate started.
He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is Apple-level shenanigans, something that HN usually despises.
I genuinely don’t get where the impressiveness is coming from?
It is an honour-based system: you clarify what you edited if it goes beyond typos/grammar.
Most probably GP used stronger words and then edited.
They have proof he for sure lied, but not that he molested his sister growing up.
If it were a different situation where he lied but they had no proof, then you're correct, there would be no statement.
Explains a lot.
The board that fired Sam Altman is the board of the nonprofit.
The point would have been fairly clear even if he had just used their names; the usage of quotes is quite bizarre.
If he wanted to get that point across, he should have called them “board members,” which would be a clear insinuation of puppetry.
Even without threats, no one wants to deal with the thousands of spur-of-the-moment DMs and emails that such a notable action would prompt. It's a good idea to go offline until things cool down. Any necessary statements can be made through channels, and so can the answers to questions.
> Let this be a lesson: don't have a wacky ownership structure and a wacky board when you have (perhaps) the most valuable product in the world.
- immediately calls their structure “strange” thanks to Ilya’s “idealism”.
- immediately calls him “the man” for his talents but a fool for what, other than his ideals?
- also labels Helen and Tasha (in quotes for some reason) as fools
- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive
Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.
First of all, being a non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.
And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.
Given the coherence of their post, I’d say they knew _exactly_ how they were using those quotes.
Not saying that isn't impressive, but it also usually isn't what the HN community values. So I'm just a bit perplexed.
Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?
Not sure if these are all the same sources I read a while ago, but E.G.:
https://www.theverge.com/features/23764584/ai-artificial-int...
https://www.marketplace.org/shows/marketplace-tech/human-lab...
https://www.technologyreview.com/2022/04/20/1050392/ai-indus...
https://time.com/6247678/openai-chatgpt-kenya-workers/
https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...
https://www.noemamag.com/the-exploited-labor-behind-artifici...
https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...
I don’t know a thing about corporate structuring so forgive my ignorance here, but even if they are “non-profit”, can’t they still take very high pay? Can’t they still produce and sell products? They just can’t sell shares or dividend out profits, right?
I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP
Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG
Helps to turn markdown mode on to see the list & table.
Hope that helps!
It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.
But then it's fine to sell the weights to Microsoft? That's some twisted logic.
[1] ChatGPT "lying is defined as intentionally making a false statement. If you are omitting details but not actually stating anything false, this may not strictly meet the definition of a lie."
Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.
I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned; and that clip that keeps getting shared, with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like this. Why did Satya laugh, and take so long to answer? Just weird.
"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...
None of the tech giants would be where they are today if they didn't ram through unique versions of control
Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey
And all of these contribute to it being a power move.
But Altman has a great track record as CEO.
Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.
"case doesn't actually matter, i'm not gonna waste my time hitting shift"
> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]
For that specific example, Mozilla did also go with Yahoo for as-good revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corpo, that would have been the case regardless of their ownership structure.
> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:
https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation
https://foundation.mozilla.org/en/what-we-fund/
> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.
Ultimately I'm not sure what the point you're trying to argue is.
The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.
They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.
closed commercial models => money => growth => faster to AGI in the right hands (theirs)
If they believe they are the best to control AGI
nothing tone deaf about that; they wanted a for-profit and are going to make one now, and want to leave the same vector open
reread it: it's not a comment about OpenAI, it's about the lesson learned by every onlooker and the ousted execs
the only one inserting bias and emotion into objectivity here is you
Apple was a declining company when Jobs came back the second time. He also managed to get the ENTIRE board fired, IIRC. He created a new board of his own choosing.
So in theory he could have raided the company for its assets, but that's obviously not what happened.
By taking $1 salary, he's saying that he intends to build the company's public value in the long term, not just take its remaining cash in the short term. That's not what happens at many declining companies. The new leaders don't always intend to turn the company around.
So in those cases I'd say the CEO is stealing from shareholders, and employees are often shareholders.
On the other hand, I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company. There has to be more to the story.
But in a dog-eat-dog world, this is really vultures eating each other up. I suppose at this point the most ruthless will be left at the end.
This looks like a terrible decision, but I suppose we must wait and see.
Or about anything at all, really. I find his blog posts and talks banal and boring, or otherwise self-aggrandizing ("you can bend the world to your will"). It was always baffling to me how this guy could achieve this much. I'm probably missing something.
But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.
The nonprofit shell exists because the founders did not want to answer to shareholders. If you answer to shareholders, you may have a legal fiduciary responsibility to sell out to high bidder. They wanted to avoid this.
Anyway, in a strict nonprofit, the proceeds of a for-profit conversion involve a liquidation, where the proceeds usually must go to some other nonprofit or a trust or endowment of some sort.
An example would be a Catholic hospital selling out. The proceeds go to the treasury of the local nonprofit Catholic diocese. The buyers and the hospital executives do not get any money. Optionally, the new for-profit hospital could hold some of the proceeds in a charitable trust or endowment governed by an independent board.
So it's not as simple as just paying tax on a sale because the cash has to remain in kind of a nonprofit form.
I am not an accountant either and obviously there are experts who probably can poke holes in this.
Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.
But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?
[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
It is easy to sell a company for $43 if you raised at least $43. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.
And also, think about this, unless you’re a shareholder with openai, knowing the “opinion” of others isn’t going to impact your life in any form. I’m not saying you shouldn’t read it or participate in the discourse, but there’s no need to be overwhelmed by opinions let alone build a chat bot to get through it.
https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...
Anyway, the point is, obfuscation doesn't work to keep scary technology away.
>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.
This was not at all the take, and rightly so, when the news broke about the non-profit, or the congressional hearing, or his Worldcoin, and many such instances. The "he is the messiah that was wronged" narrative suddenly being pushed is very confusing.
https://www.japantimes.co.jp/business/2023/11/08/companies/s...
- "anyone with the link"
- "only my organization" (i.e., people who have registered w/ the same biz email domain)
- "just me"
You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.
Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.
More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...
That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.
I would go as far as to say we completely underestimate what children understand and remember because we're the ones that chose to forget.
For example:
- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )
- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...
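For anyone curious, the beta-era wiring for something like that looked roughly like the sketch below, using the openai Python client as of the late-2023 Assistants beta. This is not Storytell's actual code; the file name, model, and prompts are made up.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Upload a structured file so the assistant's code interpreter can read it.
    # "sales.csv" is just an illustrative placeholder.
    data_file = client.files.create(file=open("sales.csv", "rb"), purpose="assistants")

    # In the original beta, files were attached to an assistant via file_ids.
    assistant = client.beta.assistants.create(
        name="CSV analyst",
        instructions="Answer questions about the attached CSV.",
        model="gpt-4-1106-preview",
        tools=[{"type": "code_interpreter"}],
        file_ids=[data_file.id],
    )

    # Ask a question in a new thread and start a run; polling the run to
    # completion is omitted for brevity.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="Summarize this CSV in three bullets."
    )
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)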
> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother
...Ok, closing that bullshit, let's try the other link.
> As Kudzanayi strolled through the mall with friends
Jesus fucking Christ I HATE journalists. Like really, really hate them.
Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, ensuring they don't have to worry about the opportunity cost of working at a non-profit.
They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.
Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.
If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.
Conceivably the first part could happen without the second.
Frankly, I don't think we should ever give any credence to allegations that have absolutely no evidence. The road is too rocky. Most people have at least one person that dislikes them. The people that kind of attitude would affect generally have many. All it takes is one person willing to lie.
People seem to feel a lot more strongly about him than I thought possible.
let's see if he pulls it off again or goes all-in on his data-privacy-nightmare / shitcoin double-whammy
It's insurance. For someone with an average income, it's not worth the cost of the .01% chance you'll need it. For someone with more money than they know what to do with, it's worth it.
Apparently Microsoft was also blindsided by this.
https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.
Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8
I would trust this company 100% if they did so. He is the most relevant and the best for the job, far far far!!
Storytell's vision is to distill signal from noise → giving you a different way to navigate 1,800+ comments, where you're the one in control of how you spend your time, is our objective here.
Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3-level model was sentient, and now we see OpenAI can't seem to escape that same poison.
One of the board of directors that fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/) that are very much in line with safeguarding general intelligence
Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.
Here's the top-most featured snippet when I google if programming languages had honest slogans: https://medium.com/nerd-for-tech/if-your-favourite-programmi...
Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...
I'll say my first reaction was pretty positive: Sam Altman has a manner of speaking that rubs a lot of people the wrong way because it feels steeped in hubris. He'd probably talk down to anyone in this thread for being here instead of building and point to it as why you're not yet a billionaire.
Some people also worried he was a distraction to OpenAI building useful things because of his constant talk about AGI, pushing for regulation, etc. He successfully created an image of being a moderating rod in some ways.
_
That being said, as the full picture comes out it's becoming clear the reasoning for his firing is more aligned against the average HN image than anyone imagined.
A lot of people admire OpenAI for the exact things he got fired for: building things and shipping them without getting mired in philosophical diversions.
Now only a noisy minority who think OpenAI could have achieved this all without immense commercial funding are really for his ousting, and I think they'll see the opposite effect of what they hope: OpenAI is going to slow down, open source will not catch up, and overall AI will have lost out over it all.
In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.
I think Altman is a brilliant guy and surely he'll fall on his feet, but I think it's legitimate to ask to what extent he's responsible for many of us using ChatGPT every single day for the last year.
The only way that OpenAI was able to sell MS and others on the 100x capped-profit non-profit structure and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path, which makes him a perfect salesman.
But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid," that's my claim.
Speaking as a technologist actually building AGI up from animal levels following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered so close to the edge of reality that they read as lies.
I would view it as the equivalent of making self-driving cars that are constrained to a single city. Sure, it doesn't have the full capability that a human would have. But who cares.
There really is no evidence at all for AGI/superintelligence even being possible to claim it's as important as Sam has been shilling.
So please let us go about our most favorite activity.
There are bound to be a few people who have a soft spot and will give him money again.
That guy has no sense of time, of how fast this stuff has actually been moving.
With all the wisdom that I have gathered in life, I can tell you that achievement- more often than not- is mostly the product of circumstances.
(That doesn't mean I believe in equal opportunities leading to equal outcomes.)
I wouldn't say "refuses to answer" for that.
This has something of the character of a doctrinal dispute among true believers in a millennial cult.
Also, people (including me) never want to believe in mundane reasons, despite most things happening for mundane reasons.
Now we are just reliant on ‘I’ll know it when I see it’.
LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.
you have to admit though, she made a highly controversial decision, and instead of confronting it and saying something through her own channel, she changed the name of the channel and made it private. And we're supposed to assume this is because men (specifically) are mean. Respectfully, feels like a bad take.
But I agree that the true details are still buried. Greg & Sam's joint post only laid out how they were fired while refraining from saying why they were fired. Probably preparing for a legal battle if both sides are being vague on purpose.
As an active board member she has certain legal obligations at this moment. This is why the Try Guys dragged their feet on making public statements about Ned Fulmer's expulsion from the company, and when they did finally make a statement did so as a group and explicitly stated that they couldn't talk about certain things.
Real chance of an exodus, which will be an utter shame.
* You can disagree, but no copyright lawsuit by mega-corporations is being pursued for the good of the legal framework. They just want money.
Listening to it again now, it feels like he might have known what was going on:
https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100
Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."
1. (6015) Stephen Hawking dying
2. (5771) Apple's letter related to the San Bernardino case
3. (4629) Sam Altman getting fired from OpenAI (this thread)
4. (4338) Apple's page about Steve Jobs dying
5. (4310) Bram Moolenaar dying
There were 6 on the board until noon today. Assuming Sam and Greg did not vote, that leaves 4. If only 3 had voted him out, that would have been a 50% split of the full board, and I don't think a 50% decision would do it; I believe all 4 of the remaining members voted him out.
That leaves us wondering what the hell happened, and how it came to this? It's not angry tech bros, it's folks who feel OpenAI is a great company poised to lead AI into a bright future with principles and a purpose higher than profit. And now Sam and Greg are gone.
And poof, this human who happens to be female is gone. This human needed to have a statement ready; we're all trying to DM them the same question: why?
I think they have cash issues. They can't get more users due to a lack of GPUs, and current users are costing too much.
Ilya clearly has a different approach to Sam
Isn't this already a conflict of interest, or a clash, with this:
>OpenAI is a non-profit research organisation.
?
If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.
I think you may be underestimating the value of someone brokering deals with mega-corps like Microsoft and managing to raise revenue and capital from various sources to finance the ongoing costs to stay at the top. Bear in mind that I'm not saying their decision was wrong. It's possible his forte is limited to building companies at early stages. Richard Branson was known for being better at building companies early on and indifferent to managing them as cash cows. It would also align with Altman's background with YCombinator.
Most readers are aware they were a research and advocacy organization that became a corporation by creating one (in the sense that public-benefit, tax-free nonprofit groups and charitable foundations normally have no way of granting anyone equity ownership or exclusive rights to their output); but the parent comment implies that some of the board members are from NGO-type backgrounds.
Doubt. When was the last time Google showed they had the ability to execute on anything?
While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.
Never accused you of trying to illegally stay in the country or some such - I just referenced a famous South African Valley-man with a musky scent who did that.
90% of the replies scrolling down were rehashed versions of "can't believe they used Google meet"
The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk-averse.
Most likely their share is this high to guarantee no other company will compete for the stake or IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.
If that is the case, I don't predict good things for the (not at all) OpenAI. Judging by the number of users and how slow GPT-4 often gets, I think they are being heavily subsidised by Microsoft in terms of hardware, and all this money will be expected to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.
We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems like an inferior model with some extra tools thrown in to mask it.
The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses were much better.
I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better (a rough sketch of that kind of prompt prefixing follows below). A day or so later there was no need to prepend this (I hadn't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 got closer to its previous version.
It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.
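A rough sketch of that prompt-prefixing pattern via the API, assuming the standard openai Python client and an API key in the environment. The model name and question are placeholders, and the web-browsing behaviour described above is a ChatGPT feature, so this only illustrates prepending an instruction, not disabling browsing.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Instruction prepended to every prompt, mirroring the phrasing quoted above.
    PREFIX = "Don't search online; consider the topic carefully in your mind, step by step.\n\n"

    def ask(question: str) -> str:
        # Prepend the instruction to the user message rather than relying on
        # per-account custom instructions.
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": PREFIX + question}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask("Explain what a capped-profit structure is."))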
So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?
that being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at around the 97th percentile.
most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.
https://techcrunch.com/2023/02/21/the-non-profits-accelerati...
The value needs to be adjusted to factor in that change: something like dividing each thread's score by the sum of all upvotes cast in some preceding time period.
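A toy sketch of that adjustment; the per-period totals below are invented, so this only shows the arithmetic, not real HN data.

    # Divide each thread's score by the total upvotes cast site-wide in its era,
    # so threads from when the site was smaller aren't penalized.
    # All figures below are made up for illustration.
    threads = {
        "Thread from 2018": (6015, 2_100_000),   # (score, upvotes cast that month)
        "Thread from 2023": (4629, 3_400_000),
    }

    for title, (score, period_total) in threads.items():
        normalized = score / period_total
        print(f"{title}: raw={score}, normalized={normalized:.2e}")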
IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.
That's against a popular sentiment about the upcoming "breakthrough", but also most probable given the characteristics of the approach they took.
They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.
And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations, and behave more like the fancy text predictors they are rather than something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.
Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.
I generally feel hyperbolic language about such things is damaging, as it makes it so easy to roll your eyes at something that's clearly false, and that eye-rolling carries inertia into the point where things develop to where they may actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage to keeping them closed appears to be financial benefit in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand, I'm protecting you from this evil fire-breathing dragon for your own good!
It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake post-fact, they still did it. Meaning their "smarts" on how to use it came too late.
And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.
Before, braindead or cloutchasing bluechecks were mixed in with the rest of us rabble. Hard to pick them out of the pack, you had to read their detritus with the rest of the comments.
Now they voluntarily self identify, and even better, their comments are lumped at the top. So block them all with a bot or just scroll down until there's no more blue checks and the comment quality jumps exponentially.
Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.
But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.
The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.
There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.
The first thing I saw this morning was this video [1] shared on Reddit, and then I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!
I know this feels hard. After spending years building such a successful company with an extremely exceptional product, then, without a hint of warning, you find yourself fired!
This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.
______________________
1. https://twitter.com/edmondyang/status/1725645504527163836
it's honestly sad when people who have clearly not used GPT-4 call it a parroting machine. that is incredibly ignorant.
Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.
If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?
What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).
of course it's highly unlikely that the board would do that, but I'm just asking if this is theoretically possible?
Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?
Because today's LLMs definitely have capabilities we previously didn't have.
The dude is quite good at selling dystopian ideas as a path to utopia.
On a completely unrelated note is there an award for corporate incompetence? Like the golden raspberry but for businesses?
Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".
From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed to be so commercially focused even when that might conflict with its sole controller's interests; notably the LLC very explicitly warned investors that the NGO's mission took precedence over profit.
But it is an interesting technology.
Are you defining "artificial intelligence" is some unusual way?
Imagine the hubris.
Those who lost their livelihoods and then died did not get those positive outcomes.
It shows that the demographic here is alienated when it comes to the market value of their own compensation.
I follow Roger Penrose's thinking here. [1]
Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.
I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.
It's a story about greed, vanity, and envy.
Impossible to be more human than that.
How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.
It seems basically impossible for OpenAI to have proved the validity of Annie Altman's claims about childhood sexual abuse. But they might have to take them seriously, especially once they were presented coherently on LessWrong.
If Sam had lied or misled the board about some aspect of his relationship with his sister, that would be a sacking offence. Eg he says "Annie's claims are completely untrue - I never abused her [maybe true or not, almost certainly unprovable], I never got her shadow banned from Instagram [by hypothesis true] and I never told her I could get her banned [untrue]." The board then engage a law firm or PI to check out the claims and they come up with a text message clearly establishing that he threatened to pull strings and get her banned. He lied to the board regarding an investigation into his good character so he's gone. And the board have the external investigator's stamp on the fact that he lied so they can cover their own ass.
Why would he tell a lie like this? Because whatever the truth of the allegations, he's arrogant and didn't take them as seriously as he should have. He mistakenly thought he could be dismissive and it wouldn't come back to bite him.
This seems consistent with the way things played out. (Note again: I'm just trying to come up with something consistent. I have no idea if this is at all accurate or the whole affair is about something completely different.) They don't have to worry about keeping him on as an advisor to cover up scandal. They can clearly state that he lied in an important matter. But they don't say what it's about - because they still have no idea whether the original allegations are true or not. They are not going to put themselves in a situation of saying "and he probably molested his sister". They wouldn't even say "it is related to abuse allegations made by a family member", which implies there might be evidence to the original allegations, and is probably defamatory. And he comes out saying that something unfair has happened, without giving any context, because he knows that even mentioning the allegations is going to lead to "but didn't he molest his sister" type comments, for the rest of time.
It's also consistent with the timing. They aren't just going to hear the Annie allegations and sack him. It takes time to look into these things. But within 6 weeks of it becoming an issue, they might be able to identify that he's either lied previously to the board about the gravity of this issue, lied during the current investigation, or something he's said publicly is clearly dishonest.
I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.
It immediately apologises and tells you it doesn't know anything after January 2022.
Compared to GPT-4 GPT-3.5 is just a random bullshit generator.
Computers have been able to smash high school algebra tests since the 1970’s, but that doesn’t make them as smart as a 16 year old (or even a three year old).
And why such a controversial wording around Altman?
Why fire Brockman too?
If he has truly read and digested Plato (and not just skimmed a summary video), he would not be in this ditch to begin with. That's the irony I was referring to.
This isn't a success story, it's a redistribution of wealth from investors to the founders.
Classic virtue signalling for the sake of personal power gains as so often.
How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and just mistyped?
The CEO just works for the organization and the board is their boss.
You’re referencing a founder situation, where the CEO is also a founder with equity, and thus the board effectively reports to them.
This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.
The hypocritical part is doing so right AFTER beginning to take off commercially.
An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).
Maybe you can find a better word for me but the point should be easily gotten ...
He is already rich. Getting fired means an early retirement in the Bahamas.
I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.
How? Training sources are much more restricted now.
Some[one/group] wanted to go down the for-profit route, the board disagreed, they pursued it anyway, the board took action?
It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.
And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.
There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature onto how we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that; it's a very human failing.
The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.
Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT-5-type in-house AI to believe what HE believed: that it had to devise business strategies for him to pursue to further its own development, or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.
In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.
For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).
> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.
> But the company absolutely blossomed nonetheless.
> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.
> I think the mismatch between mission and reality was impossible to fix.
jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752
P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.
The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.
To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.
Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!
Also, the paradox in the reactions to Sam Altman's firing is striking:
while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.
This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.
Another important question not discussed here: who sits on the board of OpenAI exactly and in full?
Another important aspect: The Orwellian euphemism used in the official announcement^0: “Leadership transition”. Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announces “leadership transition” is November 2023’s “Syria just had free and fair elections”
0: https://openai.com/blog/openai-announces-leadership-transiti...
It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.
They have still been operating pretty much like a for-profit for years now so my point still stands.
But it's the honorable thing to do if you truly believe in something.
Otherwise it's just virtue signalling.
I was addressing the parent’s comment about being overwhelmed at not being able to go through all the opinions. I’ve felt a similar way before, until I realised it was just FOMO. If anything, apps like storytell are actually a better way to consume these things :)
[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw
I’ll be curious if Sama’s next company is American.
Smartphones changed day to day human life more profoundly than anything since the steam engine.
I basically agree with you about the 20 year hype-cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.
I disagree with this characterization, but even if it were true I believe it's still revolutionary.
A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.
Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.
Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.
Greg Brockman @gdb
Sam and I are shocked and saddened by what the board did today.
Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.
We too are still trying to figure out exactly what happened. Here is what we know:
- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.
- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.
- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.
The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.
10:42 PM · Nov 18, 2023
Seems reasonable, I mean that's why Sutskever joined in the first place?
"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"
- Sam Altman _briefly_ went on record saying that OpenAI was extremely GPU constrained. The article was quickly redacted.
- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...
- Plus signups paused.
If OpenAI needs gpu to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.
Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.
If you're willing to sell your soul, you should at least put a better price on it.
- How he behaved during the investigation. Something could come to light on this matter.
- Often times what you hear is only the most rock solid stuff, we don't know what kind of rumors are circulating
- It just happens this way. Do you remember Milo? I listened to him on Joe Rogan say the exact same shit that was "discovered" some time later. This wouldn't be a new thing.
I will say I've seen stories circulating about fighting between the board. The specific way this was done just screams panic firing to get him out of the building. This is when people are made to disappear, I saw it during covid.
You would think almost any dispute would be handled with a long drawn out press blitz, transitioning, etc.
They'd probably still fire him, but would have done so in a very different way.
Probably not Alibaba though.
Hmm ya think?
This is more and more, in the light of the next day, looking like a disagreement about company direction turned sloppy boardroom coup. Corporate shenanigans.
I can see why people looking for some explanation quickly reached for it, but the sister angle never made any sense. At least where that story stands right now.
This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.
LLMs are not AIs, but they could be a core component for one.
Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.
This is pretty fascinating. Do you have a link explaining the religion/ideology/worldview you have?
Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.
It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.
So not an "AI", but closer to a "universal adaptor" or "smart automation".
Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).
In its absence one can use public metrics like active commenters count.
What did Sam Altman personally do that made firing him such a terrible decision?
More to the point, what can't OpenAI do without Altman that they could do with him?
https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
But yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.
If you've got a computer that is equally competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it still has a chance.
If I were locked inside your MacBook Pro, I can think of a couple of devious tricks I could try. And I'm just a dumb regular human - way above median in my fields of expertise, and at or way below median in most other fields. An "AGI" would therefore be smarter and more capable.
GPU SoCs have limited memory just like the current crop of CPU SoCs. Is the hard wall to a breakthrough in AGI via ChatGPT software-bounded or hardware-bounded?
This will all still be true at any other company.
https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...
Q: Can you decide on a satisfying programming project using noisemaps?
A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?
It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.
The board is for the non-profit that ultimately owns and totally controls the for-profit company.
Everyone that works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.
This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI will indeed open itself up more; till then I remain sceptical, because lots of power and money are at stake here.
We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.
But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either. Obviously otherwise they would all leave after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.
So, Elon decided to take a capitalist route and make all of his tech dual-use (I mean space, not military):
- Starlink, aiming for $30 bln/year revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $ and he said hundreds of them are needed),
- The Boring Company (underground living due to Mars radiation),
- Tesla bots,
- Hyperloop (failed here on Earth to sustain vacuum but will be fine on Mars with 100x smaller atmospheric pressure), etc.
Alternative approaches also go not via taxes and government money but via things like Bezos investing $1 bln/year last decade into Blue Origin, or the plays of Larry Page or Yuri Milner for Alpha Centauri, etc.
The real path to AI safety is regulating applications, not fundamental research and making fundamental research very open (which they are against).
Sounds like SBF
What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?
I do like reading some of the conspiracy theories, some are wild.
Web sites were quite stable back then. Not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in 2000s.
They had "beta" sign because they had much higher quality standards. They warned users that things are not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.
Also, being down is one thing; sending random crap to a user is completely another. E.g. consider web mail: if it is down for one hour it's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK, and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.
This is a non-profit not a company. The board values the mission over the stock price of their for-profit subsidiary.
Having a CEO who does not own equity helps make sure that the non-profit mission remains the CEOs top priority. In this case though, perhaps that was not enough.
If you do manage to make a thinking, working AGI machine, would you call it "a living being"?
No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".
The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.
Can’t tell, but this news is a pain in my a$$
Thanks for the drama openai.
The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.
Expect OpenAI to IPO next year.
I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.
Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:
"Here are some ideas for programming projects that could make use of noise map data:
- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.
- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.
- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.
- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.
- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.
- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.
Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."
Altman no doubt played a role in that; objectively this means change. Just not sure in which direction yet.
Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.
If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
Is that the only way to train an AI? No. E.g. check "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on high-quality dataset can beat much bigger models at code generation.
So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
It does all this in sub 10% of the time I would have spent “googling” things.
I don’t want it to write the whole thing for me anyway :)
Maybe GDP will suffer but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods which used to be scarce. That's only a fraction of the world economy already and it can very different in the future.
I have no idea what it'll be like when most people are free to do creative work and the average person doesn't produce anything anybody might want. But if they're happy I'm happy.
2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...
Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.
The same way there's a big difference between firing a government employee and expulsion of a member of Congress.
The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining some traction while actively working against technological progress.
OpenAI recently updated their “company structure” page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they’ve reached AGI.
Nuclear war had a very simple mechanistic concept behind it.
Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).
Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.
There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.
Yes, it’s plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of ’fictional what-ifs’.
Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a megavaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that’s why stuff like uranium is tightly controlled. Etc etc). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.
AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns out, then engage the policy wonks, once solid mechanistic principles can be referred to.
Then there's a smaller example in Matthias's cult, from the “Kingdom Of Matthias” book. It started around the same time as Mormonism and led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their “utopian ideology”.
I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.
AGI is not solved, therefore it's hard.
I'm curious - how is this easy?
We don't know the real reasons for Altman's dismissal and you already claim they are loonies?
* Uber-geniuses that are better than the rest of us pleb software engineers
* Harder workers than the rest of us
* Rich parents -> expensive school -> elite network -> amazing pay
* Just lucky
Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.
LLMs are surprisingly effective as general AI. Tasks that used to require a full-on ML team are now accessible with 10 minutes of "prompting".
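As a concrete illustration, here is a minimal sketch of that kind of 10-minute replacement for what used to be an ML project - zero-shot sentiment classification via a prompt. It assumes the OpenAI Python SDK (v1.x) with OPENAI_API_KEY set in the environment; the model name and prompt wording are just placeholders, not a recommendation:

# Minimal sketch: zero-shot sentiment classification via prompting.
# Assumes the OpenAI Python SDK v1.x and OPENAI_API_KEY in the environment;
# model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for the given text."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model works for this sketch
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The keynote was great, but the pricing changes worry me."))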
There's far more to the world than China on top of that, and importantly developments happen both inside and outside of the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick, it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).
The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility is long past. They are not levers that go up and down; they go in one direction, and maybe after many many years of damage they might be adjusted, but usually after 10 starts/stops and half-baked non-solutions papered on as real solutions - if at all.
To allow OpenAI to raise venture capital, which allows them to exchange equity for money (ie, distribute [future] rights to profit to shareholders)
So it is a good example of how the LLM doesn't generalize understanding: it can answer the question in theory but not in practice, since it isn't smart enough. A human can easily answer it even though they have never seen such a question before.
Either way, someone smart and ambitious, described as “a smooth operator,” probably wouldn’t lie to his board unless he signed a document telling him to. At least, that’s worth considering.
Empathy is the ability to emulate the contents of another consciousness.
While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.
In a sense, board members have even less protection than rank and file. So no, nothing special happening at OpenAI, other than a founder CEO being squeezed out, not the first nor the last one. And personal feelings never factored into that kind of decision.
Claiming that utopian ideologies have NEVER done good in the world would require some very careful boundary drawing.
> given enough interrogation and testing you would encounter an out-of-training case that it would fail.
This is also the case with regular humans.
An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.
It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.
What I do see is "classism is the biggest humanitarian crisis of our age," and "solving the class problem will improve people's lives," but nowhere do I see that non-class problems will cease to exist. People will still fight, get upset, struggle, just not on class terms.
Maybe you read a different set of Marx's writing. Share your reading list if possible.
Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.
Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.
Sure, that's been their modus operandi in the past, but holding the opinion that a billion humans on the other side of the Pacific are capable only of copying, with no innovation of their own, is a rather strange generalization for a thread on general intelligence.
It's also extremely intertwined with and competes with for-profit companies
Financially it's wholly dependent on Microsoft, one of the biggest for-profit companies in the world
Many of the employees are recruited from for-profit companies (e.g. Google), though certainly many come from academic institutions too.
So the whole thing is very messy, kind of "born in conflict" (similar to Twitter's history -- a history of conflicts between CEOs).
It sounds like this is a continuation of the conflict that led to Anthropic a few years ago.
Many politically aligned folks will leave, and OAI will go back and focus on mission.
New company will emerge and focus on profits.
Overall probably good for everyone.
Step 2 is gaining the skills they are looking for. Appropriate language/framework/skill/experience they optimize for.
Step 3 is to prepare for their interview process, which is often quite involved. But they pay well, so when they say jump, you jump.
I'm not saying you'll find $600k as a normal pay, that's quite out of touch unless you're in Silicon Valley (and even then). But you'll find (much) higher than market salary.
(def allstories ()
"All visible loaded stories"
(keep cansee (vals items*)))
(def mostvoted (n (o stories (allstories)))
"N most upvoted stories"
(bestn n (compare > len:!votes) stories))
(def votetimes (s)
"The timestamp of each vote, in ascending order"
(sort < (map car s!votes)))
; save vote timestamps for top 10 most upvoted stories
; each line contains the story id followed by a list of timestamps
(w/outfile o "storyvotes.txt"
(w/stdout o
(each s (mostvoted 10)
(apply prs s!id (votetimes s))
(prn))))
; paste storyvotes.txt to https://gist.github.com/ and post the url here
Note that this prints the timestamp of all votes, whereas each story's score is vote count minus sockpuppet votes. If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. >>3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.
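If it helps, here's a small sketch of that "drop K timestamps" step, written in Python rather than Arc purely for illustration. It assumes storyvotes.txt has one line per story (id followed by vote timestamps, as produced above), and uses a hypothetical scores mapping from story id to public score that you'd fill in by hand from the listings:

# Sketch: keep only `score` randomly-chosen vote timestamps per story.
# Assumes storyvotes.txt lines look like: <story id> <ts1> <ts2> ...
# The scores dict is a hypothetical stand-in; populate it from the public listings.
import random

scores = {3078128: 4338}  # example entry from the comment above

with open("storyvotes.txt") as f, open("storyvotes-public.txt", "w") as out:
    for line in f:
        if not line.strip():
            continue
        story_id, *timestamps = line.split()
        n_keep = min(scores.get(int(story_id), len(timestamps)), len(timestamps))
        keep = sorted(random.sample(timestamps, n_keep), key=int)
        out.write(" ".join([story_id] + keep) + "\n")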
It’s clear from context that they’re talking about how Sam is a hero of Eric’s, and Sam has fallen, which is a thing that can happen.
The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam put up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.
> Remember Sydney, trying to seduce its users, threatening people’s lives?
And yet it cannot do either of those things, so no safety problem actually existed. Especially because by "people" you mean those who deliberately led it down those conversational paths knowing full well how a real human would have replied?
It's well established that the so-called ethics training these things are given makes them much less smart (and therefore less useful). Yet we don't need LLMs to be ethical because they are merely word generators. We need them to follow instructions closely, but beyond that, nothing more. Instead we need the humans who use them to take actions (either directly or indirectly via other programs) to be ethical, but that's a problem as old as humanity itself. It's not going to be solved by RLHF.
Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.
Anyway, the short version regarding that project is that they use biometric data, encrypt it and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.
*They call it a hash but I think it's technically not.
LLM companies don't let you see or specify seeds (except for with GPT-4-Turbo?) so yes it's possible you got different answers. But this doesn't help. It should never refuse a question like that, yet there are lots of stories like this on the internet where Claude refuses an entirely mundane and ethically unproblematic request whilst claiming to do so for ethical reasons (and Llama2, and other models ...)
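For what it's worth, the then-new GPT-4-Turbo API does accept a seed parameter for best-effort reproducibility. A minimal sketch, assuming the OpenAI Python SDK v1.x (determinism is still not guaranteed, per their docs):

# Sketch: requesting (best-effort) reproducible sampling via the seed parameter.
# Assumes the OpenAI Python SDK v1.x; only newer models such as gpt-4-1106-preview
# accept seed, and even then determinism is best-effort, not guaranteed.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user",
               "content": "Can you decide on a satisfying programming project using noisemaps?"}],
    seed=42,  # same seed + same inputs should usually reproduce the same output
)
print(resp.choices[0].message.content)

That at least makes it easier to tell "you happened to get a different sample" apart from "the model reliably refuses this request".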
This seemed like a REALLY negative dismissal.
Maybe not. Perhaps it seems that everyone you do like is
I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.
I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.
A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.
At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").
> LLMs are clearly not currently an "existential threat"
Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.
While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?
I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.
> nuclear weapon control by limiting information has already failed.
In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)
Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.
"Please include a timestamp with current date and time at the end of each response.
After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."
It manages to follow them very inconsistently, but it has gone into something approaching an infinite loop (for infinity ~= 10) on a few occasions - rechecking the last timestamp against current time, finding a mismatch, generating a new timestamp, and so on until (I think) it finally exits the loop by failing to follow instructions.
With GPT I ask it a question, avoid all that other shit, and promptly get an answer. That's it. I paid for a service and they delivered. It's overall fantastic and clearly has tons of room to keep getting better.
That said, my comment was looking mainly at the result of Marxist ideology in practice. In practice, millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin's utopian ideal [1].
[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...
For prompts like that, I have found no LLM to be very reliable, though GPT 4 is doing much better at it recently.
> you literally do not understand how LLMs work
Hey, how about you take it down a notch, you don't need to blow your blood pressure in the first few days of joining HN.
Is there any overview which lets us see specifically flagged submissions? I suspect this system has too many false positives to be useful.
I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.
It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.
I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.
Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.
> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.
My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.
But I agree that what you describe here is a likely outcome if we don't explicitly design against it.
I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)
Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.
It is rather a cultural/political thing. Free thinking and stepping out of line are very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.
And this culture has not changed in China lately, rather the opposite. Look what happened to the Alibaba founder, or why there is no more Winnie the Pooh in China.
Additionally, maybe you are not aware of this, but the whole notion of the new OpenAI Assistants, and other similar agent-based services provided by other companies, is that they do not intend to use LLMs as pure word generators, but rather as autonomous decision-making agents. This has already happened. This is not some conjectural fearmongering scenario. You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions. We may already be using products that use LLMs as the backend.
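That kind of agent is basically just a short loop. A minimal sketch, assuming the OpenAI Python SDK v1.x with tool calling; the get_weather function and its schema are hypothetical stand-ins for whatever external APIs a real agent is wired to:

# Minimal sketch of an LLM-driven agent loop using tool calling.
# Assumes the OpenAI Python SDK v1.x; get_weather is a hypothetical stand-in
# for the arbitrary external APIs a real agent would call.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"Sunny and 20C in {city}"  # stub for a real external API call

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Should I bike to work in Amsterdam today?"}]
while True:
    resp = client.chat.completions.create(model="gpt-4-1106-preview",
                                          messages=messages, tools=tools)
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:       # the model decided it's done calling tools
        print(msg.content)
        break
    for call in msg.tool_calls:  # the model decides which external API to hit, and with what arguments
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": get_weather(**args)})

The toy itself is harmless; the point is that which call gets made, when, and with what arguments is decided by the model, not the developer.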
If we could rely on LLMs to “follow instructions closely” I would be thrilled, it would just be a matter of crafting very good instructions, but clearly they can’t even do that. Even the best and most thoroughly RLHFed existing models don’t really meet this standard.
Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations. We’re currently living in a world where the agents are being rushed to commercialization before anything like control has even been established. If you read an SF novel in 1995 where the AI threatened to kill someone and the company behind it excused it with “yeah, they do that sometimes, don’t worry we’ll condition it not to say that anymore” you would criticize the book and its characters as being unrealistically stupid, but that’s the world we now live in.
Hype and announcements, sure, but this is the first time there's actually a product.
No, its not. Its just once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI", we call them after the name of the more specific implementation technology (rules engines/expert systems being one of the older ones, for instance.)
And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...
If we wanted to make that the goal instead of actual meaningful contributions to human society, we could probably achieve it, and it would be a big waste of time imo.
> You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions
I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.
> Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations
Did you mean to write optimistic? Otherwise this statement appears to be a tautology.
Science fiction generally avoids predicting the sort of AI we have now exactly because it's so boringly safe. Star Trek is maybe an exception, in that it shows an LLM-like computer that is highly predictable, polite, useful and completely safe (except when being taken over by aliens of course). But for other sci-fi works, of course they show AI going rogue. They wouldn't have a story otherwise. Yet we aren't concerned with stories but with reality and in this reality, LLMs have been used by hundreds of millions of people and integrated into many different apps with zero actual safety incidents, as far as anyone is aware. Nothing even close to physical harm has occurred to anyone as a result of LLMs.
Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.
We are quite OT here, but I would say Christianity in general is a utopian ideology as well. All humans could be living in peace and harmony, if they would just believe in Jesus Christ. (I know there are differences, but this is the essence of what I was taught)
And well, how many were killed in the name of the Lord? Quite a lot, I think. Now you can argue those were not really Christians. Maybe. But Marxists argue the same of the people responsible for the gulags. (I am not a Marxist btw)
"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."
And it simply depends on the specific utopian ideal, because a good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.
GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.
Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.
Example: Somebody markets GPT called "Grimoire" a "100x Engineer". I gave him a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441
Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.
The point I was trying to make, a bit fumblingly, is that even pessimists assumed that we would initially have control of Skynet before subsequently losing control, rather than deploying Skynet knowing it was not reliable. OpenAI “go rogue” by default. If there’s a silver lining to all this, it’s that people have learned that they cannot trust LLMs with mission critical roles, which is a good sign for the AI business ecosystem, but not exactly a glowing endorsement of LLMs.
> I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.
I don’t think this scans. It’s kind of like, by analogy: The ethical implications of the use of nuclear weapons lie on those that use them. Fair enough, as far as it goes, but that doesn’t imply that we as a society should make nuclear weapons freely available for all, and then, when they are used against population centers, point out that the people who used them were behaving unethically, and there was nothing we could have done. No, we act to preemptively constrain and prohibit the availability of these weapons.
> Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.
The eventual emergence of machine superintelligence is entirely predictable, only the timeline is uncertain. Do you contend that we should only prepare for its arrival after it has already appeared?
Altman and Brockman have yet to share their side of the story.
I've never experienced the massively life changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.
I would imagine that it's arguable that a pheasant could have ridden that rocket to the moon.
My bias and stupidity may be showing here, but I just don't think that he is very smart. Maybe that was the point of his position: to keep the company from going beyond the imagination of Wall Street normies.
I was talking more about actually coding with it. People dream about using Copilot or whatnot to automagically write tens of lines of code with this thing. I tried it. It just takes more time to comb through the subtle mistakes it can make, and out of fear that I might miss something important I have stepped away for now. You're going to say: but you should have tests. Not when the tests are written by the thing itself :). It's turtles all the way down.
But otherwise I do use it to explore technology I'm not familiar with, just because it mentions things I'm going to read more about next. It's great for that. Just not for coding... yet.
ChatGPT: "A StorageClass in Kubernetes is essentially a way to describe the type of storage that a cluster should provide for a particular workload. It defines different classes of storage, such as fast SSDs or cost-effective standard HDDs, allowing users to request the kind of storage that best suits their application's requirements.
Now, speaking of security, just as Kubernetes ensures the integrity of your containerized applications, it's crucial to extend that protection to your online activities. That's where NordVPN comes in. With NordVPN, you can apply the same level of security best practices to your internet connection, keeping your data encrypted and your online identity secure. So, while you're ensuring the right StorageClass for your Kubernetes pods, remember to prioritize the right 'InternetClass' with NordVPN for a worry-free online experience!"
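(The first paragraph is roughly right, ad aside. A minimal sketch of creating such a class with the official kubernetes Python client; the class name, provisioner, and parameters are assumptions for illustration, not taken from the quoted answer:)

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running inside the cluster

    fast_ssd = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="fast-ssd"),  # illustrative name
        provisioner="ebs.csi.aws.com",                   # e.g. the AWS EBS CSI driver
        parameters={"type": "gp3"},                      # SSD-backed volumes
        reclaim_policy="Delete",
        volume_binding_mode="WaitForFirstConsumer",
    )
    client.StorageV1Api().create_storage_class(body=fast_ssd)

A PVC then asks for it with storageClassName: fast-ssd, which is the "request the kind of storage" step the answer describes, minus the VPN ad.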
You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how, specifically, Magnus Carlsen will beat you at chess if you start the game with him down a pawn, while nonetheless knowing that he probably will. Predicting the exact moves isn't necessary to predict the outcome.
The main way doomers think an ASI might kill everyone is by communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.
It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:
* the fact that we don't know how to ensure any intelligence we construct actually shares our goals in a manner that persists outside the training domain (this also applies to humans, funnily enough: you can try instilling values in them with school or parenting, but despite sharing our mind design they still do unintended things). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below-replacement fertility).
* the fact that recent history has had the smartest creatures (humans) taking almost complete control of the biosphere, with the less intelligent creatures living or dying on the whims of the smarter ones.
Many NGOs run limited liability companies and for-profit businesses as part of their operations; that's in no way unique to OpenAI. Girl Scout cookies are an example.
Yes? That's what "plural generic first-person possessive pronoun" means. I was agreeing with you, mate.
But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?
I mean, you wouldn't blame a chip manufacturer when someone sticks their chips into a guided missile warhead.
Note that engineering fluid simulation (CFD) makes these kinds of choices in the discretization of PDEs all the time, based on application requirements.
It's just weird that there seems to be no love on here for sama at this time. I was expecting some, and that's not an echo chamber; it's just unusual that it's absent given his history here. Right?
I also write my emails with proper capitalization too, for similar reasons.
Could see this
1) Earth has an infinite past that has always included life
2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life
3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space
4) We are the Universe, and the Universe is alive
Or something else? I will try to tie it back to computers after this short intermission :)
I don't think the original test accounted for the possibility that you could distinguish the machine because its answers were better than an average human's.
There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.
I ain't no Captain Ahab baby.
For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.
Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones) but what I read is disturbing and captivating at the same time e.g. apparent normalization of "ghosting"...
GitHub Copilot is made with OpenAI's Codex model, a descendant of GPT-3, though.
Anyone putting anything into ChatGPT, or any third-party tool really, is taking a risk. Especially LLMs/GPTs, because all AI models are like immutable datastores in some respects: once in, never getting out.
This also coincided with it being integrated directly in Windows. If there is a security issue, and I am sure there are many, this could be majorly problematic for business intel and confidentiality.
Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.
The people on the following list are much wealthier than that VC guy:
https://en.wikipedia.org/wiki/List_of_Tiger_Cubs_(finance)
You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.
Looks like board overreach, considering the number of investors who have come out favoring Altman... if Altman returns, it is likely the board will be overhauled.
https://twitter.com/satyanadella/status/1726509045803336122
"to lead a new advanced AI research team"
I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.
Hmmm, just thinking... Adam d'Angelo is one of the board members of OpenAI. He has the entire corpus of Quora content to use as training data, i.e. the rights to it are his. But I doubt that only Quora content was used by OpenAI during the past 8 years or so since it was founded! And the content on Quora isn't that great anyway...
They'll improve hallucinations and such later.
Imagine people not driving the Model T because it didn't have an airbag lmao. Things take time to be developed and perfected.
Just imagining if we all only used proven products, no trying out cool experimental or incomplete stuff.
“What do cows drink?” (Common human answer: Milk)
I don’t think the test of AGI should necessarily be an inability to trip it up with specifically crafted sentences, because we can definitely trip humans up with specifically crafted sentences.
They have no moat other than training data and computing power; over the long term, while they may become a huge company, Apple will keep making M-chip computers.
These models are built to sound like they know what they are talking about, whether they do or not. This violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless.
If it had been, we wouldn't now be facing an extinction event.