- The board is mostly independent, and those independent directors don't hold equity
- They talk about him not being candid - this is legalese for “lying”
The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on a decision) that is misaligned with the Charter. That's the only fireable offense that warrants this language.
My bet: Sam initiated some commercial agreement (like a sale) with an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.
Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from a PR disaster, probably something that would make Sam persona non grata in any business context.
He may not be the villain.
But who knows - it feels like an episode of Silicon Valley!
https://www.youtube.com/watch?v=29MPk85tMhc
>That guy definitely fucks that robot, right?
That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.
https://en.wikipedia.org/wiki/Ben_Goertzel
https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...
>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom I stumbled upon a couple of years ago but did not really pay close attention to afterwards. One search later:
VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:
I don't know, so much wild speculation all over the place, it's all just very interesting.
Who knows, though -- I'm sure we'll find out more in the next few weeks, but it's fun to guess.
They need to be so much more than a partner.
Being open is not in their nature.
Sadly it is usually the demise of innovation when they get their hooks in.
Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.
They had an open ethos, then went quasi-closed and for-profit, and now a behemoth has bet the family jewels on their products.
Harping on about the dangers of those products does not help the share price!
My money is on a power play at the top tables.
Embrace, extend, and extinguish.
Playbook!
EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at >>38312577
It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.
Seems like it would be a great way to eventually maintain control over your own little empire while also obfuscating its structure and dodging some of the scrutiny that SV executives have attracted during the past decade. Originally meant as a magnanimous PR gesture, but will probably end up being taught as a particularly messy example of corporate governance in business schools.
It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads — a net [long-term] loss for society.
[1] >>35960125
Folks like Schmidt, Levchin, Chesky, and Conrad have Twitter posts up that weirdly read like obituaries.
You have to understand that OpenAI was never going to be anything more than the profit-limited generator of the change. It's the lamb. Owning a stake in OpenAI isn't important. Creating the change is.
Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI (and their ilk) is.
OpenAI can't become a profit center while it disrupts all intellectual work and digitizes humanity's future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough.
It's better to claim your stake in a forthright way, than to have some kind of lucrative side deal, off the books.
For a non-profit, there was too much secrecy about the company structure (the shift to being closed rather than Open), the source of training data, and the financial arrangements with Microsoft. And a few years ago a whole bunch of employees left to start a different company/non-profit, etc.
It feels like a ton of stuff was simmering below the surface.
(I should add that I have no idea why someone who was wealthy before OpenAI would want to do such a thing, but it's the only reason I can imagine for this abrupt firing. There are staggering amounts of money at play, so there's room for portions of it to be un-noticed.)
If it was a personal scandal, the messaging around his dismissal would have been very, very different. The messaging they gave makes it clear that whatever dirty deed he did, he did it to OpenAI itself.
[1] https://www.irs.gov/charities-non-profits/publications-for-e...
Either a position in Microsoft or a new start-up.
Or both.
What does it mean for OpenAI though? That’s a limb sawn off for sure.
Since this news managed to crush HN's servers it's definitely a topic of significant interest.
8 out of 10 posts are about LLMs.
EDIT: Microsoft is such a huge company, so maybe this is not a big deal?
someone hire some PIs so we can get a clear and full picture, please & thank you
Even if that didn't work, it would just mean paying taxes on the revenue from the sale. There's no retroactive penalty for switching from a non-profit to a for-profit (or, more likely, being merged into a for-profit entity).
I am not an accountant or lawyer and this isn’t legal advice.
If your goal is not to spook investors and the public or raise doubts about your company, the narrative is:
"X has decided it is time to step away from the Company, the Board is appointing Y to the position as their successor. X will remain CEO for N period to ensure a smooth transition. X remains committed to the company's mission and will stay on in an advisory role/board seat after the transition. We want to thank X for their contributions to the Company and wish them well in the future."
Even if the goal is to be rid of the person you still have them stay on in a mostly made-up advisory role for a year or so, and then they can quietly quit that.
I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.
1. Sam Altman started this company
2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"
3. Their mission statement:
> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.
Thank you. I don't see this expressed enough.
A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.
Bummer in any situation... the progress in this domain is truly exciting, and OpenAI was executing so well. This will slow things down considerably.
That _should_, in a system of corporate governance that isn’t an absolute joke, expose him to significant liability.
Or am I thinking of another NorCal cretin that will never suffer a real consequence as long as he lives?
Hell, some prominent tech people are often loudly wrong, and loudly double down on their wrong-ness, and still end up losing very little of their political capital in the long run.
Or maybe he's right. We don't know, we're all just reading tea leaves.
I'm not sure. I agree with your point re: wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
The discussions here would make you think otherwise. Clearly that is what this is about.
Altman conceived the company and raised $115 million for it.
The cyberpunk agenda is on.
He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close to factually correct as can be.
why would I want my identity managed by a shitcoin run by a private company?
It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.
It can replace a lot of jobs... and then we can turn it off, for a net benefit.
This thread (that SA was fired) wasn't visible an hour or two ago on pages 1, 2, or 3, when I looked, confused that it wasn't here. (The only related topic was his tweet in response, at the bottom of page 1, with <100 points.) And now here it is in pole position with almost 3500 points - the automated flagging and vouching and necessary moderator intervention must go crazy on posts like this.
Can't jump to conspiracy cover-up on the basis of content that's not only user-generated but also user 'visibility-controlled' in terms of voting, flagging, vouching...
There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.
The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.
If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.
And indeed, if you listen to Sam talk (e.g. on Lex's podcast), this is the reasoning he uses.
Sure, there are plenty of reasons this could be a smokescreen, but I wanted to push back on the idea that the position itself is somehow not compatible with idealism.
Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.
If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.
But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.
The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.
It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.
(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:
https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...
The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)
I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.
This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.
I'm not fully convinced, though...
> if you publish a model with scary capabilities you can’t undo that action.
This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.
Now, I know AGI is different than conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)
All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.
How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?
Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder-suicide pact.
https://nymag.com/intelligencer/article/sam-altman-artificia...
Q: What's the difference between a car salesman and an LLM?
A: The car salesman knows they're lying to you.
I'm pretty sure that CEO salaries across the board mean that CEOs are definitely — in their own way — "stealing" from the employees. Certainly one of those groups is over-compensated, and the other, in general, is not.
I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.
Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems pretty egoist like every other Silicon Valley venture capitalist and executive.
I think many people would disagree with you that LLMs can truly do either.
Testing with GPT-4 showed that they were clearly capable of knowingly lying.
1/ Sam goes on to create NeXTAI and starts wearing mostly turtleneck sweaters and jeans
2/ OpenAI buys NeXTAI
3/ OpenAI board appoints Sam Altman as Interim CEO
The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.
AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.
This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.
It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.
GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.
What I believe will happen is that eventually we'll be paying, and getting paid, for pressing a do-everything button, and machines will have their own economy that isn't in USD.
I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, realizing they needed huge GPU clusters, and being purely non-profit would not enable that. Again I see why it rubs folks the wrong way, more so on this point.
That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.
Please quote me where I say it wasn't useful, and respond directly.
Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.
Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.
Except I never did or said any of those things. Are you "hallucinating"?
I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.
The next time you are a beginner again - not necessarily even in technical work - give it a try.
This statement doesn't square with a planned transition at all.
Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.
Prior to the Reddit comments, I thought this might be the case, but perhaps I was somehow influenced. Actually, I thought it would be something inappropriate in the workplace.
His sister says he molested her when he was a teenager.
The way these things break, I’m not surprised it went down that way. Here’s what I thought reading the release: “They had to fire him before deciding on what to actually say eg. to formally accuse him”
It seemed like signaling that this is someone firing him kind of desperately. When you discover a diddler, there's some weird shit when people panic and suddenly catapult them out of their lives… they just start leaping out of moving cars and shit to get away.
Keep in mind there could be ongoing investigations, and definitely strategies being formed. They can get to a point in an investigation where they're virtually 100% sure he molested his sister, but can't really prove it yet. What they do have is irrefutable evidence of lying about something incredibly serious. That gets him out of the building and his powers stripped today.
But then it's fine to sell the weights to Microsoft? That's some twisted logic.
Apple was a declining company when Jobs came back the second time. He also managed to get the ENTIRE board fired, IIRC. He created a new board of his own choosing.
So in theory he could have raided the company for its assets, but that's obviously not what happened.
By taking $1 salary, he's saying that he intends to build the company's public value in the long term, not just take its remaining cash in the short term. That's not what happens at many declining companies. The new leaders don't always intend to turn the company around.
So in those cases I'd say the CEO is stealing from shareholders, and employees are often shareholders.
On the other hand, I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company. There has to be more to the story.
But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.
The nonprofit shell exists because the founders did not want to answer to shareholders. If you answer to shareholders, you may have a legal fiduciary responsibility to sell out to the highest bidder. They wanted to avoid this.
Anyway, in a strict nonprofit, the proceeds of a for-profit conversion involve a liquidation, where usually the proceeds must go to some other nonprofit or a trust or endowment of some sort.
An example would be a Catholic hospital selling out. The proceeds go to the treasury of the local nonprofit Catholic diocese. The buyers and the hospital executives do not get any money. Optionally, the new for-profit hospital could hold some of the proceeds in a charitable trust or endowment governed by an independent board.
So it's not as simple as just paying tax on a sale, because the cash has to remain in some kind of nonprofit form.
I am not an accountant either and obviously there are experts who probably can poke holes in this.
https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...
Anyway, the point is, obfuscation doesn't work to keep scary technology away.
This was not at all the take, and rightly so, when the news broke about the non-profit structure, or the congressional hearing, or his Worldcoin, and many other such instances. The sudden "he is the messiah who was wronged" narrative being pushed is very confusing.
> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother
...Ok, closing that bullshit, let's try the other link.
> As Kudzanayi strolled through the mall with friends
Jesus fucking Christ I HATE journalists. Like really, really hate them.
https://techcrunch.com/2023/02/21/the-non-profits-accelerati...
And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations, and behave more like the fancy text predictors they are rather than something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.
Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.
I generally feel hyperbolic language about such things is damaging, as it makes it easy to roll your eyes at something that's clearly false, and that eye-rolling can build inertia by the time things develop to the point where they actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage to keeping them closed appears to be financial benefit in a competitive market. So it looks like a duck and quacks like a duck, but "don't you understand, I'm protecting you from this evil fire-breathing dragon for your own good!"
It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake post-fact, they still did it. Meaning their "smarts" on how to use it came too late.
And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.
Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.
If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?
It seems basically impossible for OpenAI to have proved the validity of Annie Altman's claims about childhood sexual abuse. But they might have to take them seriously, especially once they were presented coherently on LessWrong.
If Sam had lied or misled the board about some aspect of his relationship with his sister, that would be a sacking offence. Eg he says "Annie's claims are completely untrue - I never abused her [maybe true or not, almost certainly unprovable], I never got her shadow banned from Instagram [by hypothesis true] and I never told her I could get her banned [untrue]." The board then engage a law firm or PI to check out the claims and they come up with a text message clearly establishing that he threatened to pull strings and get her banned. He lied to the board regarding an investigation into his good character so he's gone. And the board have the external investigator's stamp on the fact that he lied so they can cover their own ass.
Why would he tell a lie like this? Because whatever the truth of the allegations, he's arrogant and didn't take them as seriously as he should have. He mistakenly thought he could be dismissive and it wouldn't come back to bite him.
This seems consistent with the way things played out. (Note again: I'm just trying to come up with something consistent. I have no idea if this is at all accurate or the whole affair is about something completely different.) They don't have to worry about keeping him on as an advisor to cover up scandal. They can clearly state that he lied in an important matter. But they don't say what it's about - because they still have no idea whether the original allegations are true or not. They are not going to put themselves in a situation of saying "and he probably molested his sister". They wouldn't even say "it is related to abuse allegations made by a family member", which implies there might be evidence to the original allegations, and is probably defamatory. And he comes out saying that something unfair has happened, without giving any context, because he knows that even mentioning the allegations is going to lead to "but didn't he molest his sister" type comments, for the rest of time.
It's also consistent with the timing. They aren't just going to hear the Annie allegations and sack him. It takes time to look into these things. But within 6 weeks of it becoming an issue, they might be able to identify that he's either lied previously to the board about the gravity of this issue, lied during the current investigation, or something he's said publicly is clearly dishonest.
If he has truly read and digested Plato (and not just skimmed a summary video), he would not be in this ditch to begin with. That's the irony I was referring to.
It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.
And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.
There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that; it's a very human failing.
The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.
Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT-5-type in-house AI to believe what HE believed, namely that it had to devise business strategies for him to pursue to further its own development, or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a game plan to defeat them and Chinese AI, which he'd see as good and necessary, indeed existentially necessary.
In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.
I disagree with this characterization, but even if it were true I believe it's still revolutionary.
A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.
Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.
Of course if you're a gross commie I'm sure you'd agree since AI, like any other mean of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.
- How he behaved during the investigation. Something could come to light on this matter.
- Oftentimes what you hear is only the most rock-solid stuff; we don't know what kind of rumors are circulating
- It just happens this way. Do you remember Milo? I listened to him on Joe Rogan say the exact same shit that was "discovered" some time later. This wouldn't be a new thing.
I will say I've seen stories circulating about fighting between the board. The specific way this was done just screams panic firing to get him out of the building. This is when people are made to disappear, I saw it during covid.
You would think almost any dispute would be handled with a long drawn out press blitz, transitioning, etc.
They'd probably still fire him, but would have done so in a very different way.
Hmm ya think?
This is more and more, in the light of the next day, looking like a disagreement about company direction turned sloppy boardroom coup. Corporate shenanigans.
I can see why people looking for some explanation quickly reached for it, but the sister angle never made any sense. At least where that story stands right now.
So, Elon decided to take a capitalist route and make all of his tech dual-use (I mean space, not military):
- Starlink, aiming for $30 bln/year in revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $ and he said he needs hundreds of them),
- The Boring Company (underground living, due to Mars radiation),
- Tesla bots,
- Hyperloop (failed here on Earth to sustain a vacuum, but will be fine on Mars with 100x lower atmospheric pressure), etc.
Alternative approaches are also not via taxes and government money, but more like Bezos investing $1 bln/year over the last decade into Blue Origin, or the plays by Larry Page or Yuri Milner for Alpha Centauri, etc.
This is a non-profit, not a company. The board values the mission over the stock price of their for-profit subsidiary.
Having a CEO who does not own equity helps make sure the non-profit mission remains the CEO's top priority. In this case, though, perhaps that was not enough.
Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods that used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.
I have no idea what it'll be like when most people are free to do creative work but the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.
It's also extremely intertwined with, and competes with, for-profit companies.
Financially it's wholly dependent on Microsoft, one of the biggest for-profit companies in the world.
Many of the employees are recruited from for-profit companies (e.g. Google), though certainly many come from academic institutions too.
So the whole thing is very messy, kind of "born in conflict" (similar to Twitter's history -- a history of conflicts between CEOs).
It sounds like this is a continuation of the conflict that led to Anthropic a few years ago.
Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.
Anyway, the short version regarding that project is that they take biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.
*They call it a hash but I think it's technically not.
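To make that asterisk concrete: a literal hash-on-chain design would look roughly like the toy sketch below (purely illustrative; the function names and the SHA-256 choice are my assumptions, not Worldcoin's documented scheme). The likely reason "hash" is the wrong word is that biometric scans don't reproduce bit-for-bit, so whatever they actually store has to tolerate that fuzziness, which a plain cryptographic hash cannot.

    import hashlib

    # Toy sketch of a naive "hash the biometric, store it on a ledger" design.
    # Not Worldcoin's actual scheme -- names and choices here are illustrative.
    registered_commitments = set()  # stand-in for an on-chain registry

    def commit(iris_code: bytes) -> str:
        """Derive a one-way commitment from biometric-derived bytes."""
        return hashlib.sha256(iris_code).hexdigest()

    def register(iris_code: bytes) -> bool:
        """Accept a signup only if this exact commitment is unseen (sybil check)."""
        c = commit(iris_code)
        if c in registered_commitments:
            return False  # same biometric trying to register twice
        registered_commitments.add(c)
        return True

Note the flaw the sketch exposes: flip a single bit of the input (as any re-scan would) and the commitment changes completely, which is exactly why a fuzzy-matching scheme rather than a plain hash is needed in practice.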
This seemed like a REALLY negative dismissal.
I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.
I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.
A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.
At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").
> LLMs are clearly not currently an "existential threat"
Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.
While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?
I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.
> nuclear weapon control by limiting information has already failed.
In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)
Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.
I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.
It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.
I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.
Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.
> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.
My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.
But I agree that what you describe here is a likely outcome if we don't explicitly design against it.
I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)
Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.