Most readers are aware that OpenAI was a research and advocacy organization that became a corporation by creating one (in the sense that tax-exempt public-benefit nonprofits and charitable foundations normally cannot grant anyone equity ownership or exclusive rights to their output); the parent comment only implies that some of the board members come from NGO-type backgrounds.
What do you mean by this? It looks like you're just throwing out a diss at the doomer position (most doomers don't think near-future LLMs are concerning).
From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization working at odds with the non-profit's formal aims. It's always been weird that the OpenAI LLC seemed so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the non-profit's mission took precedence over profit.
The CEO just works for the organization and the board is their boss.
You're describing a founder situation, where the CEO is also a founder with equity, so the board effectively answers to them.
This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.
We have ample empirical grounds to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around the use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, and so on. Those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.
Nuclear war had a very simple mechanistic concept behind it.
Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).
Nuclear war experts were actually experts in a system whose outcomes you could compute to a very high degree of certainty.
There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damage and injury.
Yes, it's plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of 'fictional what-ifs'.
Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a mega-vaccine and print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc., etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.
AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.
You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how specifically Magnus Carlsen will beat you at chess, even if you start the game with him down a pawn, while nonetheless knowing that he probably will. Predicting the specific moves is much harder than predicting the outcome.
The main way doomers think ASI might kill everyone is via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.
It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:
* the fact we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this actually also applies to humans funnily enough, you can try instilling values into them with school or parenting but despite them sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below replacement fertility).
* the fact that recent history has had the smartest creatures (the humans) taking almost complete control of the biosphere, with the less intelligent creatures living or dying on the whims of the smarter ones.
Many NGOs run limited liability companies and for-profit businesses as part of their operations; that's in no way unique to OpenAI. Girl Scout cookies are an example.