In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.
Were you watching a different show than the rest of us?
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.
A charity that acts contrary to its charitable mission (due to the influence of a conflicted board member who doesn't recuse), in the interests of that board member or whoever they represent, does something similar with regard to the firm's liability to various stakeholders with a legally enforceable interest in the charity and its mission. But it is also a public civil violation that can lead to IRS sanctions against the firm, up to and including monetary penalties and loss of tax-exempt status, on top of whatever private tort liability exists.
I think people forget sometimes that comments come with a context. If we are having a conversation about Deepwater Horizon, someone will chime in about how safe deep sea oil exploration is and how many failsafes blah blah blah.
“Do you know where you are right now?”
I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.
I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.
Is this still true when the board gets overhauled after trying to uphold that moral compass?
If OpenAI didn't have adequate safeguards, either through negligence or because it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501(c)(3)s inherently don't have safeguards” thing.
We are talking about a failure of the system, in the context of a concrete example. Talking about how the system is supposed to work is only appropriate if you are drawing up specific arguments about how this situation is an anomaly, and few commenters do that.
Instead it often sounds like “it’s very unusual for the front to fall off”.
Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:
Most people just tend to go about it more intelligently than Trump, but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.
Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...