The curious part was how did investors including Microsoft missed this in their DD. Or they found out but they wanted to invest in OpenAI so bad that they let go of this fact.
2. soooo is Sam back? how exactly might satya influence a nonprofit board he has zero votes in? why would the board flipflop so quickly in the span of 24 hrs when presumably they took the original action knowing the shitstorm that would ensue? none of these things make sense given that everyone in the room is presumably smart and knows how the world works
On the governance matter, the thesis is a bit more shakey.
> Helen Toner and Tasha McCauley... participating in an... AI governance organization... calls into question the independence of their votes.
> I am not accusing anyone (to be clear, even the Board Directors that I consider conflicted) of having acted subject to conflicts of interest. [AKA "Just Asking Questions" technique]
> If... this act of governance was unwise, it calls into serious question the ability of these people and their organizations... to conduct governance
So they're conflicted because they're also in governance, and they shouldn't govern because they might have been conflicted.It seems like the author's real problem isn't any specific conduct by these two board members, but more of a "you got chocolate in my peanut butter" issue.
As a prominent researcher in AI safety (I discovered prompt injection) I should explain that Helen Toner is a big name in the AI safety community - she’s one of the top 20 most respected people in our community, like Rohin Shah.
The “who on earth” question is a good question about Tasha. But grouping Helen in with Tasha is just sexist. By analogy, Tasha is like Kimbal Musk, whereas Helen is like Tom Mueller.
Tasha seems unqualified but Helen is extremely qualified. Grouping them together is sexist and wrong.
The board can try to anticipate the next actions after they fire them, but predicting the final outcome of what essentially are human actions seems quite difficult.
It’s one shit show over there
This seems to be an instance of “if you hear the dog whistle you’re the dog”
Then they found out it did
People on Twitter are making degrading memes of her and posting weird, creepy, harassing comments like this: “Why does Helen Toner have shelf tits” https://x.com/smolfeelshaver/status/1726073304136450511?s=46
Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.
> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.
Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:
“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure
The board went from 10 people to 6 in the span of a year. That explains absolutely everything that happened at OpenAI.
Why couldn't we get that little tidbit from a media publication?
As Godel said, you can't completely understand the system while you are part of the system. So this is my take from someone detached from the VC culture in general.
Money is King. Who brings money is the true king. From this perspective, there is no significant difference between drug cartels (for ex. weed) and entrepreneurs. They both care about the people around them and can bring money. The only difference is how the story was framed. Now that cannabis is legal in some parts, what difference does it make? You can be a weed farmer/entrepreneur.
You should reconsider being on this site then if you think so little of entrepreneurs.
The partnership is useful for making it look exciting and relevant which helps with attracting talent i.e. it's largely branding/marketing.
So it's very much a once sided relationship in terms of dependency.
She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).
Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.
If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/
Basically, he was saying he had very little to lose by going public with his story.
Most of them have traces that goes back to social connection, high status, good education. Of course there will be some exceptions. But if we look at the general status of the population, most come from the "well-off" background.
> You should reconsider being on this site then if you think so little of entrepreneurs.
I put that example because psychologist have found that social media is even more addicting than drug like weed. And pure weed doesn't make your body addicted to it, it is just the psychological effect. Also, we have seen what social media did to the fabric of society.
It's quite bad now but for someone like Adam with dataset like quora, he likely has bigger plans.
If you want to be the titan of an industry and do things that put you at the center of media attention, you have to expect comments of this kind and not be surprised when they happen. Whether you are a man, a woman or anybody else.
If you don't expect "not very nice" or ambivalent reactions from people, you are an amateur and you shouldn't be in the board of such a prominent company.
https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en
https://futureshub.anu.edu.au/experts/helen-toner
And she setup the Center for Security and Emerging Technology at Georgetown so also has some degree of executive experience.
It does look like governance very much played second fiddle, and the unsurprising outcome of that was that governance hasn't worked very well. I don't know who can rightfully take the blame for that, though, other than the Chair and maybe CEO. If the board wasn't fit, it was their job to fix it.
It’s convulated
> They all resigned within a few months of one another despite OpenAI looking like the rocketship of the century? Something feels a little odd about that.
Exactly what I was thinking when reading through the timeline.
Of course you have to be competent on the subject matter, work hard, iterate and have luck.
annnnd you just shredded your credibility with some casual bigotry
https://80000hours.org/podcast/episodes/helen-toner-on-secur...
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.https://github.com/OWASP/www-project-top-10-for-large-langua...
In the citations:
14. Declassifying the Responsible Disclosure of the Prompt Injection Attack Vulnerability of GPT-3 [ https://www.preamble.com/prompt-injection-a-critical-vulnera... ]: Preamble; earliest disclosure of Prompt Injection
You might want to try a little more humility when making statements like that. You might think it bolsters your credibility but for many it makes you look out of touch and egotistical.
That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.
AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.
For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.
For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:
That's... not very much? There are 4,400 colonels in the USAF: https://diversity.defense.gov/LinkClick.aspx?fileticket=gxMV...
I read it as saying that they were conflicted because they're both from the "highly ideological" Open Philanthropy; on a small board, having two people ideologically aligned seems precarious.
Seems like she's run a robotics company or something like that. Definitely someone in the tech business.
Not everyone on a board needs to have done the exact thing that the company does.
Edit: This one for example: https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
https://www.researchgate.net/publication/331744706_Precautio...
https://www.researchgate.net/publication/371166526_Regulatin...
https://link.springer.com/article/10.1007/s11569-020-00373-5
https://www.aei.org/technology-and-innovation/treading-caref...
https://itif.org/publications/2019/02/04/ten-ways-precaution...
It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!
He stated he did not have any equity in OpenAI. While he might not shares directly in OpenAI, but he might have indirectly, or has some other economic interest agreement.
OpenAI consists of at least the following entities:
OpenAI OpCo, LLC
OpenAI Global, LLC
OpenAI Nonprofit.
OpenAI GP LLC
OpenAI, Inc.
OpenAI, L.L.C
OpenAI, LP.
OpenAI Startup Fund Management, LLC
OpenAI Startup Fund I, L.P.
OpenAI Startup Fund GP I, L.L.C.
OpenAI Ireland Ltd
A Delaware Limited Partnership doesn't have shares, so he's technically correct by that statement.--ianal
[1] https://www.therobotreport.com/navii-autonomous-retail-robot...
I'm not sure where I am on the creepy scale but I'm happy to hate on him because I really don't think he should be anywhere near that board. And yes, Helen Toner does have a claim. Not sure about the level of experience in the typical role of a board member but plenty of cred when it comes to the subject matter.
So these three plus Mira Murati make 4 for 4 hot women governing OpenAI. I'm not a Data Scientist but that's a pattern. Not one ugly woman who has a concept of AI governance? Not one single George Elliot-looking genius?
https://smallbusiness.chron.com/difference-between-public-pr...
https://www.irs.gov/charities-non-profits/eo-operational-req...
"Under the tax law, a section 501(c)(3) organization is presumed to be a private foundation unless it requests, and qualifies for, a ruling or determination as a public charity. Organizations that qualify for public charity status include churches, schools, hospitals, medical research organizations, publicly-supported organizations (i.e., organizations that receive a specified portion of their total support from public sources), and certain supporting organizations."
Edit: Looking at the IRS determination letter from November 3, 2016, OpenAI was organized as a public charity under 170(b)(1)(A)(vi) "Organizations Receiving Substantial Support from a Governmental Unit or from the General Public"
Their last 990 form, filed November 15, 2021, for the calendar year 2020, shows total support over the past 5 years (2016-2020) of $133M, only $41M of which was individual donations of over 2% ($2.6M) so they easily met the 5-year public support test.
> weird creepy white dudes…
This is racism. And sexist. How do you know it’s white people or dudes?
You have two people left, we have no idea what he / she is, their work are not public outside of specific domain, and no Public / PR exposure to even anyone who follows tech closely.
Those two people we group them together. And they happened to be woman. ( At least we assumed their gender ). And we are now being called sexist? Seriously?
It’s not a business. It’s not competing for business. It’s a charity.
Like if you’re on the board of a charity fighting cancer is it a conflict to be on a board of another charity fighting AIDS? Or also part of a for profit company fighting cancer?
Of course not. You’d have a conflict of interest if you had a relationship that was opposed to the charity’s mission like a tobacco company, or if you were personally profiting off your role with the charity.
The post here doesn’t articulate why these are conflicts of interest.
This thread and all the other 15 threads about all this start with the tacit assumption that OpenAI is a high growth tech company, with investors and customers and founders and so on.
It’s not. It’s a charity.
It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".
Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.
IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".
Women are socially allowed to use artificial means to greatly improve their appearance. I'm referring to makeup and expensive hair treatments. Women from upper-middle or upper classes have even more of an advantage in using these. So if you're a thin woman, unless you were unlucky enough to be born disfigured, you're a single afternoon away of looking like a movie star.
If Sam Altman was socially allowed to wear makeup and a wig, you'd call him a heartthrob.
Total support includes over $70M in other income in 2018 and 2019 which is missing the required explanation in the 990's. In other words, out of the $92M in public support, $70M is unexplained other income.
Also, Open Philanthropy pledged $30 million in 2017 ($10 per year for 2017-2019) which is considered public support since they are a public charity. However that is more than the $22 in true public support that was reported. Perhaps they didn't complete the pledge.
In any case, finding out the key fact about a situation shouldn't require reading multiple articles by different publications. It should have been highly emphasized in any publication's reporting.
Ever seen a Catholic hospital with a Satan worshiper on the board?
If the mission of OpenAI and its reason for being created is to make sure AGI is kept in the public trust and not walled off by commercial forces then you’re not going to want people believing the opposite of that.
It does seem like the whole organization was "born in conflict", starting with Elon and Sam.
Then Reid resigned because of a COI, someone whose wife helped start the "offshoot" Anthropic, and then there was Elon's employee and mother of his children, etc.
I was going to say that some reporters weren't doing their jobs this whole time, but actually there are good links in the article, like this
https://www.bloomberg.com/news/articles/2023-07-13/republica...
It was reported that 3 directors left in 2023.
But yeah I agree it's weird that none of the breathless and short articles even linked to that one!!! (as far as I remember) That's a huge factor in what happened.
Are you claiming that physical appearance has nothing to do with politics, or that we just shouldn't comment on it?
I think it's pretty obvious that the OpenAI men aren't too attractive by most standards, as opposed to say US presidents, who are mostly sexually attractive.
The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make the decisions - aren't new. States have all the capability to do them right now, without AI. But if a state did so, they would face annihilation if anyone found out what they were doing. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.
But, with AI, a small terrorist group could do it. And it'd be hard to know which ones were planning to, because they'd only need to buy the same hardware as any other small tech company.
(I hope I've summarized the article well enough.)
He has zero equity.
> the exclusion of the organization’s goals.
What did Altman do to clearly exclude OpenAPI's goals?
1. I read to the bottom of this, and somebody's like "BTW, Altman has no equity." Really, the CEO and a board member has no equity?
2. The CIA joined the board (former, whatever, you're CIA)? And suddenly, everybody has somewhere else to be.
3. "We got everything we need, run away! We all happen to be developing closely aligned products, thanks Microsoft."
+ B > It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come.
=C > Many people are saying we need more governance: maybe it turns out we need less.
How does this conclusion come from these premises?
No, it's not. They're grouped together because everyone knows who Sama, Greg Brockman, Ilya, and Adam D'Angelo (Quora founder / FB CTO) are, and maybe 5% knew who Helen and Tasha are. You linked to a rando twitter user making fun of her, but I've seen far more putting down Ilya for his hairline.
Rather than trying to assume it's some corporate conspiracy, I'm gonna go with Occam's Razor.
Could you elaborate? This explains exactly nothing to me.
Which actually isn't. Some places use other oaths, more or less derived from the Hippocratic Oath, other places don't do it at all. But hardly anyone uses the OG oath itself.
Like what happened to China after they released Tiktok, or what happened to Russia after they used their troll farms to affect public sentiment surrounding US elections?
"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.
No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.
Also, if you're quoting, please the preceding sentence which is crucial:
He took no *direct* financial stake in the business, citing his concern regarding the use of profit as an incentive in AI development
If so then 4chan had prior art, discovering prompt injections when they made Microsoft's Tay chatbot become a racist on twitter.