zlacker

Emmett Shear becomes interim OpenAI CEO as Altman talks break down
1. Techni+02 2023-11-20 05:31:05
>>andsoi+(OP)
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

It will be super interesting when all the details come out regarding the board’s decision-making. I’m especially curious how the (former) CEO of Twitch got nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with reporting that he’s trying to develop his own AI chips with Middle East funding, it seems Altman has big ambitions to become fully self-reliant and own the stack completely.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

2. altdat+c3 2023-11-20 05:36:55
>>Techni+02
OpenAI has hundreds more employees, all of whom are incredibly smart. While they will definitely lose the leadership and talent of those two, it’s not as if a nuclear bomb dropped on their HQ and wiped out all their engineers!

So questioning whether they will survive seems very silly and incredibly premature to me.

3. karmas+w5 2023-11-20 05:50:33
>>altdat+c3
Survive in the sense of continuing to exist? They will.

But this is a disaster that can't be sugarcoated. Working at an AI company with a doomer at its head is ridiculous. It would be like working at a tobacco company that advocates for lung cancer awareness.

I don't think the new CEO can do anything to win back trust in record time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?

4. bottle+l6 2023-11-20 05:54:32
>>karmas+w5
Ah yes, you're either a doomer or e/acc. Pick an extreme. Everything must be polarized.
5. astran+Vc 2023-11-20 06:36:24
>>bottle+l6
There's a character in HPMOR named after the new CEO.

(That's the religious text of the anti-AI cult that founded OpenAI. It's in the form of a very long Harry Potter fanfic.)

6. Feepin+de 2023-11-20 06:44:21
>>astran+Vc
Sorry, which character are you talking about? (Also lol "religious text", how dare people have didactic opinions.)
7. astran+3f 2023-11-20 06:50:07
>>Feepin+de
The one with the same name as the new CEO. Pretty straightforward.

> Also lol "religious text", how dare people have didactic opinions.

That's not what a religious text is; that'd just be a blog post. The religious part is where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

8. Feepin+gg 2023-11-20 06:58:36
>>astran+3f
Oh hey there he is, cool. I had a typo in my search, I think.

> That's not what a religious text is, that'd just be a blog post.

Yes, almost as if "Lesswrong is a community blog dedicated to refining the art of human rationality."

> It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

I don't think anybody either asked somebody to, or actually did, donate all their money. As to "joining a cult group house polycule", to my knowledge that's just SF. There's certainly nothing in the Sequences about how you have to join a cult group house polycule. To be honest, I consider all the people who joined cult group house polycules, whose existence I don't deny, to have a preexisting cult group house polycule situational condition. (Living in San Francisco, that is.)

9. avalys+Gj 2023-11-20 07:20:18
>>Feepin+gg
“The Sequences”? Yes, this doesn’t sound like a quasi-religious cult at all…
10. astran+hm 2023-11-20 07:38:31
>>avalys+Gj
The message is that if you do math in your head in a specific way involving Bayes' theorem, it will make you always right about everything. So it's not even quasi-religious: the good deity is probability theory and the bad ones are evil computer gods.

This then causes young men to decide they should be in open relationships because it's "more logical", and then to decide they need to spend their lives fighting evil computer gods, because the Bayes' theorem approach is weak to an attack called "Pascal's mugging": you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.
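
Spelled out with made-up numbers, the mugging is just naive expected-value arithmetic (a toy sketch, every number here invented for illustration):

    # Pascal's mugging, the toy version: any nonzero probability times a
    # sufficiently astronomical harm swamps whatever finite cost the
    # mugger names. All numbers are made up.
    p_mugger_is_right = 1e-12    # vanishingly unlikely, but not zero
    claimed_harm = 1e30          # arbitrarily huge number the mugger picks
    cost_of_complying = 10

    expected_loss_if_ignored = p_mugger_is_right * claimed_harm  # 1e18
    print(expected_loss_if_ignored > cost_of_complying)          # True: pay up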

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as they use it on charity.

https://metarationality.com/bayesianism-updating

Bit old but still relevant.

11. Feepin+4n 2023-11-20 07:43:42
>>astran+hm
> This then causes young men to decide they should be in open relationships because it's "more logical"

Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate that you manually calculate these updates, nor would that be a good idea: humans are very bad at it.
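
For concreteness, here's a toy version of the kind of update being described (all numbers invented, not from any actual text):

    # Toy Bayes update: one piece of evidence shifting a prior belief.
    prior = 0.01              # P(H): initial credence in some hypothesis
    p_e_given_h = 0.9         # P(E|H): chance of the evidence if H is true
    p_e_given_not_h = 0.05    # P(E|~H): chance of the evidence anyway

    # P(E) by the law of total probability, then Bayes' theorem.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(round(posterior, 3))  # 0.154: a big shift, yet H is still unlikely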

> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

Given that this didn't happen with anyone else, that most other EAs will tell you it's morally correct to uphold the law, and that in any case nearly all EAs act like it is, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.

12. astran+Io 2023-11-20 07:53:33
>>Feepin+4n
> The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.

No, there isn't a correct way to do anything in the real world, only in logic problems.

This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)

The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".

https://metarationality.com/probabilism

> Given that this didn't happen with anyone else

They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

And of course, they think evil computer gods are going to kill them.

13. Feepin+cq 2023-11-20 08:02:01
>>astran+Io
> No, there isn't a correct way to do anything in the real world, only in logic problems.

Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less of a "logic problem" than Eurisko or Eliza were.

Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.

> buying castles

They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.

> scientific racism is real (though still buying mosquito nets for the people they're racist about)

Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.

> getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

As far as I can tell, that's one guy.

> And of course, they think evil computer gods are going to kill them.

I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?
