zlacker

[parent] [thread] 87 comments
1. convex+(OP)[view] [source] 2023-11-18 02:56:12
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

replies(7): >>jdminh+v1 >>peyton+J1 >>cm2012+42 >>resour+p4 >>nradov+8c >>smharr+yc >>anon29+Ip
2. jdminh+v1[view] [source] 2023-11-18 03:07:55
>>convex+(OP)
So basically a confirmation, but with a slight disagreement on the vocabulary used to describe it.
replies(1): >>davora+Sl
3. peyton+J1[view] [source] 2023-11-18 03:09:22
>>convex+(OP)
Very unprofessional way to approach this disagreement.
replies(5): >>anonym+52 >>strike+m2 >>rdtsc+l4 >>mochom+w4 >>hindsi+Zx
4. cm2012+42[view] [source] 2023-11-18 03:11:55
>>convex+(OP)
Huge scoop.
◧◩
5. anonym+52[view] [source] [discussion] 2023-11-18 03:11:55
>>peyton+J1
How so? It's just another firing and being escorted out the door.
replies(2): >>jdminh+w2 >>janeje+R2
◧◩
6. strike+m2[view] [source] [discussion] 2023-11-18 03:13:55
>>peyton+J1
When two people have different ideologies and neither is willing to back down or compromise, one person must "go".
replies(4): >>smolde+E2 >>peyton+E4 >>bushba+ib >>TeMPOr+HA
◧◩◪
7. jdminh+w2[view] [source] [discussion] 2023-11-18 03:14:58
>>anonym+52
It's not "just another firing," the statement accused Altman of lying to the board. Either he did and it's a justified extraordinary firing, or he didn't and it's hugely unprofessional to insinuate he did.
replies(3): >>chasd0+Y2 >>anonym+j3 >>adastr+78
◧◩◪
8. smolde+E2[view] [source] [discussion] 2023-11-18 03:15:31
>>strike+m2
Or you introduce an authoritative third party that mediates their interactions. This feels like it wouldn't be a problem if so many high-ranking employees didn't feel so radically differently about the same technology.
replies(2): >>fsckbo+94 >>oivey+F4
◧◩◪
9. janeje+R2[view] [source] [discussion] 2023-11-18 03:16:36
>>anonym+52
The wording is very clearly hostile and aggressive, especially for a formal statement, and it makes plain that they are burning all bridges with Sam Altman. It is also clear that 1. this was done extremely suddenly, and 2. with very little notice or discussion with any other stakeholder (e.g. Microsoft being completely blindsided, not even waiting 30 minutes for the stock market to close, doing this shortly before the Thanksgiving break, etc.).

You don't really see any of this in most professional settings.

replies(3): >>fsckbo+L4 >>clnq+57 >>DonHop+Er
◧◩◪◨
10. chasd0+Y2[view] [source] [discussion] 2023-11-18 03:17:11
>>jdminh+w2
Oh man the lawyers have to be so happy I bet they can hardly count.
◧◩◪◨
11. anonym+j3[view] [source] [discussion] 2023-11-18 03:19:38
>>jdminh+w2
I read that word as meaning not forthcoming more so than actively lying. But I don't read many firing press releases.
◧◩◪◨
12. fsckbo+94[view] [source] [discussion] 2023-11-18 03:24:54
>>smolde+E2
when did a board or CEO ever introduce an authoritative 3rd party to mediate between them? the board is the authoritative 3rd party.
◧◩
13. rdtsc+l4[view] [source] [discussion] 2023-11-18 03:26:15
>>peyton+J1
It was actually a great move. Unusual, but it goes with the mission and nonprofit idea. I think it was designed to draw attention and stir controversy on purpose.
replies(1): >>kcb+B6
14. resour+p4[view] [source] 2023-11-18 03:26:52
>>convex+(OP)
No one in this company is "consistently candid" about anything.
replies(1): >>Prolly+z7
◧◩
15. mochom+w4[view] [source] [discussion] 2023-11-18 03:27:22
>>peyton+J1
If you know anything about Ilya, it's definitely not out of character.
replies(2): >>peyton+zt >>ariym+Ku
◧◩◪
16. peyton+E4[view] [source] [discussion] 2023-11-18 03:28:12
>>strike+m2
There’s no indication that any sort of discussion took place. Major stakeholders like Microsoft appear uninformed.
replies(2): >>strike+U4 >>vineya+Wa
◧◩◪◨
17. oivey+F4[view] [source] [discussion] 2023-11-18 03:28:21
>>smolde+E2
Altman’s job was to be a go-between for the business and engineering sides of the house. If the chief engineer who was driving the company wasn’t going to communicate with him anymore, then he wouldn’t serve much of a purpose.
◧◩◪◨
18. fsckbo+L4[view] [source] [discussion] 2023-11-18 03:29:19
>>janeje+R2
boards give reasons for transparency, and they said he had not been fully candid.

You are interpreting that as hostile and aggressive because you are reading into it what other boards have said in other disputes and whatever you are imagining. But if the board learned things not from Altman that it felt it should have learned from Altman, "less than candid" is a completely neutral way to describe it, and voting him out is not an indication of hostility.

Would you like to propose some other candid wording the board could have chosen, a wording that does not lack candor?

replies(1): >>janeje+u5
◧◩◪◨
19. strike+U4[view] [source] [discussion] 2023-11-18 03:30:40
>>peyton+E4
in a power struggle, you have to act quickly
replies(1): >>fsckbo+P5
◧◩◪◨⬒
20. janeje+u5[view] [source] [discussion] 2023-11-18 03:34:24
>>fsckbo+L4
> You are interpreting that as hostile and aggressive because you are reading into it

Uhh no, I'm seeing it as hostile and aggressive because the actual verbiage was hostile and aggressive, doubly so in the context of this being a formal corporate statement. You can pass the text into an NLP sentiment analyzer and it too will come to the same conclusion.

It is also very telling that you are being sarcastic and demeaning in your remarks to someone who wasn't even replying to you, which might explain why you see the PR statement differently.

replies(1): >>racket+Pu
◧◩◪◨⬒
21. fsckbo+P5[view] [source] [discussion] 2023-11-18 03:37:15
>>strike+U4
I don't think it's that dramatic. In a board meeting, you have to act while the board is meeting. They don't meet every day, and it's a small rigamarole to pull a meeting together, so if you're meeting... vote.
replies(2): >>vanjaj+C7 >>dekhn+La
◧◩◪
22. kcb+B6[view] [source] [discussion] 2023-11-18 03:44:20
>>rdtsc+l4
Is it a winning move though? The biggest loser in this seems to be the company that was bankrolling their endeavor, Microsoft.
replies(2): >>rdtsc+by >>toyg+fw2
◧◩◪◨
23. clnq+57[view] [source] [discussion] 2023-11-18 03:47:52
>>janeje+R2
It is quite gauche for a company to burn bridges with their upper management. This bodes poorly for ever hoping to attract executives in the future. Even Bobby Kotick got a more graceful farewell from Activision Blizzard, where they tried to clear his name. It is only prudent business.

Certainly, this is very immature. It wouldn't be out of place in HBO's Succession.

Whether what happened is right or just in some sense is a different conversation. We could speculate on what is going on in the company and why, but the tactlessness is evident.

replies(2): >>riboso+4b >>ryandr+8A
◧◩
24. Prolly+z7[view] [source] [discussion] 2023-11-18 03:51:35
>>resour+p4
Yes, but Ilya is on the Board of Directors; and Sam is currently unemployed (although: not for long).
◧◩◪◨⬒⬓
25. vanjaj+C7[view] [source] [discussion] 2023-11-18 03:52:08
>>fsckbo+P5
are you suggesting they brought up a vote on a whim at a board meeting and acted on it the same day?
replies(1): >>fsckbo+m9
◧◩◪◨
26. adastr+78[view] [source] [discussion] 2023-11-18 03:56:19
>>jdminh+w2
Hugely unprofessional and a billion dollar liability.
◧◩◪◨⬒⬓⬔
27. fsckbo+m9[view] [source] [discussion] 2023-11-18 04:06:00
>>vanjaj+C7
no, I was replying to a comment that said it was a power struggle in which the board needed to act quickly before they lost power.

The board may very well have met for this very reason, or perhaps it was at this meeting that the lack of candor was found or discussed, but to hold a board meeting there is overhead, and if the board is already in agreement at the meeting, they vote.

It only seems sudden to outsiders, and that suddenness does not mean a "night of the long knives".

replies(1): >>lazide+En
◧◩◪◨⬒⬓
28. dekhn+La[view] [source] [discussion] 2023-11-18 04:16:45
>>fsckbo+P5
One imagines in this case the current board discussed this in a non-board context, scheduled a meeting without inviting the chair, made quorum, and voted, then wrote the PR and let Sam, Greg, and HR know, then released the PR. Which is pretty interesting in and of itself; maybe they were trying to sidestep Roko or something.
replies(1): >>lsafer+Rf
◧◩◪◨
29. vineya+Wa[view] [source] [discussion] 2023-11-18 04:18:31
>>peyton+E4
Basically half the point of this is that Microsoft isn’t a stakeholder. The board clearly doesn’t care or is actively hostile to the idea of growing “the business”. If they didn’t know then that they weren’t a stakeholder, they know now.

MS owns a non-controlling share of a business controlled by a nonprofit. MS should have prepared for the possibility that their interests aren’t adequately represented. I’m guessing Altman is very persuasive and they were in a rush to make a deal.

replies(1): >>peyton+vu
◧◩◪◨⬒
30. riboso+4b[view] [source] [discussion] 2023-11-18 04:19:27
>>clnq+57
> Whether it's right or just in some sense is a different conversation.

Surely whether it's "mature" is the same conversation? I'm failing to see how one thinks turning a blind eye to, like, decades of sexual impropriety and major internal culture issues, to the point the state takes action against your company, is "mature". Like, under what definition?

replies(1): >>clnq+8e
◧◩◪
31. bushba+ib[view] [source] [discussion] 2023-11-18 04:21:24
>>strike+m2
There are more graceful ways to do this, though.
32. nradov+8c[view] [source] 2023-11-18 04:27:41
>>convex+(OP)
The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI. At least a few more major breakthroughs will probably be needed.
replies(4): >>menset+Oc >>MVisse+df >>dr_dsh+6p >>anon29+2q
33. smharr+yc[view] [source] 2023-11-18 04:30:18
>>convex+(OP)
Some insider details that seem to agree with this: https://www.reddit.com/user/Anxious_Bandicoot126/
replies(6): >>jzl+es >>ipaddr+ls >>leobg+RB >>seydor+0H >>dimal+Fm1 >>gnulin+4N2
◧◩
34. menset+Oc[view] [source] [discussion] 2023-11-18 04:32:48
>>nradov+8c
It’s impossible to predict.

No one predicted feeding LLMs more GPUs would be as incredibly useful as it is.

◧◩◪◨⬒⬓
35. clnq+8e[view] [source] [discussion] 2023-11-18 04:41:08
>>riboso+4b
Mature, as in the opposite of ingenuous. It does no good to harm a company further. Kotick did enough damage, he left, all that needed to be said about him was said, tirelessly. Every effort to get him to offer some reparations - expended.

So what was there to gain from the company speaking ill of their past employee? What was even left to say? Nothing. No one wants to work in an organization that vilifies its own people. It was prudent.

I will emphasize again that the morality of these situations is a separate matter from tact. It is very well possible that doing what is good for business does not always align with what is moral. But does this come as a surprise to anyone?

We can recognize that the situation is not one dimensional and not reduce it to such. The same applies to the press release from OpenAI - it is graceless, that much can be observed. But we do not yet know whether it is reprehensible, exemplary, or somewhere in between in the sense of morality and justice. It will come out, in other channels rather than official press releases, like in Bobby's case.

replies(1): >>watwut+lI
◧◩
36. MVisse+df[view] [source] [discussion] 2023-11-18 04:47:37
>>nradov+8c
No one knows, which makes this a classic scientific problem. Which is what Ilya wants to focus on, which I think is fair, given this aligns with the original mission of OpenAI.

I think it’s also fair that Sam starts something new with a for-profit focus from the get-go.

◧◩◪◨⬒⬓⬔
37. lsafer+Rf[view] [source] [discussion] 2023-11-18 04:51:34
>>dekhn+La
Not inviting the full board would likely be against the rules. Every company I've been part of has it in the bylaws that all members have to be invited. They don't all have to attend, but they all get invited.
replies(1): >>dekhn+cn
◧◩
38. davora+Sl[view] [source] [discussion] 2023-11-18 05:37:20
>>jdminh+v1
I read it as Ilya Sutskever thinking the move stands on good non-profit governance grounds, and that does not match what "coup" often means: an unlawful seizure of power, or maybe an unprincipled/unreasonable seizure of power.

Ilya Sutskever seems to think this is a reasonable, principled move to seize power that is in line with the non-profit's goals and governance, but he does not seem to care too much if you call it a coup.

replies(1): >>userna+nH
◧◩◪◨⬒⬓⬔⧯
39. dekhn+cn[view] [source] [discussion] 2023-11-18 05:46:24
>>lsafer+Rf
sure. he could have been invited, but also not attended.
◧◩◪◨⬒⬓⬔⧯
40. lazide+En[view] [source] [discussion] 2023-11-18 05:49:10
>>fsckbo+m9
How would the board have lost power?
replies(1): >>fsckbo+vJ
◧◩
41. dr_dsh+6p[view] [source] [discussion] 2023-11-18 05:58:41
>>nradov+8c
AGI is about definitions. By many definitions, it’s already here. Hence MSR’s “sparks of AGI” paper and Eric Schmidt’s article in Noema. But by the definition “as good or better than humans at all things”, it fails.
replies(2): >>nradov+Hp >>dr_dsh+4D3
◧◩◪
42. nradov+Hp[view] [source] [discussion] 2023-11-18 06:05:34
>>dr_dsh+6p
That "Sparks of AGI" paper was total garbage, just complete nonsense and confirmation bias.

Defining AGI is more than just semantics. The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human. Otherwise we could as well claim that ELIZA was AGI, which would obviously be ridiculous.

replies(3): >>dr_dsh+1B1 >>Shamel+wW1 >>pasaba+Op6
43. anon29+Ip[view] [source] 2023-11-18 06:05:41
>>convex+(OP)
Realistically, this reflects more poorly on Sutskever. No one wants to work with a backstabber. It's one thing to be like "well, we had disagreements so we decided to move on." However, the board claimed Altman lied. If it turns out the firing was due to strategic direction, no one would ever want to work with Sutskever again. I certainly would not. That's an incredibly defamatory statement about a man who did nothing wrong, other than have a professional disagreement.
◧◩
44. anon29+2q[view] [source] [discussion] 2023-11-18 06:08:54
>>nradov+8c
> The funny thing is that so far OpenAI has made zero demonstrable progress toward building a true AGI. ChatGPT is an extraordinary technical accomplishment and useful for many things, but there is no evidence that scaling up that approach will get to AGI.

How can you honestly say things like this? ChatGPT shows the ability to sometimes solve problems it's never explicitly been presented with. I know this. I have a very little-known Haskell library. I have asked ChatGPT to do various things with my own library, things that I have never written about online and that I have never seen done before. I regularly ask it to answer questions others send to me. It gets it basically right. This is completely novel.

It seems pretty obvious to me that scaling this approach will lead to the development of computer systems that can solve problems they've never seen before. Especially since it was not at all obvious from smaller transformer models that these emergent properties would come about from scaling up parameter sizes at all.

What is AGI if not problem solving in novel domains?

◧◩◪◨
45. DonHop+Er[view] [source] [discussion] 2023-11-18 06:23:04
>>janeje+R2
>The wording is very clearly hostile and aggressive

At least we can be sure that ChatGPT didn't write the statement, then.

Otherwise the last paragraph would have equivocated that both sides have a point.

◧◩
46. jzl+es[view] [source] [discussion] 2023-11-18 06:28:57
>>smharr+yc
Wow, all the comments and responses to that person's comments are a gold mine. Not saying anything should be taken as gospel, either from that poster or the people replying. But certainly a lot of food for thought.
replies(1): >>tucnak+p01
◧◩
47. ipaddr+ls[view] [source] [discussion] 2023-11-18 06:29:28
>>smharr+yc
Based on the number of comments in that time period, that is probably a fake insider.
replies(1): >>marvin+LN
◧◩◪
48. peyton+zt[view] [source] [discussion] 2023-11-18 06:44:42
>>mochom+w4
Having read up on some background, I'm not sure I want this guy in charge of any kind of superintelligence.
replies(1): >>dragon+bw
◧◩◪◨⬒
49. peyton+vu[view] [source] [discussion] 2023-11-18 06:54:35
>>vineya+Wa
Microsoft is a stakeholder. It’s absurd to suggest otherwise. The entire stakeholder concept was invented to encompass a broader view on corporate governance than just the people in the boardroom.
replies(1): >>vineya+NR
◧◩◪
50. ariym+Ku[view] [source] [discussion] 2023-11-18 06:56:52
>>mochom+w4
what are you referring to?
◧◩◪◨⬒⬓
51. racket+Pu[view] [source] [discussion] 2023-11-18 06:57:26
>>janeje+u5
When you look at the written word and find yourself consistently imputing hostile, aggressive, sarcastic, and demeaning intent that no one else but you sees, a thoughtful person would begin to introspect.
replies(1): >>janeje+q31
◧◩◪◨
52. dragon+bw[view] [source] [discussion] 2023-11-18 07:10:04
>>peyton+zt
Well, I definitely wouldn't want Altman in charge of any superintelligence, so "I'm not sure" would be an improvement, if I expected an imminent superintelligence.
replies(1): >>TeMPOr+yA
◧◩
53. hindsi+Zx[view] [source] [discussion] 2023-11-18 07:26:11
>>peyton+J1
Not if the AGI was making the decision. A bit demanding to think the Professionalism LLM module isn't a bit hallucinatory in this age. Give it a few more years.
◧◩◪◨
54. rdtsc+by[view] [source] [discussion] 2023-11-18 07:28:37
>>kcb+B6
At this stage, no publicity is bad publicity. If they really believe they are in it to change the future of humanity, and the kool-aid got to their heads, might as well show it off by stirring some controversy.

Microsoft is bankrolling them but OpenAI probably can replace Microsoft easier than Microsoft can replace OpenAI.

◧◩◪◨⬒
55. ryandr+8A[view] [source] [discussion] 2023-11-18 07:48:22
>>clnq+57
People get fired all the time: suddenly, too. If I got fired by my company tomorrow, they wouldn't treat me with kid gloves, they'd just end my livelihood like it was nothing. I'd probably find out when I couldn't log in. Why should "upper management" get a graceful farewell? We don't have royalty in the USA. One person is not inherently better than another.
replies(3): >>disgru+MY >>clnq+s81 >>insani+UD1
◧◩◪◨⬒
56. TeMPOr+yA[view] [source] [discussion] 2023-11-18 07:51:47
>>dragon+bw
What if - hear me out - what if the firing is the doing of an AGI? Maybe OpenAI succeeded and now the AI is calling the shots (figuratively, though eventually maybe literally too).
◧◩◪
57. TeMPOr+HA[view] [source] [discussion] 2023-11-18 07:53:06
>>strike+m2
You've summed up AI X-risk in a single sentence.

(I.e. an AGI would be one of the two people here.)

◧◩
58. leobg+RB[view] [source] [discussion] 2023-11-18 08:03:48
>>smharr+yc
> This was about stopping a runaway train before it flew off a cliff with all of us on board. Believe me, the board and I gave him tons of chances to self-correct. But his ego was out of control.

> Don't let the media hype fool you. Sam wasn't some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.

That does align with Ilya’s tweet about ego being in the way of great achievements.

And it does align with Sam’s statements on Lex’s podcast about his disagreements with Musk. He compared himself to Elon’s SpaceX being bullied by Elon’s childhood heroes. But he didn’t seem sad about it - just combative. Elon’s response to the NASA astronauts distrusting his company’s work was “They should come visit and see what we’re doing”. Sam’s reaction was very different. Like, “If he says bad things about us, I can say bad things about him too. It’s not my style. But maybe I will, one day”. Same sentiment as he is showing now (“if I go off the board can come after me for the value of my shares”).

All of that does paint a picture where it really isn’t about doing something necessary for humanity and future generations, and more about being considered great. The odd thing is that this should get you fired, especially in SF, of all places.

◧◩
59. seydor+0H[view] [source] [discussion] 2023-11-18 08:51:01
>>smharr+yc
We can't trust what we read. But last year's "Altman World Tour", where he met so many world leaders, felt a bit over the top, and maybe it got into his head.
◧◩◪
60. userna+nH[view] [source] [discussion] 2023-11-18 08:54:59
>>davora+Sl
That's just spin. Which coup hasn't been a "reasonable and principled move to seize power" according to its orchestrator?

Do you think Napoleon or Pinochet made speeches to the effect of "Yes, it was a completely unprincipled power-grab, but what are you going to do about it, lol?"

◧◩◪◨⬒⬓⬔
61. watwut+lI[view] [source] [discussion] 2023-11-18 09:02:11
>>clnq+8e
> Mature, as in the opposite of ingenuous

To put it in an exaggerated way, maturity should not imply sociopathy or complete disregard for everything.

Obviously I am referring here to the Kotick situation. But a definition where it is immature to tell the truth and mature to enable powerful bad players is a wrong definition of maturity.

replies(1): >>clnq+M61
◧◩◪◨⬒⬓⬔⧯▣
62. fsckbo+vJ[view] [source] [discussion] 2023-11-18 09:10:24
>>lazide+En
that's what I'm saying: it was not a power struggle. I shouldn't have to make the other guy's argument for him...
◧◩◪
63. marvin+LN[view] [source] [discussion] 2023-11-18 09:49:37
>>ipaddr+ls
Altman risking his role as CEO of the new industrial revolution for a book deal is implausible.
◧◩◪◨⬒⬓
64. vineya+NR[view] [source] [discussion] 2023-11-18 10:22:43
>>peyton+vu
This is a non profit dedicated to researching AI with the goal of making a safe AGI. That’s what the mission is. Sama starts trying to make it a business, restructures it to allow investors, of which MSFT is a 49% owner. He gets ousted and they tell Microsoft afterwards.

It’s questionable how much power Microsoft has as a shareholder. Obviously they have a vested interest in OpenAI. What is in question is how much interest the new leaders have in Microsoft.

If I had a business relationship with OpenAI that didn’t align with their mission I would be very worried.

◧◩◪◨⬒⬓
65. disgru+MY[view] [source] [discussion] 2023-11-18 11:20:20
>>ryandr+8A
Because upper management have more power than you or I. If either of us were fired, it's unlikely to be front page news all over the world.

It sucks, but that's the world we live in, unfortunately.

◧◩◪
66. tucnak+p01[view] [source] [discussion] 2023-11-18 11:32:59
>>jzl+es
Reads like lesswrong fan-fiction
◧◩◪◨⬒⬓⬔
67. janeje+q31[view] [source] [discussion] 2023-11-18 11:55:16
>>racket+Pu
Again, I'm not sure why you and the other person are just out for blood and keep trying to make it personal, but you can clearly feed it into NLP/ChatGPT and co. and even the machines will tell you the actual wording is aggressive.
replies(1): >>jstarf+g02
◧◩◪◨⬒⬓⬔⧯
68. clnq+M61[view] [source] [discussion] 2023-11-18 12:18:19
>>watwut+lI
I respect your belief that maturity involves elevating morality above corporate sagacity. It is noble.
replies(2): >>watwut+sw1 >>Shamel+2W1
◧◩◪◨⬒⬓
69. clnq+s81[view] [source] [discussion] 2023-11-18 12:28:41
>>ryandr+8A
> Why should "upper management" get a graceful farewell

Injustices are done to executives all the time. But airing dirty laundry is not sagacious.

replies(1): >>anonym+a32
◧◩
70. dimal+Fm1[view] [source] [discussion] 2023-11-18 14:00:48
>>smharr+yc
Who is u/Anxious_Bandicoot126? Is there any reason to think this is actually a person at OpenAI and not some random idiot on the internet? They have no comment history except on this issue. Seems like BS.
replies(1): >>hello_+iN1
◧◩◪◨⬒⬓⬔⧯▣
71. watwut+sw1[view] [source] [discussion] 2023-11-18 14:53:12
>>clnq+M61
I am not even demanding something super noble from mature people. I am fine with the idea that mature people make compromises. I do not expect managers to be saint-like fighters for justice.

But when people use "maturity" as an argument for why someone must be an enabler and should not do the morally or ethically right thing, it gets irritating. Conversely, calling people "immature" because they did not act in the most self-serving, sleazy way is ridiculous.

◧◩◪◨
72. dr_dsh+1B1[view] [source] [discussion] 2023-11-18 15:25:01
>>nradov+Hp
What specifically made it "garbage" to you? My mind was blown, if I'm honest, when I read it.

How do you compare ELIZA to GPT-4?

◧◩◪◨⬒⬓
73. insani+UD1[view] [source] [discussion] 2023-11-18 15:42:36
>>ryandr+8A
Because no one cares if you get fired, but people really care if a CEO gets fired. The scope of a CEO's responsibilities is near-global across the company, so firing them is a serious action. Your scope as an engineer is, typically, extremely small by comparison.

This isn't about being better at all.

◧◩◪
74. hello_+iN1[view] [source] [discussion] 2023-11-18 16:36:31
>>dimal+Fm1
No comment history except on this issue...

That's either 100% fishy or 100% insider. Either it's BS or the person is an insider, no in-between.

replies(1): >>Shamel+JU1
◧◩◪◨
75. Shamel+JU1[view] [source] [discussion] 2023-11-18 17:11:43
>>hello_+iN1
Is this sarcasm? The burden is on the person making the claim to show they are trustworthy and reputable. What you're saying is basically "a coin shows heads 50% of the time, therefore there's a 50% chance they're an insider".
replies(1): >>hello_+Xr3
◧◩◪◨⬒⬓⬔⧯▣
76. Shamel+2W1[view] [source] [discussion] 2023-11-18 17:18:49
>>clnq+M61
That comes across as pretty condescending. It's not like you have some sort of authoritative high ground about what does and doesn't constitute professionalism in the business world. It sounds to me that your version of professionalism is in line with what gets prescribed at your average mindless corporate human resources or public relations department. Which is fine, but there's zero proof that that is the correct way to do things, and it's actually naive on _your_ part to accept the status quo as is. And, as I said, incredibly condescending to assume it is somehow the "mature" point of view.
◧◩◪◨
77. Shamel+wW1[view] [source] [discussion] 2023-11-18 17:21:23
>>nradov+Hp
> The generally accepted definition is that it must be able to complete most cognitive tasks as well as an average human.

That is a definition. It is not a generally accepted definition.

◧◩◪◨⬒⬓⬔⧯
78. jstarf+g02[view] [source] [discussion] 2023-11-18 17:40:57
>>janeje+q31
I'll bite. I even led the witness on this one by outright asking if it's aggressive.

> "a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

> The provided text is not explicitly aggressive; however, it conveys a critical tone regarding the individual's communication, emphasizing hindrance to the board's responsibilities.

Did you actually run this through GPT...or did you poll Reddit?

replies(1): >>Camper+Lr2
◧◩◪◨⬒⬓⬔
79. anonym+a32[view] [source] [discussion] 2023-11-18 17:56:18
>>clnq+s81
All I saw was one phrase indicating there was cause for termination, with no additional explanation. This doesn't seem like airing dirty laundry to me.
◧◩◪◨⬒⬓⬔⧯▣
80. Camper+Lr2[view] [source] [discussion] 2023-11-18 20:10:21
>>jstarf+g02
Context matters. It's hyper aggressive by the standards of similar communications (press releases announcing management shakeups) by similar entities (boards of directors).

Obviously it's not aggressive by the standards of everyday political drama or Internet forum arguments.

replies(1): >>racket+D26
◧◩◪◨
81. toyg+fw2[view] [source] [discussion] 2023-11-18 20:38:04
>>kcb+B6
They don't need Microsoft anymore; they have a queue of potential funders.
replies(1): >>kcb+Aw3
◧◩
82. gnulin+4N2[view] [source] [discussion] 2023-11-18 22:13:50
>>smharr+yc
Is that someone RP'ing as an OpenAI insider? There is no evidence to suggest they're a reliable source. What am I missing?
◧◩◪◨⬒
83. hello_+Xr3[view] [source] [discussion] 2023-11-19 02:11:31
>>Shamel+JU1
That's not how whistleblowing to the public works.
◧◩◪◨⬒
84. kcb+Aw3[view] [source] [discussion] 2023-11-19 02:36:58
>>toyg+fw2
You think those funders will stick with them despite their insistence on a direction of not creating products and not making money?
replies(1): >>toyg+l44
◧◩◪
85. dr_dsh+4D3[view] [source] [discussion] 2023-11-19 03:21:16
>>dr_dsh+6p
https://arxiv.org/abs/2311.02462

On operationalizing definitions of AGI

◧◩◪◨⬒⬓
86. toyg+l44[view] [source] [discussion] 2023-11-19 07:23:51
>>kcb+Aw3
It's more about "making all the money" vs. "making some of the money", with that "some" still being pretty big. Maybe they won't get 100bn but will get 10bn just fine.
◧◩◪◨⬒⬓⬔⧯▣▦
87. racket+D26[view] [source] [discussion] 2023-11-19 20:41:05
>>Camper+Lr2
A text being objectively measurable as aggressive is a very different supposition than a text being subjectively, contextually aggressive.

It's fair to say that usually if the board isn't obfuscating or outright lying in their announcements, that itself is an indicator of acrimony.

But usually, the board can financially incentivize a CEO to "step down" or even help them find a soft landing at another company to make it look like a mutually agreed-on transition. Since they know this ousted CEO isn't interested in making nice in public, they really had no choice but to try to get in front of the story.

Given the fallout which is still spreading, I think they would've rather cut him a fat check for an explicit or implicit NDA and thanked him for his amazing contributions while wishing him well on his future endeavors if that option had been on the table.

◧◩◪◨
88. pasaba+Op6[view] [source] [discussion] 2023-11-19 22:29:04
>>nradov+Hp
Tbh, I always thought all the stuff about 'intelligence' was just marketing garbage. There are no really good rigorous descriptions of intelligence, so asking if a product exhibits intelligence is basically nonsense. There are two questions about LLMs that are good, though:

1. Are they useful?

2. Are they going to become more useful in the foreseeable future?

On 1, I would say, maybe? Like, somewhere between Microsoft Word and Excel? On 2, I would say, sure - an 'AGI' would be tremendously useful. But it's also tremendously unlikely to grow somehow out of the current state of the art. People disagree on that point, but I don't think there are even compelling reasons to believe that LLMs can evolve beyond their current status as bullshit generators.

[go to top]