zlacker

1. reissb+(OP) 2023-11-20 07:31:57
The board fired Altman for shipping too fast compared to their safety-ist doom preferences. The new interim CEO has said that he wants to slow AI development down 80-90%. Why on earth would you stay, if you joined to build + ship technology?

Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.

If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.

replies(2): >>Booris+l1 >>mianos+53
2. Booris+l1 2023-11-20 07:41:02
>>reissb+(OP)
Not enough people understand what OpenAI was actually built on.

OpenAI would not exist if FAANG had been capable of getting out of its own way and shipping things. The moment OpenAI starts acting like the companies these people left, it's a no-brainer that they'll start looking for the door.

I'm sure Ilya has 10 lifetimes more knowledge than me locked away in his mind on topics I don't even know exist... but the last 72 hours have seen the most brain-dead actions I've ever witnessed from the leadership of a company.

This isn't even cutting off your own nose to spite your face: this is like slashing your own tires to avoid driving in the wrong direction.

The only possible justification would have been some jailable offense from Sam Altman, and ironically their initial release almost seemed to want to hint at that before they were forced to explicitly state that wasn't the case. At the point where you're forced to admit you surprise-fired your CEO for relatively benign reasons, how much must have gone completely sideways to land you in that position?

replies(2): >>jonbel+b3 >>xvecto+R3
3. mianos+53 2023-11-20 07:52:33
>>reissb+(OP)
This is exactly why you would want people on the board who understand the technology. Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI. Understanding that is a technical matter that seems to be beyond most people without real experience in the field. It is certainly beyond the understanding of some dude who lucked into a great training set and became an expert, much the same way The Knack became industry leaders.
replies(1): >>famous+I4
4. jonbel+b3 2023-11-20 07:52:59
>>Booris+l1
It’s possible to be extremely smart in one narrow way and a complete idiot when it comes to understanding leadership, people, politics, etc.

For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”

replies(1): >>mschus+In
5. xvecto+R3 2023-11-20 07:57:15
>>Booris+l1
I really hope this comes back around and bites Ilya and OAI in the ass. What an absurd decision. They will rightfully get absolutely crushed by the free market.
replies(1): >>Booris+Be
6. famous+I4 2023-11-20 08:01:32
>>mianos+53
>Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI.

So Ilya Sutskever, one of the most distinguished ML researchers of his generation, does not understand the technology?

The same guy who's been on record saying LLMs are enough for AGI?

replies(3): >>lucubr+5i >>mianos+Hi >>fallin+fm
7. Booris+Be 2023-11-20 08:43:09
>>xvecto+R3
Looks like you got your wish earlier than anyone would have expected: https://twitter.com/satyanadella/status/1726509045803336122
8. lucubr+5i 2023-11-20 08:59:16
>>famous+I4
To be clear, he thinks that LLMs are probably a general architecture, and thus capable of reaching AGI in principle given enormous amounts of compute, data, and work. He also thinks that, for cost and economic reasons, it's much more feasible to build or train other parts and have them work together, because that's much cheaper in terms of compute.

As an example: with a big enough model, enough work, and the right mix of data, you could probably have an LMM interpret speech just as well as Whisper can. But how much work does it take to make that happen without losing other capabilities? How efficient is the resulting huge model? Is the end result better than having the text/intelligence segment separate from the speech and hearing segment? The answer could be yes, depending, but it could also be no.

Basically, his belief is that it's complicated: it's not really a "can X architecture do this" question but a "how cheaply can this architecture accomplish this task" question.
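
A toy sketch of the tradeoff he's describing (the Whisper calls follow the open-source openai-whisper package's API; `text_llm` and `end_to_end_lmm` are hypothetical stand-ins, not real functions):

    # Modular approach: a small, specialized speech model feeds a text-only LLM.
    # Each piece is trained separately, which is comparatively cheap.
    import whisper  # the open-source openai-whisper package

    def text_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for any text-only LLM")

    def end_to_end_lmm(audio_path: str) -> str:
        raise NotImplementedError("stand-in for one big end-to-end multimodal model")

    def modular_answer(audio_path: str) -> str:
        speech_model = whisper.load_model("base")           # specialized transcriber
        text = speech_model.transcribe(audio_path)["text"]  # speech -> text
        return text_llm(text)                               # text -> intelligence

    # Monolithic approach: one huge model ingests audio directly. Possible in
    # principle, but the open question is what it costs to train without
    # degrading the model's other capabilities.
    def monolithic_answer(audio_path: str) -> str:
        return end_to_end_lmm(audio_path)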
replies(1): >>famous+YT1
9. mianos+Hi 2023-11-20 09:03:30
>>famous+I4
Sorry, I am not including Ilya when I say some don't understand the technology.

In fact, he is exactly the type who should be on the board.

He is not the one saying 'slow down, we might accidentally invent an AGI that takes over the world'. As you say, he says LLMs are not a path to a world-dominating AGI.

10. fallin+fm 2023-11-20 09:26:12
>>famous+I4
AGI doesn't exist. There is no standard for what makes an AGI, nor a test to prove that an AI is or isn't an AGI once built. There is no engineering design for even a hypothetical AGI, like there is for other hypothetical tech such as a fusion reactor, so we have no idea if it is even similar to existing machine-learning designs. So how can you be an expert on it? Being an expert on existing machine-learning tech, which Ilya absolutely is, doesn't grant this status.
replies(1): >>famous+cU1
11. mschus+In 2023-11-20 09:34:21
>>jonbel+b3
> For example, Elon Musk was smart enough to do some things … then he crashed and burned with Twitter because it’s about people and politics. He could not have done a worse job, despite being “smart.”

That is, if you do not subscribe to one of the various theories that his sinking of Twitter was intentional. The most popular ones I've come across are "Musk wants revenge for Twitter turning his daughter trans", "Saudi Arabia wants to get rid of Twitter as a trusted-ish network/platform to prevent another Arab Spring" and "Musk wants to cozy up to a potential next Republican presidency".

Personally, I think all three have merit - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter; to a large degree, it's other people's money.

replies(1): >>dragon+h71
12. dragon+h71 2023-11-20 14:04:42
>>mschus+In
> Personally, I think all three have merit - because otherwise, why didn't the Saudis and other financiers go and pull an Altman on Musk? It's not Musk's personal money he's burning on Twitter; to a large degree, it's other people's money.

Of the $46 billion Twitter deal ($44B equity + $2B debt buyout), it was:

* $13 billion in loans (bank funded)

* $33 billion in equity -- of this, ~$9 billion was estimated to come from outside investors (the Saudis, Larry Ellison, etc.)

So of the equity, it's about 30% other investors and 70% Elon Musk's money.
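
A quick back-of-the-envelope check of that split, taking the rough figures above at face value (the ~$9B outside-investor number is an estimate, not an official filing):

    # Back-of-the-envelope check of the split described above (figures in $B)
    loans = 13        # bank-funded debt
    equity = 33       # total equity portion; loans + equity = the $46B deal
    outside = 9       # estimated non-Musk equity (Saudis, Larry Ellison, etc.)

    musk = equity - outside   # ~$24B of equity from Musk himself
    print(f"outside share of equity: {outside / equity:.0%}")  # ~27%, i.e. roughly 30%
    print(f"Musk share of equity:    {musk / equity:.0%}")     # ~73%, i.e. roughly 70%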

13. famous+YT1 2023-11-20 17:40:38
>>lucubr+5i
This is wholly beside the point. The person I'm replying to is clearly saying that the only people who believe "GPT is on the path to AGI" are non-technical people who don't "truly understand". Blatantly false.

It's like an appeal to authority against an authority that isn't even saying what you're appealing for.

14. famous+cU1 2023-11-20 17:41:20
>>fallin+fm
This is wholly beside the point. The person I'm replying to is clearly saying that the only people who believe "GPT is on the path to AGI" are non-technical people who don't "truly understand". Blatantly false. It's like an appeal to authority against an authority that isn't even saying what you're appealing for.