zlacker

[parent] [thread] 28 comments
1. convex+(OP)[view] [source] 2023-11-18 02:46:02
Followup tweet by Kara:

Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120

replies(3): >>noahjk+41 >>woeiru+f2 >>skepti+P3
2. noahjk+41[view] [source] 2023-11-18 02:54:03
>>convex+(OP)
Seems unreasonable to make such a drastic decision over feature releases. Doesn't pass the smell test to me.
3. woeiru+f2[view] [source] 2023-11-18 03:02:49
>>convex+(OP)
This isn’t believable. You don’t fire a CEO and put out a press release accusing them of lying over Dev Day. Unless he told them he wasn’t going to announce it and then did.
replies(4): >>yieldc+N2 >>brigad+r3 >>lumost+l4 >>alisto+x8
◧◩
4. yieldc+N2[view] [source] [discussion] 2023-11-18 03:07:27
>>woeiru+f2
I don't get the impression that everyone involved was that mature, given Greg's tweet.

I get the clout everyone has, but this was supposed to be a nonprofit that had already been coup d'état'd into a for-profit, and it grew extremely quickly into uncharted territory.

This isn't a multi-decade-old Fortune 500 company with a mature C-suite and board; it just masquerades as one with a stacked deck, which apparently is part of the problem.

replies(1): >>Cacti+r6
◧◩
5. brigad+r3[view] [source] [discussion] 2023-11-18 03:12:43
>>woeiru+f2
Reading about Ilya, it seems like he has fully bought into AI hysteria.
replies(2): >>strike+N3 >>wmf+j4
◧◩◪
6. strike+N3[view] [source] [discussion] 2023-11-18 03:14:43
>>brigad+r3
He seems like a more credible source than random people with no real ML experience.
replies(2): >>reduce+j8 >>brigad+j9
7. skepti+P3[view] [source] 2023-11-18 03:14:55
>>convex+(OP)
A GPT app builder is pushing too fast for Ilya?
replies(1): >>Silver+O4
◧◩◪
8. wmf+j4[view] [source] [discussion] 2023-11-18 03:17:15
>>brigad+r3
Did he buy into it after he took the money from Microsoft? Because it seems like there's no turning back after that point.
replies(1): >>holler+H8
◧◩
9. lumost+l4[view] [source] [discussion] 2023-11-18 03:17:30
>>woeiru+f2
An alternate possibility would be that OpenAI faces core technical challenges delivering Dev Day features/promises. If deals were signed, they could be forced to deliver even if the board et al. weren't aligned on the investment.
◧◩
10. Silver+O4[view] [source] [discussion] 2023-11-18 03:21:21
>>skepti+P3
Not the app builder but the app store and revenue sharing, if rumors are to be believed.
replies(1): >>silenc+f9
◧◩◪
11. Cacti+r6[view] [source] [discussion] 2023-11-18 03:32:10
>>yieldc+N2
Right?! Anyone paying attention back when Sam was brought on is not surprised. Sam and his investors _were_ the coup. They took an org specifically set up to do open research for the good of humanity, and Sam literally did the opposite. He monetized the work, sold it, didn't reinvest as promised, reduced transparency, and put the weights under lock and key. He rode in after the hard work had been done, took credit for it, and sold it to, lol, the fucking Borg of all people.

And many people here who should know better fell for it.

◧◩◪◨
12. reduce+j8[view] [source] [discussion] 2023-11-18 03:47:27
>>strike+N3
That's the funny thing, isn't it? Hinton, Bengio, and Sutskever, OpenAI's own chief scientist, all have strong opinions one way, but HN armchair experts handwave it away as fear-mongering. Reminds me of climate change deniers. People just viscerally hate staring down upcoming disasters.
replies(3): >>rchaud+L9 >>ianbut+Ra >>mardif+yo
◧◩
13. alisto+x8[view] [source] [discussion] 2023-11-18 03:48:51
>>woeiru+f2
This is actually a semi-plausible angle. Given Sam's personality, I could see a scenario where there was disagreement about whether something in particular would be announced at Dev Day. He may have told some people he would keep it under wraps, but ended up going forward with it anyway.

I don't understand how that escalates to the point that he gets fired over it, though, unless there was something deeper implied by what was announced at Dev Day.

Edit: There's a rumor floating around that "it" was the GPT store and revenue sharing. If that's the case, that's not even remotely a safety issue. It's just a disagreement about monetization, like how Larry and Sergey didn't want to put ads on Google.

replies(1): >>woeiru+Ni
◧◩◪◨
14. holler+H8[view] [source] [discussion] 2023-11-18 03:49:57
>>wmf+j4
That must be it because obviously no knowledgeable person could honestly come to believe that the technology is dangerous.
replies(1): >>wmf+ub
◧◩◪
15. silenc+f9[view] [source] [discussion] 2023-11-18 03:54:56
>>Silver+O4
That doesn't pass the smell test.

Those seem like implementation details. Really strange.

◧◩◪◨
16. brigad+j9[view] [source] [discussion] 2023-11-18 03:55:07
>>strike+N3
Why do you believe "real ml experience" qualifies someone to speculate about the impact of what is currently science fiction technology on society?
replies(2): >>chpatr+Wa >>aamoyg+vX1
◧◩◪◨⬒
17. rchaud+L9[view] [source] [discussion] 2023-11-18 03:58:23
>>reduce+j8
Not surprising when you consider the volume of posts on GPT threads hand-wringing about "free speech" because the chatbot won't use slurs.
replies(1): >>mardif+dq
◧◩◪◨⬒
18. ianbut+Ra[view] [source] [discussion] 2023-11-18 04:07:22
>>reduce+j8
Yann LeCun is a strong counter to the doomerism, as one example. Jeremy Howard is another. There are plenty of high-profile, distinguished researchers who don't buy into that line of thinking. None of them eschew safety or ignore the realities of how the technology can be used, but they aren't running the "AI will kill us all" line up the flagpole.
◧◩◪◨⬒
19. chpatr+Wa[view] [source] [discussion] 2023-11-18 04:08:08
>>brigad+j9
It's rapidly turning into science fact, unless you've been living under a rock the last year.
replies(1): >>brigad+wd
◧◩◪◨⬒
20. wmf+ub[view] [source] [discussion] 2023-11-18 04:12:06
>>holler+H8
I think he's telling the truth about his beliefs. But if he always believed that AI is dangerous, they should never have done the Microsoft deal.
replies(1): >>aamoyg+RY1
◧◩◪◨⬒⬓
21. brigad+wd[view] [source] [discussion] 2023-11-18 04:28:35
>>chpatr+Wa
Science fiction or not, saying this person's opinion matters more because they have a better understanding of how it works is like saying automotive engineers should be considered experts on all social policy regarding automobiles.

Also, it's not "rapidly turning into fact". There are still massive unsolved problems with AGI.

replies(1): >>chpatr+ef
◧◩◪◨⬒⬓⬔
22. chpatr+ef[view] [source] [discussion] 2023-11-18 04:39:43
>>brigad+wd
I think the guy running the company that's gotten closest to AGI, one of the top experts in his field, knows more about what the dangers are, yes. Especially if they have something even scarier that they're not telling people.
replies(1): >>brigad+1h
◧◩◪◨⬒⬓⬔⧯
23. brigad+1h[view] [source] [discussion] 2023-11-18 04:50:48
>>chpatr+ef
There is no secret "scary" AGI hidden in their basement. Also, speculating about the "damage" true AGI could cause is not that difficult and does not require a PhD in ML.
replies(1): >>chpatr+pi
◧◩◪◨⬒⬓⬔⧯▣
24. chpatr+pi[view] [source] [discussion] 2023-11-18 05:00:14
>>brigad+1h
How would we know? They sat on GPT4 for 8 months.
◧◩◪
25. woeiru+Ni[view] [source] [discussion] 2023-11-18 05:03:10
>>alisto+x8
It’s not a big enough issue for a normal board to fire the CEO over. Now maybe Ilya made a power play as a result, but that would be insane.
◧◩◪◨⬒
26. mardif+yo[view] [source] [discussion] 2023-11-18 05:46:36
>>reduce+j8
And I can cite tons of other AI experts who disagree with that. Even the people you listed have much more nuanced opinions compared to the batshit insane AI doomerism that is common in some circles. So why compare it to climate change, which has overwhelming scientific consensus? That's quite a dishonest way to frame the debate.
◧◩◪◨⬒⬓
27. mardif+dq[view] [source] [discussion] 2023-11-18 05:57:08
>>rchaud+L9
The only hand-wringing is coming from privileged white liberals from SV who absolutely cannot fathom that the rest of the world does not want them to control what AI can and cannot say.

You can try framing it as some sort of "bad racists" versus the good and virtuous gatekeepers, but the reality is that it's a bunch of nerds with sometimes super insane beliefs (the SF AI field is full of effective altruists who think AI is the most important issue in the world, and weirdos in general) who will have outsized control over what can and can't be thought. It's just good old white saviorism, but worse.

Again, just saying "stop caring about muh freeze peach!!" doesn't work coming from one of the most privileged groups in the entire world (AI techbros and their entourage). Not when it's such a crucial new technology.

◧◩◪◨⬒
28. aamoyg+vX1[view] [source] [discussion] 2023-11-18 17:19:34
>>brigad+j9
Submarines were considered science fiction shortly prior to WWI, and then you got such crazy technological advancement that battleships were obsolete by the time they were built. Well, submarines weren't science fiction anymore, and they were used in unrestricted warfare.

Hope we don't do that with AI. Pretty sure our AGI is going to be similar to that seen in the Alien film franchise: it essentially emulates human higher-order logic, with key distinctions.

◧◩◪◨⬒⬓
29. aamoyg+RY1[view] [source] [discussion] 2023-11-18 17:27:18
>>wmf+ub
Taking the Microsoft deal was incredibly smart. Microsoft acts as their shield and commercializes their technology (so that they don't need to worry about it), and in turn gives them nigh-unlimited resources to innovate. Microsoft builds the products it wants, and OpenAI gets to innovate. This sort of investment for wealth generation is pretty close to ideal capitalism, where people don't care about short-term profit. I wouldn't be surprised if Satya Nadella orchestrated this, because it's OpenAI's role to innovate responsibly and Microsoft's job to sell, essentially funding their innovation. So wtf is Altman for?