zlacker

[parent] [thread] 12 comments
1. strike+(OP)[view] [source] 2023-11-18 03:14:43
he seems like a more credible source than random people with no real ml experience
replies(2): >>reduce+w4 >>brigad+w5
2. reduce+w4[view] [source] 2023-11-18 03:47:27
>>strike+(OP)
That’s the funny thing, isn’t it? Hinton, Bengio, and Sutskever (OpenAI's chief scientist) all have strong opinions one way, but HN armchair experts handwave it away as fear mongering. Reminds me of climate change deniers. People just viscerally hate staring down upcoming disasters.
replies(3): >>rchaud+Y5 >>ianbut+47 >>mardif+Lk
3. brigad+w5[view] [source] 2023-11-18 03:55:07
>>strike+(OP)
Why do you believe "real ml experience" qualifies someone to speculate about the impact of what is currently science fiction technology on society?
replies(2): >>chpatr+97 >>aamoyg+IT1
◧◩
4. rchaud+Y5[view] [source] [discussion] 2023-11-18 03:58:23
>>reduce+w4
Not surprising when you consider the volume of posts on GPT threads hand-wringing about "free speech" because the chatbot won't use slurs.
replies(1): >>mardif+qm
◧◩
5. ianbut+47[view] [source] [discussion] 2023-11-18 04:07:22
>>reduce+w4
Yann LeCun is a strong counter to the doomerism, as one example. Jeremy Howard is another. There are plenty of high-profile, distinguished researchers who don't buy into that line of thinking. None of them are eschewing safety or ignoring the realities of how the technology can be used, but they aren't running the "AI will kill us all" line up the flagpole.
◧◩
6. chpatr+97[view] [source] [discussion] 2023-11-18 04:08:08
>>brigad+w5
It's rapidly turning into science fact, unless you've been living under a rock the last year.
replies(1): >>brigad+J9
◧◩◪
7. brigad+J9[view] [source] [discussion] 2023-11-18 04:28:35
>>chpatr+97
Science fiction or not, saying this person's opinion matters more because they have a better understanding of how it works is like saying automotive engineers should be considered experts on all social policy regarding automobiles.

Also it's not "rapidly turning into fact". There are still massive unsolved problems with AGI.

replies(1): >>chpatr+rb
◧◩◪◨
8. chpatr+rb[view] [source] [discussion] 2023-11-18 04:39:43
>>brigad+J9
I think the guy running the company that's gotten closest to AGI, one of the top experts in his field, knows more about what the dangers are, yes. Especially if they have something even scarier that they're not telling people about.
replies(1): >>brigad+ed
◧◩◪◨⬒
9. brigad+ed[view] [source] [discussion] 2023-11-18 04:50:48
>>chpatr+rb
There is no secret "scary" AGI hidden in their basement. Also, speculating about the damage true AGI could cause is not that difficult and does not require a PhD in ML.
replies(1): >>chpatr+Ce
◧◩◪◨⬒⬓
10. chpatr+Ce[view] [source] [discussion] 2023-11-18 05:00:14
>>brigad+ed
How would we know? They sat on GPT-4 for 8 months.
◧◩
11. mardif+Lk[view] [source] [discussion] 2023-11-18 05:46:36
>>reduce+w4
And I can cite tons of other AI experts who disagree with that. Even the people you listed have a much more nuanced opinion compared to the batshit insane AI doomerism that is common in some circles. So why compare it to climate change that has an overwhelming scientific consensus? That's quite a dishonest way to frame the debate.
◧◩◪
12. mardif+qm[view] [source] [discussion] 2023-11-18 05:57:08
>>rchaud+Y5
The only hand wringing is coming from white privileged liberals from SV who absolutely cannot fathom that the rest of the world does not want them to control what AI can and cannot say.

You can try framing it as some sort of "bad racists" versus the good and virtuous gatekeepers, but the reality is that it's a bunch of nerds with sometimes super insane beliefs (the SF AI field is full of effective altruists who think AI is the most important issue in the world, and weirdos in general) that will have outsized control over what can and can't be thought. It's just good old white saviorism, but worse.

Again, just saying "stop caring about muh freeze peach!!" doesn't work coming from one of the most privileged groups in the entire world (AI techbros and their entourage). Not when it's such a crucial new technology.

◧◩
13. aamoyg+IT1[view] [source] [discussion] 2023-11-18 17:19:34
>>brigad+w5
Submarines were considered science fiction shortly before WWI, and then technological advancement was so rapid that battleships were obsolete by the time they were built. Submarines weren't science fiction anymore, and they were used in unrestricted warfare.

Hope we don't do that with AI. Pretty sure our AGI is going to be similar to the AI seen in the Alien film franchise: it essentially emulates human higher-order logic, with key distinctions.