zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping them up and scaring people who don't understand: because the model can predict words, you exploit the human tendency to anthropomorphize, and suddenly it follows that it's capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), the researchers who should know better, and the dementia-ridden legislators.

So utterly predictable and slimy. All of you who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

◧◩
2. chaxor+hB[view] [source] 2023-05-16 14:33:08
>>srslac+I7
What do you think about the papers showing mathematical proofs that GNNs (e.g. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone stating that they "simply" predict the next token is being severely shortsighted. Things don't have to be 'brain-like' to be useful or to be capable of reasoning, and we have evidence that these systems align well with reasoning tasks and perform well at causal reasoning, plus mathematical proofs that show how.
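
To make the claimed correspondence concrete, here's a minimal sketch (plain Python; the graph and all names are illustrative, not taken from any of those papers): a single min-aggregation message-passing round is exactly the Bellman-Ford relaxation, the textbook dynamic-programming update that the GNN/DP alignment proofs build on.

    INF = float("inf")

    def message_passing_step(dist, edges):
        """One synchronous message-passing round with min aggregation."""
        new_dist = dict(dist)
        for u, v, w in edges:  # message along edge (u, v): dist[u] + w
            new_dist[v] = min(new_dist[v], dist[u] + w)  # aggregate with min
        return new_dist

    # Toy weighted digraph: (source, target, weight)
    edges = [("s", "a", 1.0), ("s", "b", 4.0), ("a", "b", 2.0), ("b", "t", 1.0)]
    dist = {"s": 0.0, "a": INF, "b": INF, "t": INF}

    for _ in range(3):  # |V| - 1 rounds, as in Bellman-Ford
        dist = message_passing_step(dist, edges)

    print(dist)  # {'s': 0.0, 'a': 1.0, 'b': 3.0, 't': 4.0}

A GNN whose learned message and aggregation functions converge to this update is executing the algorithm itself, which is how it can extrapolate to graphs larger than anything in its training set.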

So I don't understand your sentiment.

◧◩◪
3. uh_uh+8E[view] [source] 2023-05-16 14:46:47
>>chaxor+hB
I just don't get how the average HN commenter thinks (and gets upvoted for claiming) that they know better than, e.g., Ilya Sutskever, who actually, you know, built the system. I keep reading this "it just predicts words, duh" rhetoric on HN, which is not at all believed by people like Ilya or Hinton. Could it be that HN commenters know better than these people?
◧◩◪◨
4. shafyy+MN[view] [source] 2023-05-16 15:31:24
>>uh_uh+8E
The thing is, experts like Ilya Sutskever are so deep in that shit that they are heavily biased, from both a technical and a social/economic perspective. Furthermore, experts are wrong all the time.

I don't think the average HN commenter claims to be better at building these systems than an expert. But to criticize, especially on economic, social, and political grounds, one doesn't need to be an expert on LLMs.

And finally, the motivations of people like Sam Altman and Elon Musk should be clear to everybody with half a brain by now.

◧◩◪◨⬒
5. Number+az1[view] [source] 2023-05-16 19:01:06
>>shafyy+MN
I honestly don't question Altman's motivations that much; I think he's blinded a bit by optimism. I also think he's genuinely worried about existential risks, which is a big reason why he's asking for regulation. He's specifically said on Lex Fridman's podcast that he thinks it's safer to invent AGI now, while we have less computing power, than to wait until we have more and the risk of a fast takeoff is greater, and that's why he's working so hard on AI.
◧◩◪◨⬒⬓
6. collab+jM1[view] [source] 2023-05-16 19:59:23
>>Number+az1
He's just cynical and greedy. The guy has a bunker with an airstrip and is eagerly waiting for the collapse he knows will come if the likes of him get their way.

They claim to serve the world but secretly want the world to serve them. Scummy 101.

◧◩◪◨⬒⬓⬔
7. Number+GU1[view] [source] 2023-05-16 20:43:10
>>collab+jM1
Having a bunker is also consistent with expecting that there's a good chance of apocalypse but working to stop it.