zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand them: because the model can predict words, you exploit the human tendency to anthropomorphize, and it supposedly follows that the thing is capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

◧◩
2. Yajiro+Y8[view] [source] 2023-05-16 12:08:33
>>srslac+I7
Who is to say that brains aren't just regression based function approximators?
◧◩◪
3. gumbal+rb[view] [source] 2023-05-16 12:21:29
>>Yajiro+Y8
My laptop emits sound as I do, but that doesn't mean it can sing or talk. It's software that does what it was programmed to do, and so is AI. It may mimic the human brain, but that's about it.
◧◩◪◨
4. thesup+nk[view] [source] 2023-05-16 13:11:11
>>gumbal+rb
> It’s software that does what it was programmed to, and so does ai.

That's a big part of the issue with machine learning models: they are opaque. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works, or by extension how to "fix bugs".

If we say it "does what it was programmed to", what exactly was it programmed to do? We can point at the data used to train it, but how will it respond to a given input? Who knows?

That does not mean they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them with particular functions.

◧◩◪◨⬒
5. grumpl+Q82[view] [source] 2023-05-16 22:03:01
>>thesup+nk
> no one really understands how it works or by extension how to "fix bugs".

I don't think this is accurate. Sure, no human can understand 500 billion individual neurons and what each one is doing. But you can certainly look at some and say "these are giving a huge weight to this word, especially in this context, and that's weighting it toward this output".

You can also look at how activations propagate through the network, the impact of hyperparameters, how the architecture affects things, etc. They aren't truly black boxes except by virtue of scale. You could use automated processes to probe the networks as well.
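The weight-inspection point can be illustrated with a toy model (my own sketch, not anything from the thread — the vocabulary, data, and training loop are all made up): train a tiny bag-of-words classifier and read its learned weights to see which word pushes toward which output.

```python
# Toy illustration: even a hand-rolled model's weights can be read off
# and interpreted as "this word weights the output toward this class".
import numpy as np

vocab = ["great", "terrible", "movie", "plot"]
# Made-up bag-of-words sentiment data: 1 = positive, 0 = negative.
X = np.array([
    [1, 0, 1, 0],   # "great movie"    -> positive
    [1, 0, 0, 1],   # "great plot"     -> positive
    [0, 1, 1, 0],   # "terrible movie" -> negative
    [0, 1, 0, 1],   # "terrible plot"  -> negative
], dtype=float)
y = np.array([1.0, 1.0, 0.0, 0.0])

# Logistic regression trained by plain gradient descent.
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted P(positive)
    w -= 0.5 * X.T @ (p - y) / len(y)    # gradient step

# Inspect the learned weights per word.
for word, weight in zip(vocab, w):
    print(f"{word:>9}: {weight:+.2f}")
```

After training, "great" ends up with a large positive weight and "terrible" with a large negative one, while "movie" and "plot" (which appear equally in both classes) stay near zero — a miniature version of "these weights push toward this output". Scale, not some categorical opacity, is what makes the same exercise hard for a 500-billion-parameter model.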
