zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand them: because the model can predict words, you exploit the human tendency to anthropomorphize and pass it off as something capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of you who are so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.

2. Chicag+md 2023-05-16 12:32:56
>>srslac+I7
Why is it so hard to hear this perspective? Like, genuinely curious. This is the first time I've heard someone cogently put this thought out there, but it seems rather painfully obvious -- even if perhaps incorrect, it's certainly a perspective that is easy to comprehend and one that merits a lot of discussion. Why is it almost nonexistent? I remember even in the heyday of crypto fever you'd still have A LOT of folks providing counterarguments/differing perspectives, but with AI those voices seem extremely muted.
3. srslac+di 2023-05-16 12:58:56
>>Chicag+md
I'm not against machine learning; I'm against regulatory capture of it. It's an amazing technology. That still doesn't change the fact that these models are just function approximators trained to minimize loss on a dataset.
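
To make that concrete, here's a toy sketch (not any real model's training code, just the general recipe the term describes): pick a parametric function, define a loss over a fixed dataset, and nudge the parameters to reduce it.

    # Toy "function approximator": fit y = w*x + b by gradient descent
    # on mean squared error over a fixed dataset.
    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # samples of y = 2x + 1

    w, b, lr = 0.0, 0.0, 0.05
    for _ in range(2000):
        dw = db = 0.0
        for x, y in data:
            err = (w * x + b) - y          # prediction error on this sample
            dw += 2 * err * x / len(data)  # d(loss)/dw
            db += 2 * err / len(data)      # d(loss)/db
        w, b = w - lr * dw, b - lr * db    # step downhill on the loss

    print(w, b)  # converges toward 2.0 and 1.0 -- it can only fit what it's shown

The claim is that an LLM is this same recipe at enormous scale: a vastly bigger parametric function, a next-token prediction loss, and an internet-sized dataset.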
4. luxcem+FO 2023-05-16 15:34:33
>>srslac+di
> It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.

That fact does not entail what these models can or cannot do. For all we know, our brain could be a process that minimizes an unknown loss function.

But more importantly, where the SOTA is now does not predict where it will be in the future. What we do know is that there is rapid progress in this domain. An intelligence explosion may or may not be real, but it's foolish to ignore its potential consequences just because current AI models aren't that clever yet.

5. tome+o21 2023-05-16 16:25:45
>>luxcem+FO
> For all we know, our brain could be a process that minimizes an unknown loss function.

Every process minimizes a loss function.
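
To spell out why that makes the claim empty (my gloss on the point): for any observed behavior, you can define after the fact a loss that it trivially minimizes:

    \[
      \mathcal{L}(x) = \int \lVert x(t) - x^{*}(t) \rVert^{2}\, dt
      \qquad\Longrightarrow\qquad
      x^{*} = \operatorname*{arg\,min}_{x} \mathcal{L}(x)
    \]

So "the brain minimizes a loss function" is true by construction and tells us nothing about what the brain can or cannot do.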
