zlacker

[parent] [thread] 4 comments
1. gumbal+(OP)[view] [source] 2023-05-16 12:21:29
My laptop emits sound just as I do, but that doesn't mean it can sing or talk. It's software that does what it was programmed to do, and so is AI. It may mimic the human brain, but that's about it.
replies(1): >>thesup+W8
2. thesup+W8[view] [source] 2023-05-16 13:11:11
>>gumbal+(OP)
>> It’s software that does what it was programmed to, and so does ai.

That's a big part of the issue with machine learning models: they are opaque. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works, or by extension how to "fix bugs".

If we say it "does what it was programmed to do", then what exactly was it programmed to do? We have the data that was used to train it, but how will it respond to a given input? Who knows?

That doesn't necessarily mean they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them with any given function.

replies(2): >>gumbal+pw >>grumpl+pX1
3. gumbal+pw[view] [source] [discussion] 2023-05-16 15:04:53
>>thesup+W8
AI models are not just input and output data. The mathematics in between is designed to mimic intelligence. There is no magic, no supernatural force, no real intelligence involved. It does what it was designed to do. Many people don't know how computers work either, and some in the past thought cars and engines were the devil's work. There's no point in trying to exploit such folks in order to promote a product. We aren't meant to know exactly what it will output, because that's what it was programmed to do.
replies(1): >>shaneb+lz
4. shaneb+lz[view] [source] [discussion] 2023-05-16 15:18:46
>>gumbal+pw
"We arent meant to know exactly what it will output because that’s what it was programmed to do."

Incorrect: we can't predict its output because we can't look inside. That's a limitation, not a feature.

5. grumpl+pX1[view] [source] [discussion] 2023-05-16 22:03:01
>>thesup+W8
> no one really understands how it works or by extension how to "fix bugs".

I don't think this is accurate. Sure, no human can understand 500 billion individual neurons and what they are doing. But you can certainly look at some and say "these are giving a huge weight to this word, especially in this context, and that's pushing it towards this output".

You can also look at how inputs propagate through the network, the impact of hyperparameters, how the architecture affects things, etc. They aren't truly black boxes except by virtue of scale. You could use automated processes to probe the networks as well.
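
For concreteness, here's a minimal sketch of one such automated probe: gradient-times-input saliency on a toy PyTorch model. Everything here (the model, the sizes, the input) is made up for illustration; it just shows the kind of "which inputs pushed this output" question you can ask programmatically:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a model: 8 input features -> 2 output classes.
    model = nn.Sequential(
        nn.Linear(8, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )

    x = torch.randn(1, 8, requires_grad=True)  # one made-up input
    logits = model(x)
    logits[0, 1].backward()                    # explain the score for class 1

    # Gradient * input: a crude per-feature estimate of how much each input
    # dimension pushed the chosen output up or down.
    saliency = (x.grad * x).detach().squeeze()
    for i, s in enumerate(saliency.tolist()):
        print(f"feature {i}: contribution {s:+.3f}")

It doesn't "explain" the model in any deep sense, but it's the sort of thing you can run at scale to see which inputs a given output is leaning on.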
