zlacker

[return to "Ask HN: Is anyone else getting AI fatigue?"]
1. Frustr+Sf[view] [source] 2023-02-09 12:53:17
>>grader+(OP)
Engineers are always building incredible things, then turning their backs on them as ordinary once the problem is solved: “oh, that’s so normal, it was just a little math, a little tweak, no big deal.”

AI has gone through a lot of stages of “only a human can do X” -> “AI does X” -> “oh, that’s just some engineering, that’s not really human” or “X is no longer in the category of mystical things we can’t explain that a human can do”.

LLMs are just the latest iteration: “wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human’s)” -> “doh, it’s just some engineering (it’s just a fancy autocomplete)”.

Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You may not like the terminology, but how is reinforcement “learning” not essentially the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?

Start digging into the human with the same engineering view, and suddenly the human also just becomes a bunch of parts. Where is the human in the human once all the parts are explained the way an engineer would explain them? What would be left? The human is computation too, unless you believe in souls or other otherworldly mysticism. So why not think that AI, as computation, can eventually be equal to a human?

The fact that GitHub Copilot can write bad code isn’t a knock on AI either; a lot of humans write bad code too.

2. noober+nj1[view] [source] 2023-02-09 17:10:42
>>Frustr+Sf
First off, I'm not sure why this is the most upvoted comment. The OP explicitly praises AI; he just smells the same grifters gathering around it like they did around crypto, and he's absolutely right, it is the exact same folks. He isn't claiming the mind is metaphysical or anything like that.

As for your claim that the mind is either metaphysical or a NN: you have to understand that this false dichotomy is quite a stretch in itself, as if there were no other possibilities, as if it weren't a spectrum or couldn't be something else entirely. One of the "old guard" critiques of NNs is the lack of symbolic intelligence. Claiming you don't need it, and that curve fitting alone is enough, is suspect: even with OpenAI-tier training, the grammar is there but some of the semantic understanding is lacking. Appealing to the god of the gaps is a fallacy for a reason, although it may in fact turn out to be true that more training is all that's needed. EDIT: Anyway, the point is that assuming symbolic reasoning is a part of intelligence (hell, it's how we discuss things) doesn't require mysticism; it's just an aspect that NNs currently don't have, or, very charitably, do not appear to have quite yet.

Regardless, there isn't really evidence that "what brains do is what NNs do", or vice versa. However many times that argument has been pushed, it has been driven primarily by analogy. But just because a painting looks like an apple doesn't mean you can eat the canvas. Similarities might betray some underlying relationship (the artist who made the painting took reference from an actual apple you can eat), but assuming an equivalence without evidence is just strange behavior, and I'm not sure to what purpose.

[go to top]