zlacker

[parent] [thread] 10 comments
1. mi3law+(OP)[view] [source] 2023-11-18 07:46:48
It can be useful in certain contexts, most certainly as a code co-pilot, but that, and your or others' usage, doesn't change the fundamental mismatch between the limits of this tech and what Sam and others have hyped it up to do.

We've already trained it on all the data there is; it's not going to get "smarter," and it'll always lack true subjective understanding. So the overhype has been real, indeed to bubble levels, as per OP.

replies(2): >>tempes+x >>svnt+O
2. tempes+x[view] [source] 2023-11-18 07:51:08
>>mi3law+(OP)
> it's not going to get "smarter" and it'll always lack true subjective understanding

What is your basis for those claims? Especially the first one: I would think it's obvious that it will get smarter; the only questions are how much and how quickly. As far as subjective understanding goes, we're getting into nature-of-consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.

replies(1): >>mi3law+w8
3. svnt+O[view] [source] 2023-11-18 07:53:36
>>mi3law+(OP)
I would appreciate another example where a major new communications technology peaks in its implementation within the first year after it is introduced to the market.
replies(2): >>mi3law+Z6 >>Kaiser+C9
4. mi3law+Z6[view] [source] [discussion] 2023-11-18 08:50:30
>>svnt+O
FTX / crypto, which just imploded last year.

Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is itself so far from product-market fit that people are playing off the novelty of AGI, of GPT/AI being on its way to "smarter than human," rather than any real usage.

Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of overhype.

replies(1): >>svnt+T22
5. mi3law+w8[view] [source] [discussion] 2023-11-18 09:02:26
>>tempes+x
My basis for these claims is my research career, work described so far at aolabs.ai; it's still very much in progress, but from what I've learned I can respond to the two claims you're poking at--

1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, at a higher level, "has a subjective understanding of its own that another agent can reliably come to trust." I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way (a toy sketch below illustrates this).

2) This connects to the first point. Humans and animals of course aren't infinitely "smart;" we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.

So my claim is really one claim: that AI cannot perform the same tasks as a human, or reach a human's "true" intelligence level in the sense of not hallucinating like GPT, without having a subjective experience of its own.

There is no answer or understanding "out there;" it's all what we experience and come to understand.

This is my favorite topic. I have much more to share on it, including working code, though at the level of an extremely simple organism (thinking we can skip to human level, and even jump exponentially beyond that, is what I'm calling out as BS).
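To make the hallucination point from 1) concrete, here's a toy sketch (mine, purely to illustrate the argument; not the aolabs.ai code, and the corpus and numbers are made up). A bigram model trained on two true sentences learns only which word follows which -- associations between data -- so it can confidently assemble a false sentence it was never told:

    # Toy bigram "language model": it tracks only word-to-word
    # co-occurrence, with no notion of truth to check against.
    import random
    from collections import defaultdict

    corpus = [
        "paris is the capital of france",
        "berlin is the capital of germany",
    ]

    # Count word -> next-word transitions; this is all the model "knows".
    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            transitions[a].append(b)

    def generate(start, length=6):
        out = [start]
        for _ in range(length - 1):
            followers = transitions.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # Both continuations of "of" look equally good to the model, so it
    # can print "paris is the capital of germany": fluent, plausible, false.
    print(generate("paris"))

Scale that up by a few billion parameters and you get fluency, but a truth check still isn't anywhere in the mechanism.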

replies(2): >>calf+Kc >>Paul-C+1z
6. Kaiser+C9[view] [source] [discussion] 2023-11-18 09:11:27
>>svnt+O
> major new communications technology peaks in its implementation within the first year after it is introduced to the market.

Peak is perhaps the wrong word: a local maximum before falling into the Trough of Disillusionment.

7. calf+Kc[view] [source] [discussion] 2023-11-18 09:38:42
>>mi3law+w8
Then why can't the grounded truth of ChatGPT be born of a body of silicon and the emotional experience of zillions of lines of linguistic corpus?
replies(1): >>mi3law+rp
8. mi3law+rp[view] [source] [discussion] 2023-11-18 11:25:38
>>calf+Kc
Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT itself, so its performance involves associations between data, not associations between data and its own experience of that data.

Compare ChatGPT to a dog-- a dog's experience of an audible "sit" command maps to that particular dog's history of experience, manipulated through pain or pleasure (i.e. if you associate treat + "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can agree on them to certain degrees in lines of linguistic corpora. In fact, linguistic corpora are borne out of our experiences, our individual understandings, and that's a one-way arrow: something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.
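Here's a hypothetical sketch of that dog analogy (my toy code; the class, reward values, and threshold are made up for illustration). The word "sit" only acquires a value through this particular agent's own history of treats -- an association between data and experience, not between data and data:

    # Toy conditioning sketch: a word becomes meaningful to this agent
    # only via its own reward history, i.e. grounded experience.
    class Dog:
        def __init__(self):
            self.meaning = {}  # word -> value grounded in reward history

        def train(self, word, got_treat):
            # Treat + "sit" pairing strengthens this dog's own grounding.
            old = self.meaning.get(word, 0.0)
            self.meaning[word] = old + (1.0 if got_treat else -0.5)

        def obeys(self, word):
            # The dog acts only on words its own history made meaningful.
            return self.meaning.get(word, 0.0) > 0.0

    rex = Dog()
    for _ in range(3):
        rex.train("sit", got_treat=True)

    print(rex.obeys("sit"))   # True: grounded in rex's experience
    print(rex.obeys("roll"))  # False: no experience, no grounding

ChatGPT's pre-training has no analogue of that treat; every "sit" it sees maps only to other words.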

replies(1): >>calf+xw
9. calf+xw[view] [source] [discussion] 2023-11-18 12:17:05
>>mi3law+rp
But I'm not seeing an explicit reason why experience is needed for intelligence. You're repeating this point over and over but not actually explaining why; you're just assuming it's a given.
10. Paul-C+1z[view] [source] [discussion] 2023-11-18 12:32:16
>>mi3law+w8
I don't see why "does not hallucinate" is a viable definition for "intelligent." Humans hallucinate, both literally, and in the sense of confabulating the same way that LLMs do. Are humans not intelligent?
11. svnt+T22[view] [source] [discussion] 2023-11-18 21:13:50
>>mi3law+Z6
It seems we are talking about multiple different things. I never denied hype was a thing.

You’re talking about hype cycles now; previously, it seemed like you said AI was not going to advance.

LLMs are maybe headed into oversold territory, but LLMs are not the end of AI, even in the near term. They are just the UI front end.
