zlacker

1. mi3law+(OP) 2023-11-18 11:25:38
Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT itself, so its performance involves associations between data, not associations between data and its own experience of that data.
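
For concreteness, pre-training here is just next-token prediction over the corpus. A minimal sketch of one training step (in PyTorch; the model, optimizer, and function names are hypothetical, not anything from OpenAI's actual code) shows that the only signal flowing into the weights is text predicting text:

    import torch.nn.functional as F

    def pretrain_step(model, optimizer, token_ids):
        """One gradient step of next-token prediction.

        Logits at position t are scored against the token at position t+1.
        Nothing in the loss refers to anything outside the corpus itself.
        """
        inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
        logits = model(inputs)                    # (batch, seq, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # (batch*seq, vocab)
            targets.reshape(-1),                  # (batch*seq,)
        )
        optimizer.zero_grad()
        loss.backward()   # backprop: adjust weights and biases
        optimizer.step()
        return loss.item()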

Compare ChatGPT to a dog: a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (e.g. if you pair a treat with "sit", you get a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can agree on them together, to certain degrees, through linguistic corpora. In fact, those linguistic corpora are born out of our experiences, our individual understandings, and that's a one-way arrow: something trained purely on the resulting data is always one abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.

replies(1): >>calf+67
2. calf+67 2023-11-18 12:17:05
>>mi3law+(OP)
But I'm not seeing an explicit reason why experience is needed for intelligence. You're repeating this point over and over but not actually explaining why; you're just assuming it's a given.