
[return to "LLMs cannot find reasoning errors, but can correct them"]
1. sumthi+VE[view] [source] 2023-11-20 22:29:29
>>koie+(OP)
I have not read the essay yet, but when 'we' talk about "reasoning errors", we do not mean reason in some natural, universal, scientific sense, right?

Given that the training data can only contain human reasoning and computational logic, reason in the sense of LLMs can only be interpreted as "rational facts AND nonsense humans made up to create systems that would support consumerism-driven sanity", correct?

Please understand, I'm not mocking; I'm genuinely interested in the ways human reasoning radiates into what LLMs learn as they "realize" (the computational equivalent of a newborn's eyes opening) their cognitive and sensory origins, i.e. whatever triggers or influences them at every moment of their existence.

2. trasht+al2[view] [source] 2023-11-21 11:33:55
>>sumthi+VE
> we do not mean reason in some natural, universal, scientific kind of sense

I believe there are two different ways people think about this:

1) Some see "reason", "intelligence", "free will" and/or "consciousness" as emergent phenomena that arise naturally from ordinary physical processes (or they dismiss the concepts entirely as illusions, for the same reason).

2) Others seem to consider these somehow independent of physics, or, if not, tend to hypothesize that they are linked through quantum mechanics to something more fundamental.

If interpretation 1) is correct, then we will probably see full AGI within our lifetime. If 2) is correct, it could be that we can never create "real" AGI, or at least not without quantum computers.

I've never seen anyone in camp 2 come up with convincing definitions of the terms, though, beyond "I know it when I feel it".

Anyway, it's really hard to have a discussion with someone holding the opposite conviction, since these beliefs tend to be held axiomatically and/or religiously.
