xpe (OP) | 2024-04-21 01:28:00
> Compare this to how humans and LLMs learn: they both have no problem with inconsistent information.

I don't have time to fully refute this claim, but it is very problematic.

1. Even a very narrow framing of how neural networks deal with inconsistent training data would perhaps warrant a paper, if not a Ph.D. thesis. Maybe this has already been done? Here is the problem statement: given a DNN with a fixed topology, trained with SGD against a given loss function, what happens when you present flatly contradictory training examples? What happens when the contradiction doesn't emerge until deeper layers of the network? Can we detect it? How? (A toy illustration follows this list.)

2. Do we really _want_ systems that passively tolerate inconsistent information? When I think of an ideal learning agent, I want one that engages with the learning process and seeks to resolve apparent contradictions. I haven't actively researched this area, but I'm confident that some people have, if only because Tom Mitchell at CMU emphasizes a variety of learning paradigms in his well-known ML book. So hopefully enough people reading this will think: "yeah, the usual training methods for NNs aren't really that interesting ... we can do better."

3. Just because humans 'tolerate' inconsistent information in some cases doesn't mean they handle it well compared to an ideal Bayesian agent. (A worked example follows this list.)

4. There are "GOFAI" algorithms for probabilistic reasoning that, in many cases, handle uncertainty better than DNNs do. (A minimal example follows.)
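
To make point 1 concrete, here is a minimal sketch, assuming a toy PyTorch MLP trained with SGD and binary cross-entropy; the architecture, learning rate, and 50/50 label split are arbitrary choices for illustration, not anything from the quoted claim. The same input is labelled 1 half the time and 0 the other half, and the network converges to outputting p ≈ 0.5: the contradiction is averaged into a probability, not detected.

```python
# Toy illustration (hypothetical setup): flatly contradictory labels for
# the same input, absorbed silently by ordinary SGD training.
import torch
import torch.nn as nn

torch.manual_seed(0)

# The single input x = 1.0 appears with label 1 half the time and label 0
# the other half: a flat contradiction at the data level.
X = torch.ones(100, 1)
y = torch.cat([torch.ones(50, 1), torch.zeros(50, 1)])

model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# The network quietly settles near p = 0.5 for x = 1.0: the contradiction
# is turned into a probability rather than flagged or resolved.
print(torch.sigmoid(model(X[:1])))  # ~0.5
```

Detecting the conflict would take something beyond this standard loop, e.g. tracking label disagreement for (near-)identical inputs or per-example losses that refuse to go down, which is exactly the open question in point 1.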
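
For contrast with point 3, a worked example of how an ideal Bayesian agent treats flatly contradictory reports. The prior and witness reliabilities are made-up numbers; the update itself is just Bayes' rule, and the point is that the agent quantifies the conflict rather than shrugging it off.

```python
# Hypothetical numbers: a binary hypothesis H and two witnesses of known
# reliability who give contradictory reports (A says H, B says not-H).

prior_h = 0.5            # P(H) before hearing anyone
rel_a, rel_b = 0.9, 0.7  # P(witness reports correctly | the truth)

# Likelihood of the observed pair of reports under each hypothesis:
like_given_h     = rel_a * (1 - rel_b)   # H true: A is right, B is wrong
like_given_not_h = (1 - rel_a) * rel_b   # H false: A is wrong, B is right

# Bayes' rule folds both contradictory reports into one posterior.
posterior_h = (like_given_h * prior_h) / (
    like_given_h * prior_h + like_given_not_h * (1 - prior_h)
)
print(posterior_h)  # ~0.79: the more reliable witness wins, but real uncertainty remains
```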
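
And for point 4, one concrete instance of the sort of "GOFAI" probabilistic reasoning presumably meant here: exact inference by enumeration over a toy two-node Bayesian network. The conditional probabilities are invented for illustration; variable elimination and belief propagation are the scaled-up relatives of the same idea.

```python
# A tiny Bayesian network, Rain -> WetGrass, queried by exact enumeration.
# All numbers are illustrative, not taken from any particular source.

P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def posterior_rain(wet_observed: bool) -> float:
    """P(Rain | WetGrass = wet_observed): enumerate both values of Rain,
    weight by the conditional probability tables, and normalise."""
    joint = {r: P_rain[r] * P_wet_given_rain[r][wet_observed]
             for r in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(posterior_rain(True))  # ~0.53: wet grass raises belief in rain from the 0.2 prior
```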
