[return to "My AI skeptic friends are all nuts"]
1. parado+4u 2025-06-03 00:29:20
>>tablet+(OP)
I like Thomas, but his argument includes the same fundamental mistake I see made elsewhere. He acknowledged that the tools need an expert to use them properly, and, as he illustrated, he refined that expertise over many years. He is among the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?

I can almost anticipate an interjection along the lines of "well, we used to build everything with our hands and now we have tools, it's just different", but this is an order of magnitude different. This is asking a robot to design and assemble a shed for you: you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
2. hatthe+GC 2025-06-03 01:50:42
>>parado+4u
I think the main difference between shortcuts like "compilers" and shortcuts like "LLMs" is determinism. I don't need to know assembly because I use a compiler that is very well specified, in some cases even formally verified to introduce no errors, and that errs on the side of caution unless specifically told otherwise.

On the other hand, LLMs are highly nondeterministic. They often produce correct output for simple things, but only because those things are simple enough that the probability of getting them wrong is vanishingly small; there is still no guarantee they won't get them wrong. For more complicated things, LLMs are terrible and need very well specified guardrails. They will bounce around inside those guardrails until they make something correct, but that's more of a happy accident than a mathematical guarantee.
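To make the guardrail point concrete, here's a minimal sketch of the loop I have in mind; llm_generate and run_tests are hypothetical stand-ins for illustration, not any real API:

    import random

    def llm_generate(prompt: str) -> str:
        # Hypothetical stand-in for a model call. Real models sample
        # tokens, so two calls with the same prompt can differ.
        return f"candidate-{random.randint(0, 9)}"

    def run_tests(candidate: str) -> bool:
        # The deterministic guardrail: same input, same verdict, every
        # time. Toy acceptance criterion, purely for illustration.
        return candidate.endswith("7")

    def generate_until_tests_pass(prompt: str, max_attempts: int = 20):
        # Bounce around inside the guardrails until something passes.
        # A pass means the sampling got lucky, not that the output is
        # correct in any property the tests don't check.
        for _ in range(max_attempts):
            candidate = llm_generate(prompt)
            if run_tests(candidate):
                return candidate
        return None  # no guarantee the loop ever succeeds

The only determinism in that loop lives in run_tests; anything the tests don't encode is left to chance.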

LLMs aren't a level of abstraction; they're an independent entity. They're the equivalent of a junior coder with no long-term memory, who therefore has to write everything down. You just have to hope they don't forget to write something down, and that some deterministic automated test will catch them if they do.

If you could hire an unpaid intern with long-term memory loss, would you?

3. red75p+VD 2025-06-03 02:04:59
>>hatthe+GC
> If you could hire an unpaid intern with long-term memory loss, would you?

It's clearly a deficiency, and that's why one of the next generations of AIs will have long-term memory and online learning. Even the current generation of models already shows signs of self-correction, which somewhat mitigates the "random walk" you mentioned.

4. hatthe+S41 2025-06-03 07:01:53
>>red75p+VD
ok, let me know when that happens
5. red75p+od1 2025-06-03 08:30:16
>>hatthe+S41
Don't worry. It would be hard to miss.

But, seriously: while it's not an easy task (otherwise we'd have seen it done already), it doesn't seem to be the kind of task that requires a paradigm shift or some fundamental discovery. It's a search in the space of network architectures.

Of course, we're talking about something that hasn't been invented yet, so I can't provide ironclad arguments the way I could with, say, fusion power, where we know what to do and it's just hard to do.

There is circumstantial evidence, though: complex problem-solving skills evolved independently in several unrelated groups of species: Homo, Corvidae, Octopoda. That points either to the existence of multiple solutions to the problem of intelligence, or to a solution that isn't all that complex.

Anyway, with all the resources being poured into AI development, we will see the results (one way or another) soon enough. If long-term memory isn't incorporated into AIs within about five years, then I'm wrong, and it is indeed likely a fundamental problem with the current approach.
