zlacker

[parent] [thread] 6 comments
1. galaxy+(OP)[view] [source] 2023-11-21 06:30:33
When you train an LLM you do it by executing some computer code with some inputs. The programmers who wrote the code you execute know exactly what it does, just like Google knows exactly how its search algorithm works. An LLM uses statistics and Markov chains and what have you to generate the output for a given input.
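
To make that concrete, here is roughly what "uses statistics to generate the output" looks like at toy scale (my own sketch with a hand-written table; a real LLM conditions on the whole context with billions of learned parameters):

    import random

    # Toy "learned" statistics: probability of each next word given the previous word.
    # These numbers are made up for illustration; in a real LLM they come from training.
    next_word_probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
    }

    def generate(word, steps=3):
        out = [word]
        for _ in range(steps):
            choices = next_word_probs.get(word)
            if not choices:
                break
            words, probs = zip(*choices.items())
            word = random.choices(words, weights=probs)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat"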

It's like with any optimization algorithm. You cannot predict exactly what the result of a given optimization run will be, but you know how the optimization algorithm works. The (more or less) optimal solution you get back might surprise you, might be counter-intuitive. But the programmers who wrote the code that did the optimization, and who have the source code, know exactly how it works.
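
That unpredictability is easy to show in a few lines (a toy optimizer of my own, nothing to do with any actual LLM training code): the procedure is fully known in advance, the particular answer a given run lands on is not.

    import random

    def objective(x):
        # An objective the programmer wrote and fully understands.
        return -(x ** 4) + 5 * x ** 2 + x

    # Simple stochastic hill climbing: every line of the procedure is known up front...
    x = random.uniform(-3, 3)
    for _ in range(10_000):
        candidate = x + random.gauss(0, 0.1)
        if objective(candidate) > objective(x):
            x = candidate

    # ...but which local optimum a given run ends up in depends on the random start,
    # and the answer can still surprise the author of the code.
    print(round(x, 3), round(objective(x), 3))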

When you get a result from an LLM you don't get to say "I can't possibly understand why it came up with this result." You can understand that; it's just following the rules it was programmed to follow. You might not know those rules, you might not understand them, but the programmers who wrote them do.

replies(4): >>IanCal+Z7 >>trasht+Ms >>jibal+hq2 >>kromem+oN3
2. IanCal+Z7[view] [source] 2023-11-21 07:46:13
>>galaxy+(OP)
You're mixing up what we mean by "what rules it's following" or "how it's working".

If I ask how it's able to write a poem given a request, and you tell me you know - it multiplies and adds this set of 1.8 trillion numbers together X times with this set of accumulators - I would argue you don't understand how it works well enough to make any useful predictions.
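
For what it's worth, that level of "understanding" looks roughly like this (a toy sketch with made-up sizes and random weights, nothing like a real 1.8-trillion-parameter model):

    import numpy as np

    # A forward pass described at the multiply-and-add level: toy sizes, random weights.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(16)            # input activations
    W1 = rng.standard_normal((64, 16))     # "learned" parameters (random here)
    W2 = rng.standard_normal((8, 64))      # more parameters

    h = np.maximum(W1 @ x, 0)              # multiply, add, clamp
    logits = W2 @ h                        # multiply, add again

    # Every arithmetic step above is perfectly known. None of it tells you why
    # a particular poem, rather than some other text, would fall out at the end.
    print(logits)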

Kind of like how you understand what insane spaghetti code is doing - it's running this code - but can have absolutely no idea what business logic it encodes.

replies(1): >>galaxy+0A1
3. trasht+Ms[view] [source] 2023-11-21 10:41:52
>>galaxy+(OP)
I think of LLMs as if we created a human stem cell from scratch, including the DNA, and then grew it into a person.

We may know where we put every single atom in that stem cell, but still not know any more about the resulting baby (and later adult) than we do about humans made the natural way.

Oh, and if you're looking for reasons to regulate AI, this metaphor works for that, too.

4. galaxy+0A1[view] [source] [discussion] 2023-11-21 16:50:34
>>IanCal+Z7
It is not "spaghetti code" but well-engineered code, I believe. The output of an LLM is based on billions of fine-tuned parameters, but we know how those parameters came about: by executing the code of the AI application in training mode.

It doesn't really encode "business logic"; it just matches your input with the best output it can come up with, based on how its parameters are fine-tuned. Saying that "we don't understand how it works" is just unnecessary AI mysticism.

replies(1): >>IanCal+bJ1
5. IanCal+bJ1[view] [source] [discussion] 2023-11-21 17:22:40
>>galaxy+0A1
The spaghetti code comparison is not to the code but the parameters.

> It doesn't really encode "business logic"

Doesn't it? GPT architectures can build world models internally while processing tokens (see Othello-GPT).

> we know how those parameters came about, by executing the code of the AI-application in the training mode.

Sure. But that's not actually a very useful description when trying to figure out how to use and apply these models to solve problems or understand what their limitations are.

> Saying that "We don't understand how it works" is just unnecessary AI-mysticism.

We don't, to the level we want to.

Tell you what, let's flip it around. If we know how they work just fine, why are smart researchers doing experiments with them? Why is looking at the code and billions or trillions of floats not enough?

6. jibal+hq2[view] [source] 2023-11-21 20:16:22
>>galaxy+(OP)
What you fail to appreciate is that the operation of an LLM is driven by the input data far more than is the case with most programs. Typical programs have a lot of business logic that determines their behavior--rules, as you say. E.g., an optimizing compiler has a large number of hand-crafted optimizations that are invoked when code fits the pattern they are intended for. But LLMs don't have programmed cases or rules like that--the core algorithms are input-agnostic. All of the variability of the output is purely a reflection of patterns in the input; the programmers never made any sort of decision like "if this word is seen, do this".
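
A crude way to see that contrast in code (my own toy sketch, with an invented rewrite rule and random weights): the compiler-style program branches on patterns a programmer wrote down; the LLM-style core is the same arithmetic no matter what the input is, with all the "rules" living in parameters that came from data.

    import numpy as np

    # Compiler-style: behavior comes from hand-crafted, case-by-case rules.
    def optimize(expr):
        if expr == ("mul", "x", 2):     # a pattern the programmer anticipated
            return ("shl", "x", 1)      # strength reduction
        if expr == ("add", "x", 0):
            return "x"                  # identity elimination
        return expr

    # LLM-style core: input-agnostic arithmetic; the programmer never decided
    # "if this word is seen, do this" - that behavior is implicit in W.
    def forward(W, x):
        return W @ x

    print(optimize(("mul", "x", 2)))
    rng = np.random.default_rng(0)
    print(forward(rng.standard_normal((4, 4)), rng.standard_normal(4)))
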
7. kromem+oN3[view] [source] 2023-11-22 04:27:17
>>galaxy+(OP)
The training is known. The result is not.

The gap between what you think is the case and what's actually the case is that there isn't a single optimization step directed by the programming.

Instead, the training gives the network the freedom to make its own optimizations, which remain opaque to the programmers.

So we do know that we are giving the network the ability to self-modify in order to optimize its performance on the task, and we have a clear understanding of how that is set up.

But it isn't at all clear what the self-modifications that improve the results are actually doing, as there are simply far too many interdependent variables to identify cause and effect for each node's weight changes from the initial to the final state.
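
In code terms, "the training is known, the result is not" looks something like this toy loop (invented numbers, plain least-squares instead of a real transformer): the programmer wrote and fully understands every line, but nothing in it says what any individual weight ends up meaning.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 4)) * 0.1   # initial weights: meaningless noise
    X = rng.standard_normal((100, 4))       # toy training inputs
    Y = rng.standard_normal((100, 8))       # toy training targets

    # The part we fully understand: the update rule the programmer wrote.
    lr = 0.01
    for _ in range(1000):
        pred = X @ W.T
        grad = 2 * (pred - Y).T @ X / len(X)   # gradient of mean squared error
        W -= lr * grad                         # the network "modifies itself"

    # The part we don't: which of these 32 numbers accounts for which behavior,
    # and why this particular final W rather than another that fits about as well.
    print(W)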
