zlacker

[parent] [thread] 24 comments
1. Yajiro+(OP)[view] [source] 2023-05-16 12:08:33
Who is to say that brains aren't just regression-based function approximators?
replies(4): >>shaneb+X >>gumbal+t2 >>lm2846+Wf >>pelagi+Gq
2. shaneb+X[view] [source] 2023-05-16 12:13:37
>>Yajiro+(OP)
Humanity isn't stateless.
replies(1): >>chpatr+gq
3. gumbal+t2[view] [source] 2023-05-16 12:21:29
>>Yajiro+(OP)
My laptop emits sound as I do, but it doesn’t mean it can sing or talk. It’s software that does what it was programmed to do, and so does AI. It may mimic the human brain, but that’s about it.
replies(1): >>thesup+pb
4. thesup+pb[view] [source] [discussion] 2023-05-16 13:11:11
>>gumbal+t2
>> It’s software that does what it was programmed to do, and so does AI.

That's a big part of the issue with machine learning models: they are undiscoverable. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works or by extension how to "fix bugs".

If we say it "does what it was programmed to", what was it programmed to do? Here is the data that was used to train it, but how will it respond to a given input? Who knows?

That does not mean they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them with given functions.

replies(2): >>gumbal+Sy >>grumpl+SZ1
5. lm2846+Wf[view] [source] 2023-05-16 13:34:08
>>Yajiro+(OP)
The problem is that you have to bring proof.

Who's to say we're not in a simulation? Who's to say God doesn't exist?

replies(1): >>dmreed+dp
6. dmreed+dp[view] [source] [discussion] 2023-05-16 14:18:24
>>lm2846+Wf
You're right, of course, but that also makes your out-of-hand dismissals based on your own philosophical premises equally invalid.

Until a model of human sentience and awareness is established (note: this is one of the oldest problems out there, alongside the movements of the stars; the debate is ancient, still open-ended, and nothing anyone is saying in these threads is new), philosophy is all we have, and ideas are debated on their merits within that space.

7. chpatr+gq[view] [source] [discussion] 2023-05-16 14:23:24
>>shaneb+X
Neither is text generation, as you continue generating text.
replies(1): >>shaneb+Bt
8. pelagi+Gq[view] [source] 2023-05-16 14:25:18
>>Yajiro+(OP)
A Boltzmann brain just materialized over my house.
replies(1): >>dpflan+fE
9. shaneb+Bt[view] [source] [discussion] 2023-05-16 14:39:31
>>chpatr+gq
"Neither is text generation as you continue generating text."

The LLM is stateless.

replies(1): >>chpatr+zv
10. chpatr+zv[view] [source] [discussion] 2023-05-16 14:48:08
>>shaneb+Bt
On a very fundamental level, the LLM is a function from context to the next token, but when you generate text there is state: the context gets updated with what has been generated so far.
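
To sketch what I mean (toy code; next_token() is a hypothetical stand-in for the whole model):

    # The model itself: a pure function from context to next token.
    def next_token(context):
        return "foo"  # toy stand-in for a real forward pass

    # Generation: the context is the state, updated on every step.
    def generate(prompt, n):
        context = list(prompt)
        for _ in range(n):
            context.append(next_token(context))  # state update
        return context
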
replies(2): >>shaneb+hx >>jazzyj+7z
11. shaneb+hx[view] [source] [discussion] 2023-05-16 14:56:46
>>chpatr+zv
"On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far."

Its output is predicated upon its training data, not user-defined prompts.

replies(2): >>chpatr+By >>alpaca+nD
12. chpatr+By[view] [source] [discussion] 2023-05-16 15:03:23
>>shaneb+hx
If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.
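
In other words, generation is literally a fold over that state (a toy next_token() again standing in for the model):

    from functools import reduce

    def next_token(context):
        return "foo"  # toy stand-in for the model

    # Each step maps old state -> new state; reduce threads it through.
    def step(state, _):
        return state + [next_token(state)]

    story_so_far = reduce(step, range(10), ["once", "upon", "a", "time"])
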
replies(1): >>shaneb+zA
13. gumbal+Sy[view] [source] [discussion] 2023-05-16 15:04:53
>>thesup+pb
AI models are not just input and output data. The mathematics in between are designed to mimic intelligence. There is no magic, no supernatural force, no real intelligence involved. It does what it was designed to do. Many don’t know how computers work, just as some in the past thought cars and engines were the devil. There’s no point in trying to exploit such folks in order to promote a product. We aren’t meant to know exactly what it will output because that’s what it was programmed to do.
replies(1): >>shaneb+OB
14. jazzyj+7z[view] [source] [discussion] 2023-05-16 15:06:01
>>chpatr+zv
The model is not affected by its inputs over time.

It's essentially a function that is called recursively on its result; no need to represent state.
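
Something like this (toy f() in place of the model):

    def f(context):
        return "foo"  # toy stand-in for the model

    # Pure recursion: nothing mutable anywhere; the "state" is just
    # an argument threaded through the calls.
    def generate(context, n):
        if n == 0:
            return context
        return generate(context + [f(context)], n - 1)
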

replies(1): >>chpatr+SD
15. shaneb+zA[view] [source] [discussion] 2023-05-16 15:12:28
>>chpatr+By
'If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.'

You're conflating UX and LLM.

replies(2): >>chpatr+BD >>danena+s11
16. shaneb+OB[view] [source] [discussion] 2023-05-16 15:18:46
>>gumbal+Sy
"We arent meant to know exactly what it will output because that’s what it was programmed to do."

Incorrect: we can't predict its output because we cannot look inside. That's a limitation, not a feature.

17. alpaca+nD[view] [source] [discussion] 2023-05-16 15:25:21
>>shaneb+hx
> Its output is predicated upon its training data, not user-defined prompts.

Prompts very obviously have influence on the output.

replies(1): >>shaneb+sK
18. chpatr+BD[view] [source] [discussion] 2023-05-16 15:25:58
>>shaneb+zA
I never said LLMs are stateful.
19. chpatr+SD[view] [source] [discussion] 2023-05-16 15:27:13
>>jazzyj+7z
Being called recursively on a result is state.
replies(1): >>jazzyj+L61
20. dpflan+fE[view] [source] [discussion] 2023-05-16 15:29:12
>>pelagi+Gq
An entire generation of minds, here and gone in an instant.
21. shaneb+sK[view] [source] [discussion] 2023-05-16 15:52:42
>>alpaca+nD
"Prompts very obviously have influence on the output."

The LLM is also discrete.

22. danena+s11[view] [source] [discussion] 2023-05-16 16:58:44
>>shaneb+zA
You're being pedantic. While the core token generation function is stateless, that function is not, by a long shot, the only component of an LLM AI. Every LLM system being widely used today is stateful. And it's not only 'UX'. State is fundamental to how these models produce coherent output.
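
Every chat frontend is some variation of this (sketch; names made up, not any particular vendor's API):

    class ChatSession:
        """Stateful wrapper around a stateless completion function."""

        def __init__(self, complete):
            self.complete = complete  # stateless: context in, text out
            self.history = []         # state: the conversation so far

        def send(self, user_message):
            self.history.append(("user", user_message))
            reply = self.complete(self.history)  # pure function call
            self.history.append(("assistant", reply))
            return reply
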
replies(1): >>shaneb+1f1
23. jazzyj+L61[view] [source] [discussion] 2023-05-16 17:21:23
>>chpatr+SD
If you say so, but the model itself is not updated by user input; it is the same function every time and hence stateless.
24. shaneb+1f1[view] [source] [discussion] 2023-05-16 18:02:14
>>danena+s11
"State is fundamental to how these models produce coherent output."

Incorrect.

25. grumpl+SZ1[view] [source] [discussion] 2023-05-16 22:03:01
>>thesup+pb
> no one really understands how it works or by extension how to "fix bugs".

I don't think this is accurate. Sure, no human can understand 500 billion individual neurons and what they are doing. But you can certainly look at some and say "these are giving a huge weight to this word especially in this context and that's weighting it towards this output".

You can also look at how things make it through the network, the impact of hyperparameters, how the architecture affects things, etc. They aren't truly black boxes except by virtue of scale. You could use automated processes to find out things about the networks as well.
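
For instance, pulling per-head attention weights out of a small model takes only a few lines (sketch, assuming the Hugging Face transformers API; untested):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_attentions=True)

    inputs = tok("The ball rolled because it was pushed", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # out.attentions: one tensor per layer, shape (batch, heads, seq, seq).
    # For each head in layer 5: which position does "it" attend to most?
    it_pos = inputs["input_ids"][0].tolist().index(tok.encode(" it")[0])
    print(out.attentions[5][0][:, it_pos].argmax(dim=-1))
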
