zlacker

[return to "A non-anthropomorphized view of LLMs"]
1. chaps+g4 2025-07-06 23:04:41
>>zdw+(OP)
I highly recommend playing with embeddings in order to get a stronger intuitive sense of this. It really starts to click that they're representations in a high-dimensional space when you can actually see their positions within that space.
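For example, here's a minimal sketch of that kind of play, assuming the sentence-transformers and scikit-learn packages (the model name below is just an illustrative choice): embed a handful of sentences, project the vectors down to 2D, and look at where related sentences land relative to unrelated ones.

    # Embed a few sentences and project them to 2D so their relative
    # positions in the embedding space become visible.
    from sentence_transformers import SentenceTransformer
    from sklearn.decomposition import PCA

    sentences = [
        "The cat sat on the mat.",
        "A kitten rested on the rug.",
        "Interest rates rose sharply this quarter.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model
    embeddings = model.encode(sentences)             # shape: (3, 384)

    # PCA collapses the 384-dimensional vectors to 2 coordinates for inspection.
    coords = PCA(n_components=2).fit_transform(embeddings)
    for sentence, (x, y) in zip(sentences, coords):
        print(f"({x:+.2f}, {y:+.2f})  {sentence}")

The two cat sentences should land near each other and well away from the finance one, which is exactly the kind of spatial intuition I mean.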

2. perchi+y6 2025-07-06 23:26:34
>>chaps+g4
> of this

You mean that LLMs are more than just the matmuls they're made of, or that that's exactly what they are, and how great that is?

3. chaps+87 2025-07-06 23:31:02
>>perchi+y6
Not making a qualitative assessment of any of it. Just pointing out that there are ways to build separate sets of intuition outside of using the "usual" presentation layer. It's very possible to take a red-team approach to these systems, friend.

4. perchi+Pb2 2025-07-07 18:09:26
>>chaps+87
Yes, and what I was trying to do was learn a bit more about that alternative intuition of yours, because it doesn't sound all that different from what's described in the OP, or from what anyone could trivially glean from a 101 AI course at university or similar.

5. chaps+jX2 2025-07-08 01:06:13
>>perchi+Pb2
So what? :)

6. perchi+Nj3 2025-07-08 05:47:51
>>chaps+jX2
Nothing? """:)"""

It was just confusing because your phrasing implied otherwise.

https://en.wikipedia.org/wiki/Cooperative_principle

7. chaps+GN9 2025-07-10 15:51:27
>>perchi+Nj3
My real point was to emphasize that play is important because it expands our world of interfaces, whatever those interfaces may be and however far they go.

Your antagonistic attitude is neither productive nor playful. Lighten up, friend.
