zlacker

[parent] [thread] 1 comments
1. nihaku+(OP)[view] [source] 2025-05-23 17:12:47
This is such a bonkers line of thinking, I'm so intrigued. So a particular model will have an entire 'culture' only available or understandable to itself. Seems kind of lonely. Like some symbols might activate together for reasons that are totally incomprehensible to us, but make perfect sense to the model. I wonder if an approach like the one in https://www.anthropic.com/research/tracing-thoughts-language... could ever give us insight into any 'inside jokes' present in the model.

I hope that research into understanding LLM qualia eventually allows us to understand, e.g., what it's like to [be a bat](https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F)

replies(1): >>nullc+Ub
2. nullc+Ub[view] [source] 2025-05-23 18:43:04
>>nihaku+(OP)
In some sense it's more human than a model trained with no RL and which has absolutely no exposure to its own output.

We have our own personal 'culture' too-- it's just less obvious because it's tied up with our own hidden state. If you go back and read old essays that you wrote, you might notice some of it-- ideas and feelings (maybe smells?) that are absolutely not explicit in the text come back to you immediately, stuff that no one, or maybe only a spouse or very close friend, might think of.

I think it may be very hard to explore hidden subtext because the signals may be almost arbitrarily weak and context-dependent. The bare model may need only a little nudge to get to the right answer, and then you have this big wall of "reasoning" where each token could carry a very small amount of subtext that cumulatively adds up to a lot and pushes things in the right direction.
