zlacker

[parent] [thread] 3 comments
1. yallne+(OP)[view] [source] 2023-11-18 14:05:18
if this is the test you're going by, then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel in the top row of your monitor is.
replies(3): >>Jensso+Xo >>mejuto+iH >>rezona+yW
2. Jensso+Xo[view] [source] 2023-11-18 16:31:10
>>yallne+(OP)
An LLM could easily answer that question if it were trained to do it. Nothing in its architecture makes it hard to answer: the attention mechanism could easily look up the previous parts of its answer and refer to the fourth word, but it doesn't do that.

So it is a good example of how the LLM doesn't generalize understanding: it can answer the question in theory but not in practice, since it isn't smart enough. A human can easily answer it even though they have never seen such a question before.
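
A toy sketch of what I mean (plain numpy, standard causal self-attention; nothing here is from any particular model): the causal mask only hides the future, so every position can attend to every earlier token of the answer. Looking back at word four is architecturally trivial.

    import numpy as np

    def causal_attention(x):
        # x: (seq_len, d) embeddings of the tokens generated so far
        seq_len, d = x.shape
        scores = x @ x.T / np.sqrt(d)                  # attention scores
        mask = np.tril(np.ones((seq_len, seq_len)))    # position i may see 0..i
        scores = np.where(mask == 1, scores, -np.inf)  # only the FUTURE is hidden
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ x  # each position mixes information from its own past

    x = np.random.randn(8, 16)  # pretend 8 tokens of the answer exist already
    print(causal_attention(x).shape)  # (8, 16): the last token attended to all 8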

3. mejuto+iH[view] [source] 2023-11-18 18:03:59
>>yallne+(OP)
We all know it is because of the token encodings. But as a test of whether you are talking to a human or a computer, it is a good one.
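
To see the encodings concretely, here is a quick sketch with OpenAI's tiktoken (the sentence is just an arbitrary example): the model operates on subword tokens, not words, so "the fourth word" is not a unit it natively sees.

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
    s = "Unquestionably, tokenization misaligns with words."

    print(s.split())                              # 5 words
    print([enc.decode([t]) for t in enc.encode(s)])
    # more tokens than words, with some words split into pieces:
    # the model sees these pieces, not "words"
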
4. rezona+yW[view] [source] 2023-11-18 19:23:40
>>yallne+(OP)
Oh, I missed that GP said "of your answer" instead of "of my question", as in: "What is the third word of this sentence?"

For prompts like that, I have found no LLM to be very reliable, though GPT-4 has been doing much better at it recently.
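
If you want to try it yourself, here is a rough probe sketch with the openai Python client (the model name is just an example, swap in whatever you have access to), then eyeball whether the model's claim matches the actual fourth word:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # assumption: any chat model you have access to
        messages=[{"role": "user",
                   "content": "What is the fourth word of your answer?"}],
    )
    answer = resp.choices[0].message.content
    words = answer.split()

    print("model said:        ", answer)
    print("actual fourth word:", words[3] if len(words) >= 4 else "(short answer)")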

> you literally do not understand how LLMs work

Hey, how about you take it down a notch? You don't need to spike your blood pressure in your first few days on HN.
