zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. nickle+491[view] [source] 2024-05-15 14:48:28
>>Jimmc4+(OP)
It is easy to point to loopy theories around superalignment, p(doom), etc. But you don't have to be hopped up on sci-fi to oppose something like GPT-4o. Low-latency response time is fine. The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine. I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.
2. shmatt+1o1[view] [source] 2024-05-15 15:55:31
>>nickle+491
Realistically it's all just probabilistic word generation. People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning.
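
(If you want to see what "probabilistic word generation" means mechanically, here's a toy sketch of one decoding step in Python. The tokens and logit values are made up, but softmax-then-sample is the actual mechanism:)

    import math, random

    # Hypothetical logits a model might assign to candidate next tokens
    logits = {"dog": 2.1, "cat": 1.7, "the": 0.3, "purple": -1.2}

    def sample_next_token(logits, temperature=1.0):
        # Softmax turns logits into a probability distribution;
        # generation is just one weighted guess at a time.
        exps = {t: math.exp(v / temperature) for t, v in logits.items()}
        total = sum(exps.values())
        r, acc = random.random(), 0.0
        for token, e in exps.items():
            acc += e / total
            if acc >= r:
                return token
        return token  # guard against floating-point rounding

    print(sample_next_token(logits))  # e.g. "dog", sometimes "cat", ...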

All these companies are doing now is taking an existing inference engine, making it 3% faster or 3% more accurate per quarter, and fighting over the $20/month users.

One can imagine product is now taking the wheel from engineering and building out ideas on how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the one-, two-, and three-year roadmaps for any of these $20 companies.

To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding rather than guessing, if that isn't actually happening, it's not a breakthrough.

Now, with product leading the way, it's really interesting to see where these engineers head.

3. Diogen+Gq1[view] [source] 2024-05-15 16:05:41
>>shmatt+1o1
> People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning.

"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."

4. woodru+lt1[view] [source] 2024-05-15 16:17:13
>>Diogen+Gq1
To my understanding (ha!), none of these language models have demonstrated the "recursive" ability that's basic to human consciousness and language: they've managed to iteratively refine their internal world model, but that model implodes as the user performs recursive constructions.

This results in the appearance of an arms race between world model refinement and user cleverness, but it's really a fundamental expressive limitation: the user can always recurse, but the model can only predict tokens.

(There are a lot of contexts in which this distinction doesn't matter, but I would argue that it does matter for a meaningful definition of human-like understanding.)
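
(To make "recursive constructions" concrete, one classic example is center-embedded clauses, which a user can nest arbitrarily deep at essentially zero cost. A toy generator, with made-up nouns and verbs, just to show that the user's side of the arms race is unbounded:)

    def center_embed(depth):
        # depth 2 -> "the rat the cat the dog chased bit ran"
        # i.e. the rat [that the cat [that the dog chased] bit] ran
        nouns = ["rat", "cat", "dog", "fox", "owl", "man"]
        verbs = ["bit", "chased", "saw", "feared", "heard"]
        noun_stack = " ".join(f"the {nouns[i]}" for i in range(depth + 1))
        verb_unwind = " ".join(verbs[i] for i in range(depth - 1, -1, -1))
        return " ".join(filter(None, [noun_stack, verb_unwind, "ran"]))

    for d in range(4):
        print(center_embed(d))

Each extra level costs the user one noun and one verb; tracking who did what to whom across those levels is exactly where the model's world model tends to implode.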

5. johnth+Xj2[view] [source] 2024-05-15 20:49:48
>>woodru+lt1
Supposedly that's what Q* was all about: search recursively, backtrack at dead ends. Who knows, really, but the technology is still very new; I personally don't see why a sufficiently good world model couldn't be used in this manner.
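
(Nobody outside OpenAI actually knows what Q* is, but the shape people describe is plain depth-first search with backtracking, where some world model proposes and checks candidate steps. A generic sketch; `propose_steps` and `is_goal` are placeholders for whatever the model would supply:)

    def search(state, propose_steps, is_goal, depth=0, max_depth=8):
        # Try each candidate step the world model proposes; if a branch
        # dead-ends, return None so the caller backtracks to the next one.
        if is_goal(state):
            return [state]
        if depth == max_depth:
            return None  # dead end
        for nxt in propose_steps(state):
            path = search(nxt, propose_steps, is_goal, depth + 1, max_depth)
            if path is not None:
                return [state] + path
        return None  # every branch failed; backtrack

    # Toy stand-in for a world model: states are ints, steps are +1 / *2.
    print(search(1, lambda s: [s + 1, s * 2], lambda s: s == 11))
    # -> [1, 2, 3, 4, 5, 10, 11]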