zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. nickle+491 2024-05-15 14:48:28
>>Jimmc4+(OP)
It is easy to point to loopy theories around superalignment, p(doom), etc. But you don't have to be hopped up on sci-fi to oppose something like GPT-4o. Low-latency response time is fine. The faking of emotions and the overt references to the film Her (along with the suspiciously timed relaxation of the rules around pornographic generations) are not fine. I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.
◧◩
2. shmatt+1o1 2024-05-15 15:55:31
>>nickle+491
Realistically it's all just probabilistic word generation. People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is just guessing the next token, but that's a little too deep for this morning.
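
For concreteness, here's roughly what "guessing the next token" means mechanically. This is a toy sketch with a made-up vocabulary and made-up logit values, not any real model: the "model" here is just a lookup table, whereas an actual LLM computes the logits with a neural network over the whole preceding context.

    import math, random

    # Toy next-token generation: the "model" is a table of logits over a tiny
    # vocabulary, keyed by the previous word. A real LLM computes these logits
    # with a neural network conditioned on the entire preceding context.
    VOCAB = ["the", "cat", "sat", "on", "mat", "."]
    LOGITS = {  # made-up numbers, purely for illustration
        "the": [0.1, 2.0, 0.2, 0.1, 1.5, 0.1],
        "cat": [0.1, 0.1, 2.5, 0.3, 0.1, 0.2],
        "sat": [0.2, 0.1, 0.1, 2.5, 0.1, 0.3],
        "on":  [2.0, 0.2, 0.1, 0.1, 1.0, 0.1],
        "mat": [0.1, 0.1, 0.1, 0.1, 0.1, 2.5],
    }

    def next_token(prev: str) -> str:
        """Sample the next token from a softmax over the logits for `prev`."""
        exps = [math.exp(x) for x in LOGITS[prev]]
        probs = [e / sum(exps) for e in exps]  # softmax
        return random.choices(VOCAB, weights=probs, k=1)[0]

    tokens = ["the"]
    while tokens[-1] != "." and len(tokens) < 10:  # stop at "." or 10 tokens
        tokens.append(next_token(tokens[-1]))
    print(" ".join(tokens))

A real model differs only in how the logits are computed (a transformer over the full context) and how they are sampled (temperature, top-p, etc.); the generation loop itself is shaped like that last one.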

All these companies are doing now is taking an existing inference engine, making it 3% faster and 3% more accurate each quarter, and fighting over the $20/month users.

One can imagine product is now taking the wheel from engineering and building out ideas for how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the 1-, 2-, and 3-year roadmaps for any of these $20/month companies.

To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding and not guessing, if it isn't actually happening, it's not a breakthrough.

Now, with product leading the way, it's really interesting to see where these engineers head.

◧◩◪
3. Diogen+Gq1 2024-05-15 16:05:41
>>shmatt+1o1
> People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is just guessing the next token, but that's a little too deep for this morning.

"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."

◧◩◪◨
4. carom+os1 2024-05-15 16:12:35
>>Diogen+Gq1
I disagree. I use ChatGPT daily as a replacement for Google. It doesn't understand or have logic; it can spit out information very well, though. It has a broad knowledge base, but there is no entity there to have an understanding of the topic.

This becomes pretty clear when you get to more complex algorithms or low-level details like drawing a stack frame. There is no logic there.

◧◩◪◨⬒
5. Diogen+zz1 2024-05-15 16:46:33
>>carom+os1
> It doesn't understand or have logic

I can ask ChatGPT questions that require logic to answer, and it will do just fine in most cases. It has certain limitations, but to say it isn't able to apply logic is just completely contrary to my experience with ChatGPT.

◧◩◪◨⬒⬓
6. jobs_t+3G1 2024-05-15 17:16:08
>>Diogen+zz1
Give us an example, please.
◧◩◪◨⬒⬓⬔
7. Diogen+8m2 2024-05-15 21:05:12
>>jobs_t+3G1
I deliberately asked ChatGPT a logical question with a false premise: "If all snakes have legs, and a python is a snake, does a python have legs?"

ChatGPT answers:

> Yes, if we assume the statement "all snakes have legs" to be true and accept that a python is a type of snake, then logically, a python would have legs. This conclusion follows from the structure of a logical syllogism:

> 1. All snakes have legs.

> 2. A python is a snake.

> 3. Therefore, a python has legs.

> However, it’s important to note that in reality, snakes, including pythons, do not have legs. This logical exercise is based on the hypothetical premise that all snakes have legs.

ChatGPT clearly understands the logic of the question, answers correctly, and then tells me that the premise of my question is incorrect.

You can say, "But it doesn't really understand logic. It's just predicting the most likely token." Well, it responds exactly how someone who understands logic would respond. If you assert that that's not the same as applying logic, then I think you're essentially making a religious statement.
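
For reference, the inference pattern here is just universal instantiation plus modus ponens. A minimal Lean sketch (my own formalization, not something from ChatGPT's output), with the false premise taken as an explicit hypothesis:

    -- The thread's syllogism, formalized. The premise "all snakes have legs"
    -- is assumed as a hypothesis, not asserted as true.
    example {α : Type} (Snake HasLegs : α → Prop)
        (h1 : ∀ x, Snake x → HasLegs x)    -- premise 1: all snakes have legs
        (python : α) (h2 : Snake python)   -- premise 2: a python is a snake
        : HasLegs python :=                -- conclusion: a python has legs
      h1 python h2

The argument is valid (the conclusion follows from the premises) but not sound (premise 1 is false), which is exactly the distinction ChatGPT drew in its answer.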

◧◩◪◨⬒⬓⬔⧯
8. root_a+pL2 2024-05-16 00:18:13
>>Diogen+8m2
> Well, it responds exactly how someone who understands logic would respond.

An animation looks exactly like something in motion, but it isn't actually moving.

◧◩◪◨⬒⬓⬔⧯▣
9. Diogen+Y63 2024-05-16 04:40:10
>>root_a+pL2
What's the difference between responding logically and giving answers that are identical to how one would answer if one were to apply logic?
◧◩◪◨⬒⬓⬔⧯▣▦
10. carom+tD7 2024-05-17 18:08:04
>>Diogen+Y63
The logic does not generalize to things outside of the training set. It cannot reason about code very well, but it can write you functions from memorized docs.