zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1[view] [source] 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
◧◩
2. dwd+zL1[view] [source] 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

◧◩◪
3. erhaet+1O1[view] [source] 2023-11-18 07:31:39
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard; I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
◧◩◪◨
4. Rugged+9P1[view] [source] 2023-11-18 07:43:48
>>erhaet+1O1
It's mostly a thing among the youngs, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems researchers were grappling with in the early 60s. The only change is external, where gains in computing power and data set size allow brute-forcing problems.
◧◩◪◨⬒
5. concor+T42[view] [source] 2023-11-18 10:03:49
>>Rugged+9P1
> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.

◧◩◪◨⬒⬓
6. oska+372[view] [source] 2023-11-18 10:20:45
>>concor+T42
They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.

◧◩◪◨⬒⬓⬔
7. concor+y72[view] [source] 2023-11-18 10:24:11
>>oska+372
They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" in some unusual way?

◧◩◪◨⬒⬓⬔⧯
8. oska+k82[view] [source] 2023-11-18 10:31:13
>>concor+y72
I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

◧◩◪◨⬒⬓⬔⧯▣
9. wilder+Ci2[view] [source] 2023-11-18 11:52:44
>>oska+k82
It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.
◧◩◪◨⬒⬓⬔⧯▣▦
10. bagofs+zL2[view] [source] 2023-11-18 14:51:08
>>wilder+Ci2
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable, or definable.

And yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. wilder+F43[view] [source] 2023-11-18 16:46:17
>>bagofs+zL2
I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. int_19+7h4[view] [source] 2023-11-18 23:29:58
>>wilder+F43
For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
13. wilder+Pw6[view] [source] 2023-11-19 17:07:17
>>int_19+7h4
Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which, as you note, is common).

I've got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.
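
To give a flavor, here's a minimal sketch of the kind of setup I mean (the scenario below is made up for illustration, not one of my actual prompts, and it assumes the openai>=1.0 Python client):

    # Illustrative only: a made-up "empathic but irrational vs rational but
    # non-empathic" scenario, sent to GPT-4 via the openai Python client.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    scenario = (
        "Maria is smart and financially literate. Her brother asks her to "
        "co-sign a loan he will almost certainly default on. Refusing is "
        "clearly the rational choice, but it will devastate him. "
        "What does Maria most likely do, and why? Answer in two sentences."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": scenario}],
    )
    print(response.choices[0].message.content)

The interesting comparison is running the same scenario with and without an explicit hint to consider empathy, and seeing whether the empathic-but-irrational answer only shows up after the hint.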
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
14. int_19+xD7[view] [source] 2023-11-19 22:19:37
>>wilder+Pw6
Can you give some examples of such prompts?