
[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
2. dwd+zL1 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

3. erhaet+1O1 2023-11-18 07:31:39
>>dwd+zL1
Did we ever think LLMs were a path to AGI...? AGI is friggin hard; I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
4. Rugged+9P1 2023-11-18 07:43:48
>>erhaet+1O1
It's mostly a thing among the young, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external: gains in compute power and data set size allow brute-forcing problems.
5. concor+T42 2023-11-18 10:03:49
>>Rugged+9P1
> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.

6. oska+372 2023-11-18 10:20:45
>>concor+T42
They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.

7. concor+y72 2023-11-18 10:24:11
>>oska+372
They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" in some unusual way?

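To make "core part of a system" concrete, here's a rough sketch of the kind of loop I have in mind. Purely illustrative: the openai client call is the real API (openai >= 1.0), but the model name, prompts, and pytest harness are placeholders I made up for the example.

    # Sketch: an LLM as the core of a fix-the-tests loop.
    # Assumes the openai package (>=1.0) and an OPENAI_API_KEY in the
    # environment; model name, prompts, and harness are illustrative.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def attempt(task: str, max_rounds: int = 5) -> bool:
        messages = [
            {"role": "system", "content":
             "You are a junior developer. Reply with only the full "
             "contents of fix.py, no commentary."},
            {"role": "user", "content": task},
        ]
        for _ in range(max_rounds):
            reply = client.chat.completions.create(model="gpt-4",
                                                   messages=messages)
            code = reply.choices[0].message.content
            with open("fix.py", "w") as f:
                f.write(code)
            # The harness, not the model, supplies ground truth: the
            # test suite decides whether this attempt actually worked.
            tests = subprocess.run(["pytest", "-q"],
                                   capture_output=True, text=True)
            if tests.returncode == 0:
                return True
            messages += [
                {"role": "assistant", "content": code},
                {"role": "user",
                 "content": "Tests failed:\n" + tests.stdout},
            ]
        return False

The model on its own isn't the system; the tests, retries, and feedback wrapped around it are what make it useful.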
8. oska+k82 2023-11-18 10:31:13
>>concor+y72
I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

9. wilder+Ci2 2023-11-18 11:52:44
>>oska+k82
It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.
10. bagofs+zL2 2023-11-18 14:51:08
>>wilder+Ci2
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable, or definable.

And yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.

11. wilder+F43 2023-11-18 16:46:17
>>bagofs+zL2
I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

An agent could mimic empathetic behaviors (and words), but given enough interrogation and testing you would eventually encounter an out-of-training case that it would fail.
