zlacker

[parent] [thread] 2 comments
1. cthalu+(OP)[view] [source] 2023-11-20 06:07:03
>Certainly no one is suggesting these systems can become "alive"

No, that very much is the fear. The belief is that by training an AI on everything it takes to build AI, at a certain level of sophistication the AI can rapidly and continually improve itself until it becomes a superintelligence.

replies(1): >>ryanSr+f4
2. ryanSr+f4[view] [source] 2023-11-20 06:33:08
>>cthalu+(OP)
That's not alive in any meaningful sense.

When I say alive, I mean there's something it's like to be that thing. The lights are on. It has subjective experience.

It seems many are defining ASI as just a really fast self-learning computer. Sure, given the wrong kind of access and motive, that could be dangerous. But it isn't any more dangerous than any other faulty software that has access to sensitive systems.

replies(1): >>Feepin+G6
3. Feepin+G6[view] [source] [discussion] 2023-11-20 06:49:27
>>ryanSr+f4
You're thinking about "alive" as "humanlike" as "subjective experience" as "dangerous". Instead, think of agentic behavior as a certain kind of algorithm. You don't need the human cognitive architecture to execute an input/output loop trying to maximize the value of a certain function over states of reality.
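That input/output loop can be sketched in a few lines. This is a toy utility maximizer, not any real system: the state space, the `utility` and `predict` functions, and the action set are all illustrative assumptions.

```python
# Toy agentic loop: no human-like cognition, just a procedure that
# repeatedly picks whichever action leads to the highest-valued
# predicted state of its (toy) world. All names are illustrative.

def utility(state: int) -> float:
    """Value the agent assigns to a world state; it prefers states near 100."""
    return -abs(state - 100)

def predict(state: int, action: int) -> int:
    """Agent's model of how an action changes the world."""
    return state + action

def choose_action(state: int, actions=(-1, 0, 1)) -> int:
    """Greedy step: maximize utility of the predicted next state."""
    return max(actions, key=lambda a: utility(predict(state, a)))

state = 0
for _ in range(150):  # the input/output loop
    state = predict(state, choose_action(state))

print(state)  # settles at the utility maximum, 100
```

Nothing here is "alive" or conscious, yet the loop exhibits goal-directed behavior: given enough capability in `predict` and enough reach in its action set, a maximizer like this pursues its objective regardless of what anyone meant it to do.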

> But it isn't any more dangerous than any other faulty software that has access to sensitive systems.

Seems to me that can be unboundedly dangerous? Like, I don't see you making an argument here that there's a limit on how dangerous that class of failure can be.
