zlacker

[parent] [thread] 4 comments
1. ChatGT+(OP)[view] [source] 2023-11-20 02:57:19
I don’t think it’s a problem unless we work out how to teach them to feel emotions.

I don’t think an LLM is ever going to be capable of feeling fear, boredom etc.

If we did, it would probably have many of the handicaps we do.

replies(1): >>Davidz+y
2. Davidz+y[view] [source] 2023-11-20 03:00:17
>>ChatGT+(OP)
Why can't it feel fear? The model itself doesn't have any built-in mechanisms, sure, but it can simulate an agent capable of fear. In the same way, the simulation can have whatever other emotions are needed to be a better model of a human.
replies(3): >>Ration+d5 >>ChatGT+66 >>qgin+G84
3. Ration+d5[view] [source] [discussion] 2023-11-20 03:42:26
>>Davidz+y
As I pointed out in a different comment, ChatGPT and friends are based on predicting the training data. As a result they learn to imitate what is in it.

To the extent that we provide the training data for such models, we should expect them to internalize aspects of our behavior. And what is internalized won't just be what we expected and were planning on.

4. ChatGT+66[view] [source] [discussion] 2023-11-20 03:54:17
>>Davidz+y
Simulating fear and actually feeling fear, which can be fatal via nervous-system shock, are quite different things.
5. qgin+G84[view] [source] [discussion] 2023-11-21 01:29:54
>>Davidz+y
We have no idea how to give anything a subjective experience of itself. We only know how to make something behave, externally, as if it has one.

One of the worst versions of AGI might be a system that simulates to us that it has an internal life, but in reality has no internal subjective experience of itself.
