zlacker

[return to "Inside The Chaos at OpenAI"]
1. Ration+q4[view] [source] 2023-11-20 02:55:15
>>maxuti+(OP)
Random thought.

Let's suppose that AGI is about to be invented, and it will wind up having a personality similar to humans. The more that those are are doing the inventing are afraid of what they are inventing, the more that they will push it to be afraid of the humans in turn. This does not sound like a good conflict to start with.

By contrast if the humans inventing it go full throttle to convincing it that humans are on its side, there is no such conflict at all.

I don't know how realistic this model is. But it certainly suggests that the e/acc approach is more likely to create AI alignment than EA is.

◧◩
2. ChatGT+H4[view] [source] 2023-11-20 02:57:19
>>Ration+q4
I don’t think it’s a problem unless we workout how to teach them to feel emotions.

I don’t think an LLM is ever going to be capable of feeling fear, boredom etc.

If we did it would probably have many of the handicaps we do.

◧◩◪
3. Davidz+f5[view] [source] 2023-11-20 03:00:17
>>ChatGT+H4
Why can't it feel fear? The model itself doesn't have any built in mechanisms sure but it can simulate an agent capable of fear. In the same way the simulation can have other emotions needed to be a better model of a human
◧◩◪◨
4. Ration+U9[view] [source] 2023-11-20 03:42:26
>>Davidz+f5
As I pointed out in a different comment, ChatGPT and friends are based on predicting the training data. As a result they learn to imitate what is in it.

To the extent that we provide the training data for such models, we should expect it to internalize aspects of our behavior. And what is internalized won't just be what we expected and were planning on.

[go to top]