zlacker

1. Ration+ (OP) 2023-11-20 03:42:26
As I pointed out in a different comment, ChatGPT and friends are trained to predict their training data, one token at a time. As a result, they learn to imitate whatever is in it.
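
To make that concrete, here is a minimal sketch of the next-token-prediction objective such models are trained on (PyTorch-style; `model`, `tokens`, and `training_step` are illustrative names, not any particular codebase's API):

    import torch
    import torch.nn.functional as F

    def training_step(model, tokens):
        # tokens: a batch of training text as integer IDs, shape (batch, seq_len)
        inputs = tokens[:, :-1]    # each position sees only earlier tokens
        targets = tokens[:, 1:]    # the "label" is just the next token in the data
        logits = model(inputs)     # (batch, seq_len - 1, vocab_size)
        # Cross-entropy pushes the model's distribution toward whatever
        # actually came next in the training text -- i.e., toward imitation.
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )
        return loss

The only supervision signal is "match the data", so the model absorbs whatever regularities the data contains, intended or not.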

To the extent that we provide the training data for such models, we should expect them to internalize aspects of our behavior. And what they internalize won't just be what we expected or planned for.
