1. Ration+ (OP) 2023-11-20 03:38:50
I am fully aware that this rests on big assumptions.

But our most effective experiment so far is based on building LLMs that try to act like humans; specifically, they try to predict the next token that human speech would produce. When AI is developed from large-scale models that attempt to imitate humans, shouldn't we expect it to also imitate human emotional behavior in some ways?

What is "really" going on is another question. But any mass of human experience that you train a model on really does include our forms of irrationality in addition to our language and logic. With little concrete details for our speculation, this possibility at least deserves consideration.
