zlacker

1. abel_ (OP) 2022-05-24 10:01:02
On the contrary -- the opposite will happen. There's a decent body of research showing that just training foundation models on their own outputs amplifies their capabilities.
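To make the mechanism concrete, here's a toy sketch of that kind of self-training loop (my own illustration, not any particular paper's setup): the "model" is just a categorical distribution over answers, and each round refits it to the highest-reward samples it produced itself.

    import random
    from collections import Counter

    ANSWERS = ["4", "5", "22"]  # candidate answers to "2 + 2 = ?"

    def reward(answer):
        # Stand-in for whatever filter you use (reward model,
        # majority vote, a verifier, human ratings, ...).
        return 1.0 if answer == "4" else 0.0

    def self_train(dist, rounds=4, n=1000, keep_frac=0.5):
        for r in range(rounds):
            # 1. Sample from the current "model".
            samples = random.choices(list(dist), weights=dist.values(), k=n)
            # 2. Keep the best-rewarded half of the model's own outputs...
            kept = sorted(samples, key=reward, reverse=True)[: int(n * keep_frac)]
            # 3. ...and refit the model to them (here, just counting).
            counts = Counter(kept)
            dist = {a: counts[a] / len(kept) for a in ANSWERS}
            print(f"round {r}: p(correct) = {dist['4']:.2f}")
        return dist

    random.seed(0)
    self_train({a: 1 / 3 for a in ANSWERS})  # starts uniform, converges on "4"

The filtering step is doing all the work: each round pulls the model toward its own best-rewarded behavior, which is where the amplification comes from.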

Less common opinion: this is also how you end up with models that have a concept of themselves, which has high economic value.

Even less common opinion: that's really dangerous.
