zlacker

[parent] [thread] 3 comments
1. idle_z+(OP)[view] [source] 2023-01-14 07:34:02
It's a pretty funny assertion. The whole point of ML models is to take training data and learn something general from it, the common threads, so that they can identify or generate more things like the training examples. If the model were, as they assert, merely compressing and reproducing/collaging training images, that would just mean the model's engineers failed to prevent overfitting. So basically they're calling StabilityAI's engineers bad at their job.
replies(1): >>realus+R5
2. realus+R5[view] [source] 2023-01-14 08:39:59
>>idle_z+(OP)
As a side discussion, is there any research model that tries to do what they describe? That is, overfitting as hard as possible so that the model itself becomes a compressed representation of the data. It might be useful in other ways.
replies(1): >>visarg+67
3. visarg+67[view] [source] [discussion] 2023-01-14 08:53:59
>>realus+R5
Yes, look at NeRF (Neural Radiance Fields) and SIREN (Implicit Neural Representations with Periodic Activation Functions).
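The core trick behind SIREN is easy to sketch: deliberately overfit a tiny sine-activated MLP to a single signal, so the network weights themselves become the (lossy) compressed representation. Here's a minimal numpy sketch of that idea; the hyperparameters (omega0, width, learning rate) and the bias initialization are illustrative assumptions on my part, not values from the paper:

```python
import numpy as np

# Sketch of the SIREN idea: memorize one signal with a one-hidden-layer
# sine MLP. The weights (193 floats here) stand in for the 256 samples.
rng = np.random.default_rng(0)
N, H, omega0, lr, steps = 256, 64, 10.0, 0.005, 2000

x = np.linspace(-1.0, 1.0, N).reshape(-1, 1)   # input coordinates
t = np.sin(5.0 * x) + 0.5 * np.cos(9.0 * x)    # the "data" to memorize

# SIREN-style init: first layer uniform in (-1, 1), output layer
# scaled down by 1/omega0. Random biases are my assumption, for phase diversity.
w1 = rng.uniform(-1.0, 1.0, (1, H))
b1 = rng.uniform(-1.0, 1.0, H)
w2 = rng.uniform(-1.0, 1.0, (H, 1)) * np.sqrt(6.0 / H) / omega0
b2 = 0.0

losses = []
for _ in range(steps):
    z = omega0 * (x @ w1 + b1)   # pre-activation, scaled by omega0
    h = np.sin(z)                # the periodic activation
    y = h @ w2 + b2
    err = y - t
    losses.append(float(np.mean(err ** 2)))

    # manual backprop through the two layers
    dy = 2.0 * err / N
    gw2, gb2 = h.T @ dy, dy.sum()
    dz = (dy @ w2.T) * np.cos(z) * omega0
    gw1, gb1 = x.T @ dz, dz.sum(axis=0)

    w2 -= lr * gw2; b2 -= lr * gb2
    w1 -= lr * gw1; b1 -= lr * gb1
```

NeRF plays the same game in 3D: the MLP is overfit to one scene, mapping position and view direction to color and density, and the weights are effectively a compressed encoding of that scene.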
replies(1): >>realus+sC
4. realus+sC[view] [source] [discussion] 2023-01-14 14:22:20
>>visarg+67
The papers I'm finding on those look truly amazing! Thanks a lot for the insights.