zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. mistri+Ac[view] [source] 2022-05-23 22:01:07
>>kevema+(OP)
I was reading a relatively recent machine learning paper from some elite source, and after multiple repetitions of bragging and puffery, the charts in the middle of the paper showed that they had beaten the score of a high-ranking algorithm in their specific domain, moving the best consistent result from roughly 86% accuracy to 88%. My takeaways: they got a lot of attention within their world by beating the previous score, no matter how small the improvement was; it was a "winner take all" competition against other teams close to them; accuracy below 90% is of questionable value in a lot of real-world problems; and it took an enormous amount of math and effort for this team to make that small improvement.

What I see is a semi-poverty mindset among very smart people who appear to be treated such that the winners get promoted and everyone else is fired. This sort of ML analysis is useful for massive data sets at scale, where 90% is a lot of accuracy, but not at all for the small, real-world, human-scale problems where each individual result may matter a lot. The years of training these researchers had to go through to participate in this apparently ruthless environment look a lot like a lottery ticket, if you are in fact in a game where everyone but the winner has to find a new line of work. I think their masters live in Redmond, if I recall... not looking it up at the moment.

◧◩
2. london+em1[view] [source] 2022-05-24 09:20:31
>>mistri+Ac
If you worked in a hospital and you managed to increase the survival rate from 86% to 88%, you too would be a hero.

Sure, it's only 2 percentage points, but if it's on a problem where everyone else has been trying to make that improvement for a long time, and the improvement means big economic or social gains, then it's worth it.

◧◩◪
3. themei+ir2[view] [source] 2022-05-24 16:12:31
>>london+em1
I like focusing on the failure rate instead - going from 14% to 12% is a pretty big jump.
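
A quick back-of-the-envelope sketch of the arithmetic behind that framing. Only the 86%/88% accuracy (i.e. 14%/12% failure) figures come from the thread; the snippet itself is just illustrative, not from any poster:

    # Accuracy figures quoted in the thread; everything else is derived.
    old_acc, new_acc = 0.86, 0.88

    old_err = 1 - old_acc   # 14% failure rate
    new_err = 1 - new_acc   # 12% failure rate

    # Absolute gain in accuracy (percentage points) vs. relative cut in errors.
    abs_gain = new_acc - old_acc
    rel_err_reduction = (old_err - new_err) / old_err

    print(f"absolute accuracy gain: {abs_gain:.1%}")        # ~2.0%
    print(f"relative error reduction: {rel_err_reduction:.1%}")  # ~14.3%

Run as-is, it shows why the failure-rate view feels bigger: the same 2-point accuracy gain is roughly a 14% reduction in errors.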