I think for smaller software or ML papers on arXiv this might work. For larger papers in biomedical or hard-tech fields, I think it is much less likely. I struggle to keep up with BioRxiv.org as a medical professional; many articles would require 2+ hours to review confidently, and I would never trust a "public" review algorithm or network to necessarily do a great job. If you allow weekly updates on your topic area, you might get 100 papers a week, of which 90 are likely poor quality, but who is going to review these? Definitely not me; I cannot judge 100 papers a week. Granted, probably only 1 or 2 are directly relevant to your work, but even then the time sink is annoying. It is nice when a publisher has done some initial quality check and made sure the written text is succinct, direct, validated, and backed up by well-presented data/figures/methods. Even if a totally open social network for upvoting and discussing papers exists, I am afraid the need for these publishers will remain; they will simply continue to exist regardless, and academics will still prefer them.
Three to five experts specifically asked to review a paper in a controlled environment, versus thousands of random scientists or members of the public (who might be motivated by financial, malicious, or other interests), is probably still the better option. Larger, technically impressive multi-disciplinary papers with 20+ authors are basically impossible to review as an individual; you would want a few experts on the main methods to review it together, in harmony, with oversight from a reputable vendor/publisher. Such papers are also increasingly common in any biotech/hard-tech field.