zlacker

[parent] [thread] 5 comments
1. Murska+(OP)[view] [source] 2026-02-04 01:30:49
I am afraid that gatekeeping is partially essential and even desirable: as an academic you don't have time to read everything, and quick signals, albeit very flawed, can be useful to stop wasting time on crappy science. Without gatekeeping you get a flood of crappy or redundant papers saying the same thing, which wastes the time of anyone trying to get a quick sense of the state of a topic/field from quality work. An open voting system would be easily abused, so it would end up relying on trust in a select set of peer reviewers or agencies, especially when a paper includes a lot of experiments and figures that can be complicated or overwhelming. What do you think?
replies(2): >>perfmo+8b >>fc417f+Ee
2. perfmo+8b[view] [source] 2026-02-04 02:55:09
>>Murska+(OP)
This is solved by social trust graph algorithms. These allow intersubjective ranking without a central authority.
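For instance (a toy sketch, not a concrete proposal): run personalized PageRank over a "who vouches for whom" graph, seeded at yourself, so every score is relative to your own trust network rather than to a central authority's. All names here are made up.

    import networkx as nx

    # Toy trust graph: an edge u -> v means "u vouches for v's reviews".
    G = nx.DiGraph()
    G.add_edges_from([
        ("me", "alice"), ("me", "bob"),
        ("alice", "carol"), ("bob", "carol"),
        ("carol", "dave"),
    ])

    # Personalized PageRank seeded at "me": scores decay with trust
    # distance, so the ranking is subjective to my network by design.
    scores = nx.pagerank(G, alpha=0.85, personalization={"me": 1.0})
    for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{node}: {score:.3f}")

Two people with different seeds get different rankings, which is the "intersubjective" part.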
3. fc417f+Ee[view] [source] 2026-02-04 03:26:58
>>Murska+(OP)
I'm inclined to agree, and yet the past decade of ML on arxiv moving at a breakneck pace seems to be a counterexample. In that case I observe citation "bubbles" where I can follow one good paper up and down the citation graph to find others.
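Mechanically that's just a bounded walk over the citation graph; a rough sketch (the two fetch functions are stand-ins for whatever citation index you actually query):

    from collections import deque

    def crawl_bubble(seed, get_references, get_citations, max_papers=50):
        """Breadth-first walk up (references) and down (citations)
        from one trusted seed paper, collecting its local 'bubble'."""
        seen, queue = {seed}, deque([seed])
        while queue and len(seen) < max_papers:
            paper = queue.popleft()
            for other in get_references(paper) + get_citations(paper):
                if other not in seen:
                    seen.add(other)
                    queue.append(other)
        return seen

The cap matters: one good seed plus a hop or two usually stays inside the bubble; go further and you leak into the noise.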
replies(1): >>Murska+0o
4. Murska+0o[view] [source] [discussion] 2026-02-04 05:00:57
>>fc417f+Ee
I think for smaller software or ML papers on arxiv this might work. For larger papers in biomedical or hard-tech fields, I think it is much less likely. As a medical professional I struggle to keep up with BioRxiv.org: many articles would require 2+ hours to review confidently, and I would never trust a "public" review algorithm or network to do a great job of that. If you follow weekly updates in your topic area, you might get 100 papers a week, of which 90 are likely poor quality; who is going to review those? Definitely not me, I cannot judge 100 papers a week. Granted, probably only 1 or 2 are directly relevant to your work, but even then the time sink is annoying. It is nice when a publisher has done an initial quality check and made sure the text is succinct, direct, and validated and backed by well-presented data/figures/methods. Even if a totally open social network for upvoting and describing papers existed, I am afraid the need for these publishers would still be there; they will persist regardless, and academics will still prefer them.

Three to five experts specifically asked to review a paper in a controlled setting is probably still a better option than thousands of random scientists or members of the public (who might be motivated by financial, malicious or other interests). Larger, technically impressive multi-disciplinary papers with 20+ authors are basically impossible for any individual to review; you want a few experts on the main methods reviewing them together, with oversight from a reputable publisher. Such papers are also increasingly common in biotech and hard-tech fields.

replies(1): >>fc417f+Yi1
5. fc417f+Yi1[view] [source] [discussion] 2026-02-04 12:47:09
>>Murska+0o
> many articles would require 2+ hours to review confidently

I think ML (and really every other field) is the same. Skimming a paper never really leaves you certain of how rigorous it is.

I agree that a naive "just add voting" review mechanism would not suffice to replace journals. However, there's no requirement that the review algorithm be so naive. Looked at differently, what is a journal except a complicated algorithm for performing reviews?
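To make that concrete, here's the usual journal pipeline written out as very schematic, made-up pseudocode; every step is a design choice in the "review algorithm":

    def journal_review(paper, editor, reviewer_pool):
        # Desk triage: scope and basic quality.
        if not editor.desk_check(paper):
            return "desk reject"
        # A handful of hand-picked, trusted reviewers.
        reviewers = editor.pick(reviewer_pool, n=3)
        reports = [r.review(paper) for r in reviewers]
        # Editor weighs the reports and decides.
        return editor.decide(reports)

Swap out how pick and decide work and you have a different journal; make them open and weighted by a trust graph and you have the kind of alternative being discussed.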

> I am afraid the need for these publishers would still be there; they will persist regardless, and academics will still prefer them.

Agreed. I doubt publishers are going away any time soon (if ever) regardless of how technically excellent any proposed replacement might be. I still think it's worthwhile to pursue alternatives though.

replies(1): >>Murska+wm7
6. Murska+wm7[view] [source] [discussion] 2026-02-06 02:57:40
>>fc417f+Yi1
I agree with your points overall. Regarding "what is a journal except a complicated algorithm for performing reviews?": I think one point is that there is a hard-to-quantify (and opaque) social contract between journal editors and specialized reviewers, who are partially hand-selected over many years. Editors do rely on verifiable experts in the field with an established reputation, both publicly and privately. Reviewers can have direct interactions with the editors, which comes with its own opaque trust verification. Editors also attend scientific meetings and have undocumented or unofficial interactions with scientists (and their favorite reviewers). Now, some reviewers might ask their students to review a paper and sign off after a quick skim, but not all do, and especially when a paper carries more weight, they do tend to take it seriously personally.

Another issue when moving to a decentralized tool: I think it should still apply some gatekeeping, only allowing academics or verified scientists to contribute reviews. But then you also need a way to prevent bias from friendship and self-citation networks among the reviewers, which means you would need to keep good track of those relationships? Not sure how to handle that.
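For the conflict part, even a crude automated check helps; an illustrative sketch (the names and the coauthor index are made up, and you'd build the index from something like ORCID or OpenAlex records):

    def has_conflict(reviewer, authors, coauthor_index):
        """Flag a reviewer who has recently co-authored with any author.
        coauthor_index: name -> set of recent co-authors."""
        recent = coauthor_index.get(reviewer, set())
        return any(author in recent for author in authors)

Collusion rings that avoid co-authorship are much harder to catch; that probably still needs the kind of human, reputational tracking you describe.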
