zlacker

[return to "Show HN: Octosphere, a tool to decentralise scientific publishing"]
1. Murska+bF1[view] [source] 2026-02-04 01:30:49
>>crimso+(OP)
I am afraid that gatekeeping is partially essential and somewhat desired. As an academic you don't have time to read everything, and some sort of quick signal, albeit very flawed, can be useful to stop you wasting time reading crappy science. If you don't gatekeep, you will get a lot of crappy papers, or papers that all say the same thing, and that wastes more time for people who want a quick sense of the state of a topic/field from quality work. An open voting system would be easily abused, so it would end up relying on a select set of trusted peer reviewers or agencies, especially when a paper includes a lot of experiments and figures that can be complicated or overwhelming. What do you think?
◧◩
2. fc417f+PT1[view] [source] 2026-02-04 03:26:58
>>Murska+bF1
I'm inclined to agree, and yet the past decade of ML on arxiv moving at a breakneck pace seems to be a counterexample. In that case I observe citation "bubbles" where I can follow one good paper up and down the citation graph to find others.
◧◩◪
3. Murska+b32[view] [source] 2026-02-04 05:00:57
>>fc417f+PT1
I think for smaller software or ML papers on arxiv this might work. For larger biomedical or hard-tech papers, I think it is much less likely. I struggle to keep up with BioRxiv.org as a medical professional; many articles would require 2 hours+ to confidently review as a professional, and I would never trust a "public" review algorithm or network to necessarily do a great job. If you subscribe to weekly updates on your topic area, you might get 100 papers a week, of which 90 are likely poor quality, but who is going to review them? Definitely not me; I cannot judge 100 papers a week. Granted, probably only 1 or 2 are directly relevant to your work, but even then the time sink is annoying. It is nice if a publisher has done an initial quality check and made sure the text is succinct, direct, and backed up by well-presented data/figures/methods. Even if a totally open social network exists for upvoting/describing papers, I am afraid the need for these publishers will still be there and they will just exist regardless, and it will still be preferred by academics.

Three to five experts specifically asked to review a paper in a controlled setting is probably still a better option than thousands of random scientists or members of the public (who might be motivated by financial, malicious, or other reasons). Larger, technically impressive multi-disciplinary papers with 20+ authors are basically impossible to review as individuals; you would want a few experts on the main methods reviewing it together, with oversight from a reputable publisher. Such papers are also increasingly common in any biotech/hard-tech field.

◧◩◪◨
4. fc417f+9Y2[view] [source] 2026-02-04 12:47:09
>>Murska+b32
> many articles would require 2 hours+ to confidently review as a professional

I think ML (and really every other field) is the same. Skimming a paper never really leaves you certain of how rigorous it is.

I agree that a naive "just add voting" review mechanism would not suffice to replace journals. However, there's no requirement that the review algorithm be so naive. Looked at differently, what is a journal except a complicated algorithm for performing reviews?

> I am afraid the need for these publishers will still be there and they will just exist regardless, and it will still be preferred by academics.

Agreed. I doubt publishers are going away any time soon (if ever) regardless of how technically excellent any proposed replacement might be. I still think it's worthwhile to pursue alternatives though.
