https://pubpeer.com/, mentioned in the article, is an interesting website that I will check out. However, peer review is supposed to be done by experts, so I'm not sure how this website ensures that :).
I also found https://openreview.net/about interesting as a concept, although it is a bit nerve-wracking to have reviews public.
IMO open review is the way to go. Having reviews public goes a long, long way towards auditing how trustworthy a process is, and it's one of the main sources of trust in Open Source software.
It can be really nerve-wracking having your work aired in public, but you get used to it. I also think it encourages more polite reviews from some people, as their interactions are also on public display.
There are a few famous developers who used to completely brow-beat people and who have publicly started talking about taking feedback on that behavior to heart and moderating it. I don't think that would have happened if they had only been giving feedback in private.
Most early-career people will not be willing to have their names publicized alongside their referee reports, because that would be too much risk for their career advancement. And let's face it, most referees are early-career, just by the sheer number of papers out there to review.
It's not even just early-career people. The problem is that many fields in academia are small enough that researchers end up knowing most of the others. So if your report is going to be public, and the other guy, with whom you had a nice beer last year at the Annual Congress of Researchology, has produced a slightly disappointing paper (not even fraud, just mediocre science), it's going to be psychologically harder to go full steam on him and point out everything the paper lacks, knowing how much work it will take to just redo everything, only for the paper to end up published in a less renowned journal.
Of course, you could say "it's just part of the job; you'll own it next time you see the guy for a beer." But we're all human, and, some exceptions apart, we want to be somewhat nice to people we like...
As you can see, I'm very torn about Open Review. I think it could easily yield a very mild, nicey-nicey style of reviewing among established peers of the field. Maybe the solution would be for reviews to be open, but with a way for referee anonymity to be preserved. Note that the editors already know everyone's identity, and they should be the guarantors of the quality of the process. The problem is, they struggle to get reviewers, so they can't be too picky about the quality of the reviews...
I work in software, not academia. Would you mind giving me a quick overview of what referee reports are and how they apply? Most of the results on Google seem to assume you already know what they are, and the one concise answer I can find makes it sound like they're job references?
Referee reports are reports produced -- for free -- by the peer-review referees, giving their opinions / suggestions / comments on the papers that academic journals ask them to review on the journal's behalf. Editors usually decide whether a paper can be published as is, needs revision (and to what degree), or should be rejected, according to the referee reports. In the case of a revision, the authors will need to address the concerns in the referee reports in detail, often providing additional supporting materials, etc. Referees are typically practicing scientists themselves and perform peer review for the good of the community (at least in theory). Most referees are anonymous, to protect them from backlash for criticizing the papers they review, particularly when some coauthor is a big shot. To be fair, not all referee reports are constructive, but the majority of them at least perform adequate gate-keeping to sift out the obvious bad apples.
Interesting. I'm not sure what the exact path forward would be, but a couple of observations. The first is that I do think the lack of transparency in things like peer review hurts the scientific community overall. Often, when I don't fully understand why a piece of code is the way it is, reading the comments on the original PR can be really illuminating. Being able to see what issues were raised about a paper during review, and how the author handled them, could, I imagine, add a lot of valuable context to the paper itself.
The next thing I'm noticing is something my academic friends complain about a lot: the vengeful, egotistic nature of many high-level academics. I don't really know how to solve this problem when things aren't anonymous, but I have to imagine that an author being able to see why their paper was rejected, and even being able to respond to it, would be good for the author in the long run. In software you learn not to take someone rejecting your pull request personally, but when someone rejects a good PR, everyone else can see that too.
Overall, I think the process helps blunt egos a little, and it also shines a light on those who can't get past their egos, in a way that is healthy for the community. But yeah, I hear you, and I don't have a 100% good solution to egotistic retribution, beyond vague noises about it being a cultural issue that needs to change, and a vague sense (which I can't back up with data) that opening the communication lines to public scrutiny might actually help expose that culture and change it.