Are flaws in peer review someone else’s problem?

On April 8, 2013


That stack of fellowship applications piled up on the coffee table isn’t going to review itself. You’ve got twenty-five to read before the rapidly approaching deadline, and you knew before you accepted the reviewing job that many of the proposals would fall outside your area of expertise. Sigh. Time to grab a coffee and get on with it.

As a professor of physics with some thirty-five years’ experience in condensed matter research, you’re fairly confident that you can make insightful and perceptive comments on that application about manipulating electron spin in nanostructures (from that talented postdoc you met at a conference last year). But what about the proposal on membrane proteins? Or, worse, the treatment of arcane aspects of string theory by the mathematician claiming a radical new approach to supersymmetry? Can you really comment on those applications with any type of authority?

Of course, thanks to Thomson Reuters there’s no need for you to be too concerned about your lack of expertise in those fields. You log on to Web of Knowledge and check the publication records. Hmmm. The membrane protein work has made quite an impact – the applicant’s Science paper from a couple of years back has already picked up a few hundred citations and her h-index is rising rapidly. She looks to be a real ‘star’ in her community. The string theorist is also blazing a trail.

Shame about the guy doing the electron spin stuff. You’d been very excited about that work when you attended his excellent talk at the conference in the U.S. but it’s picked up hardly any citations at all. Can you really rank it alongside the membrane protein proposal? After all, how could you justify that decision on any sort of objective basis to the other members of the interdisciplinary panel…?

Bibliometrics are the bane of academics’ lives. We regularly moan about the rate at which metrics such as the journal impact factor and the notorious h-index are tightening their stranglehold on the assessment of research. And yet, as the hypothetical example above shows, we can be our own worst enemies, reaching for citation statistics to assess work outside – or even firmly inside – our ‘comfort zone’ of expertise.
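
For anyone who hasn’t met it, the h-index is simply the largest number h for which a researcher has h papers that have each been cited at least h times. Here is a minimal sketch of the calculation (in Python; the function name and the citation counts are invented purely for illustration):

```python
# Minimal sketch of the h-index: the largest h such that h papers
# have each received at least h citations. The numbers below are made up.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical publication record: one highly cited paper and a long tail.
print(h_index([310, 45, 12, 9, 7, 3, 1, 0]))  # 5 -- five papers with at least 5 citations each
```

One number to sum up a career, and no need to read a single paper.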

David Colquhoun, a world-leading pharmacologist at University College London and a blogger of quite some repute, has repeatedly pointed out the dangers of lazily relying on citation analyses to assess research and researchers. One article in particular, How to get good science, is a searingly honest account of the correlation (or lack thereof) between citations and the relative importance of a number of his, and others’, papers. It should be required reading for all those involved in research assessment at universities, research councils, funding bodies, and government departments – particularly those who are of the opinion that bibliometrics represent an appropriate method of ranking the ‘outputs’ of scientists.

Colquhoun, in refreshingly ‘robust’ language, puts it as follows:

“All this shows what is obvious to everyone but bone-headed bean counters. The only way to assess the merit of a paper is to ask a selection of experts in the field.

“Nothing else works.

“Nothing.”

An ongoing controversy in my area of research, nanoscience, has thrown Colquhoun’s statement into sharp relief. The controversial work in question represents a particularly compelling example of the fallacy of citation statistics as a measure of research quality. It has also provided worrying insights into scientific publishing, and has severely damaged my confidence in the peer review system.

The minutiae of the case are covered in great detail at Raphael Levy’s blog, so I won’t rehash the arguments here. In a nutshell, the problem is as follows. The authors of a series of papers in the highest-profile journals in science – including Science and the Nature Publishing Group family – have claimed that stripes form on the surfaces of nanoparticles due to phase separation of different ligand types. The only direct evidence for the formation of those stripes comes from scanning probe microscopy (SPM) data. (SPM forms the bedrock of our research in the Nanoscience group at the University of Nottingham, hence my keen interest in this particular story.)

But those SPM data display features that appear remarkably similar to well-known instrumental artifacts, and the associated data analyses appear, at best, less than rigorous. In my experience the work would be poorly graded even as an undergraduate project report, yet it has been published in what are generally considered to be the most important journals in science. (And let’s be clear – those journals have an impressive track record of publishing exciting and pioneering breakthroughs.)

So what? Isn’t this just a storm in a teacup about some arcane aspect of nanoscience? Why should we care? Won’t the problem be rooted out when others fail to reproduce the work? After all, isn’t science self-correcting in the end?

Good points. Bear with me – I’ll consider those questions in a second. Take a moment, however, to return to the academic sitting at home with that pile of proposals to review. Let’s say that she had a fellowship application related to the striped nanoparticle work to rank amongst the others. A cursory glance at the citation statistics at Web of Knowledge would indicate that this work has had a major impact over a very short period. Ipso facto, it must be of high quality.

And yet, if an expert – or, in this particular case, even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors. What was it that Colquhoun said again? “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.”

In principle, science is indeed self-correcting. But if there are flaws in published work, who fixes them? Perhaps the most troublesome aspect of the striped nanoparticle controversy was highlighted by a comment left by Mathias Brust, a pioneer in the field of nanoparticle research, under an article in Times Higher Education:

“I have [talked to senior experts about this controversy] … and let me tell you what they have told me. About 80% of senior gold nanoparticle scientists don’t give much of a damn about the stripes and find it unwise that Levy engages in such a potentially career damaging dispute. About 10% think that … fellow scientists should be friendlier to each other. After all, you never know [who] referees your next paper. About 5% welcome this dispute, needless to say predominantly those who feel critical about the stripes. This now includes me. I was initially with the first 80% and did advise Raphael accordingly.”

[Disclaimer: I know Mathias Brust very well and have collaborated, and co-authored papers, with him in the past].

I am well aware that the plural of anecdote is not data, but Brust’s comment resonates strongly with me. I have heard very similar arguments at times from colleagues in physics. The most troubling of all is the idea that critiquing published work is somehow at best unseemly and at worst career-damaging. Has science really come to this?

Douglas Adams, in an inspired passage in Life, The Universe, and Everything, takes the psychological concept known as “someone else’s problem (SEP)” and uses it as the basis of an invisibility ‘cloak’ in the form of an SEP-field. (Thanks to Dave Fernig, a fellow fan of Douglas Adams, for reminding me about the Someone Else’s Problem field.) As Adams puts it, instead of attempting the mind-bogglingly complex task of actually making something invisible, an SEP is much easier to implement. “An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

The 80% of researchers to whom Brust refers are apparently of the opinion that flaws in the literature are someone else’s problem. We have enough to be getting on with in terms of our own original research, without repeating measurements that have already been published in the highest-quality journals, right?

Wrong. This is not someone else’s problem. This is our problem and we need to address it.

Image: Paper pile. Credit: Flickr/Sebastien Wiertz

About Philip Moriarty

Philip Moriarty is a Professor of Physics at the University of Nottingham, where his research focuses on nanoscale science. He is a member of the Science Board of the Institute of Physics and coordinates the multi-partner ACRITAS European network. He has participated in a number of research council-funded public engagement projects, including Giants of the Infinitesimal, and was a member of the Programme Committee for the controversial “Circling the Square: Research, Politics, Media, and Impact” conference held in Nottingham in May 2014. He is also a regular contributor to the Sixty Symbols video series.

30 Responses to Are flaws in peer review someone else’s problem?

  1. Interesting, and I agree that it is all our responsibility that the literature in our field is (mostly) correct.

    There may be some good news here. I am not sure if you are aware of this, as it is a different field, but about two years ago a team of scientists claimed in a Science paper that a type of bacterium could use arsenic to replace phosphorus in its DNA. This result was effectively taken apart in blogs within days of publication – maybe social media such as blogs will allow a consensus to be reached rapidly on results that look wrong, at least when they are published in prominent journals.

    For discussion of this see, e.g.,
    http://www.scientificamerican.com/article.cfm?id=study-fails-to-confirm-existence
    or
    http://blogs.discovermagazine.com/loom/2012/10/03/weirdly-unweird-a-better-end-to-the-arseniclife-affair/#.UWL-PRNwbng

    • Hi, Richard.

      Great comment. I was indeed aware of the importance of social media in “dissecting” that particular piece of science. I have recently written an article for the Times Higher on this (it’ll appear in next week’s issue) and my next blog post for physicsfocus will, err, focus on just how we might embed the benefits of blogging and, more broadly, Web 2.0 (or is it now Web 3.0?!) in scientific publishing.

      I feel just a little hypocritical suggesting that we should find a way to merge social media “commentary” with the primary scientific literature given that I don’t even have a Twitter account. As I’m Irish, however, I’m genetically/culturally unable to communicate in <= 140 character aphorisms…

      Philip

  2. Interesting…
    But one big problem is that we need metrics because we don’t have time to assess the validity and importance of every paper in our field, and often not enough knowledge to assess the relevance of papers well beyond our immediate expertise that may nonetheless be valuable to read. The name of the journal, the h-index, and so on are in general helpful metrics. They are far from perfect, but there is no perfect metric. Better to have imperfect metrics than be inefficient at selecting what is worth spending time with.

    “And yet, if an expert – or, in this particular case, even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors. What was it that Colquhoun said again? “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.””

    But haven’t experts reviewed those SPM papers? If it is so easy for novices to detect the flaws, how do you explain that these have slipped past expert reviewers? Maybe that’s why it is a controversy: because it is far from clear that the critiques are valid?

    • Hello Pep.

      Good to see you make a comment here.

      The question, of course, is the extent to which those experts were appropriately chosen by the editors of the journals. One thing I argue for in the Times Higher article – proofs of which I’ve sent to Vincent Dusastre, so you could ask him for a copy if you’re interested – is that all referees’ reports (appropriately ‘anonymized’/identities ‘redacted’) should be posted online with each paper. This would enable researchers to see to what extent the editors weighted different criticisms of the paper.

      I assume that in the interests of making science as open as possible, this is something you would support?

      As regards the validity of the critiques in the “striped nanoparticle” controversy, we have discussed this at length at Raphael’s blog. As I said there, I find it remarkable that a Nature Materials editor would argue that reproducibility of features in scanning probe microscope images is not a prerequisite for publication in a NPG journal. If this really is the case, and your comments at Raphael’s blog would strongly suggest so, then I would argue that we need to open up scientific publications to rapid feedback via online debate as quickly as possible!

      Moreover, and as you may well be aware, despite NPG guidelines stating that authors should provide raw data in a timely fashion “without undue qualification”, I have yet to receive the raw data from the scientist in question (despite repeated requests over the course of the last few months). If those critiques really are invalid, as you suggest, then why doesn’t the scientist provide the raw data and address his critics once and for all?

      • Hi Philip,

        Yes, generally speaking it is possible that in particular cases a reviewer does not have sufficient expertise to judge the technical details, or that even if the reviewer is a proper expert, the reviewing job is not carefully done. And as we all know, it has happened and will happen that on rare occasions technical problems slip past the eyes of the best experts. Peer review is, obviously, not perfect, and there are correction mechanisms in place when needed.

        But the words in your post above imply that all the reviewers in all those published papers you refer to have either not been proper experts and/or have not done a proper job. Clearly, the chances that this has happened are really fairly low. You may argue that even seasoned scientists may swallow without much scrutiny the claims of published papers, and therefore that reviewers may have taken previously published evidence for granted. But we scientists are a critical bunch. So, if as you say

        “even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors”,

        it is extremely hard to accept that some tens of reviewers may not have raised such concerns.

        As for your questions about transparency, this recent editorial of Nature Materials is relevant.

        As for your statement, “I find it remarkable that a Nature Materials editor would argue that reproducibility of features in scanning probe microscope images is not a prerequisite for publication in a NPG journal”, this must be an erroneous interpretation of this comment of mine. Clearly, no one with a sane mind would agree with such a ludicrous statement.

        Pep

        (Disclaimer: I work as an editor at Nature Materials, yet all these comments are strictly in personal capacity.)

        • Thanks for that comment, Pep. I haven’t time at the moment to give it the considered response it deserves – I’ll get back to you asap.

          For now, I notice that you didn’t address my point about posting reviewers’ reports online with the paper. Is this something you would agree with? If not, why not?

          Philip

          • Personally, I am all for transparency. I think that the benefits of publishing referee reports outweigh potential inconveniences. I also support both post-publication and open peer-review processes.

            However, I also recognize practical issues (which vary depending on the type of journal and field). For a start, for a journal to publish reviewer reports authors and reviewers would have to agree to that before they submit manuscripts and review them. See here and here for more information.

        • Pep.

          Thanks again for your comment. I don’t think it’s really appropriate to use physicsfocus as a forum to discuss the minutiae of the “striped nanoparticle” debate – Raphael’s blog is a much better forum for that debate. You said a while ago that you would prefer not to post comments there, but could I ask you to reconsider?

          That said, and as noted in the blog post, the nanoparticle controversy highlights a number of broader issues with the current peer review system. Let me deal with a couple of the points you raise:

          Peer review is, obviously, not perfect, and there are correction mechanisms in place when needed.

          Compare and contrast with Paul Brookes’ statement, quoted in an editorial in Lab Times last year:

          You can have all the heavy hitters on your side, but if you challenge something in [an NPG] journal, you will have a fight to even get in the door, followed by a pitched battle to get something published, with every possible curve-ball thrown at you during the review and revision process. NPG does not like it when you find mistakes that should have been found in peer review.

          Let’s just say that my experience of dealing with Nature Materials on the matter of adherence to the NPG ‘publication ethics’ guidelines is not entirely at odds with Brookes’ statement.

          You also say …it is extremely hard to accept that some tens of reviewers may not have raised such concerns.

          I agree entirely, and this is why I state in the blog post that my confidence in peer review has been rather shaken by the striped nanoparticle controversy. But isn’t this a very good reason to make sure that the referees’ reports (anonymized) are published online with the paper? I was delighted to read the Nature Materials editorial which supported this. Publication of the reports will provide a very interesting insight into whether those concerns were raised and the extent to which the editor(s) weighed up the reports of reviewers with different expertise (which, in this case, may not necessarily have been in scanning probe microscopy).

          “I find it remarkable that a Nature Materials editor would argue that reproducibility of features in scanning probe microscope images is not a prerequisite for publication in a NPG journal”, this must be an erroneous interpretation of this comment of mine. Clearly, no one with a sane mind would agree with such a ludicrous statement.

          Now this really is something that is best discussed at Raphael’s blog rather than here and I would much prefer to discuss the issue there. But I am afraid that, whether you meant it or not, your statements on SPM imaging at Raphael’s blog unambiguously support my interpretation. You explicitly argued that the inability to acquire consecutive SPM images showing the same features did not undermine Yu and Stellacci’s interpretation of their data (which, in the case of the Small paper, are definitively artefactual – I’ve got the raw data in that particular case!).

          Let me remind you of what you said: “noise is predominant and it is totally normal that in consecutive scans of the same sample the signal does not end up on the same nanoparticle.”

          I’m a scanning probe microscopist. I’ve acquired thousands of SPM images over the course of my career. SPM is a difficult technique and is highly prone to artifacts due to the tip. It is certainly not normal to publish STM images where features are not reproducible from scan to scan.

          Out of interest, have you used a scanning probe microscope yourself at any point in your career?

          Best wishes,

          Philip

          • Philip,

            Indeed, the signal may not end up on the same nanoparticle in consecutive scans of a sample because of prevalent noise, yet of course the features have to be reproducible after proper statistics and distinction between noise and signal are carried out. These are obvious things that I am sure you know well.

            I am not going to post on Raphaël’s blog because it is an unfair forum, where some arguments are bent and others are ignored to fit a story that is only convincing to the gullible (please excuse my forthrightness). Yet this is no news to you.

            I have not carried out SPM measurements with my hands, but I have observed SPM measurements done by others, and have participated in multiple relevant technical discussions.

            Out of interest, have you investigated or studied the principles behind the organization-dependent conformational entropy of mixtures of ligands bound to curved space and having different length? I would suggest that you write a post about it to better inform your followers.

            Best,

            Pep

          • Pep,

            I’ll deal with your points one by one.

            … yet of course the features have to be reproducible after proper statistics and distinction between noise and signal are carried out.

            Except, in this particular case, no “proper statistics and distinction between noise and signal” have been carried out for the Yu and Stellacci paper in Small we were discussing. The analysis was done on the basis of individual images. While a (thoroughly unconvincing) statistical analysis was claimed to have been carried out for other papers, why has Prof. Stellacci yet to send me the raw data despite my requesting it four months ago? (See this post at Raphael’s blog for more details.)

            I could continue at length about the wealth of inconsistencies in the STM data but, again, this is not the appropriate forum.

            I am not going to post on Raphaël’s blog because it is an unfair forum, where some arguments are bent and others are ignored to fit a story that is only convincing to the gullible (please excuse my forthrightness).

            Wow – the “gullible”?! The critiques at Raphael’s blog are supported by detailed – at times, forensic – scientific analysis of the data. There are many informed and erudite comments at Raphael’s blog. You do Raphael’s readership a great disservice – and yourself no credit at all – by describing them as “gullible”. Am I also one of the “gullible”?!

            Raphael exerts no editorial control at his blog. There is no “censoring”/modification of comments – the entire debate/controversy is laid bare. I therefore do not understand how you can claim that “arguments are bent”. Your (and everyone else’s) arguments are there for everyone to see. The striped nanoparticle debate is a great exemplar of the benefits of post-publication review, which, from the editorial you highlighted in a previous post, I thought you were strongly in favour of?

            Out of interest, have you investigated or studied the principles behind the organization-dependent conformational entropy of mixtures of ligands bound to curved space and having different length? I would suggest that you write a post about it to better inform your followers.

            As you are fully aware, I and others have commented on the simulations previously at Raphael’s blog. The question of how well the simulations represent the experimental results is a moot point and I am not going to pursue it here. Why would nanoparticles have “curved” surfaces in any case? As someone who has studied surfaces you will know that surface free energy depends critically on crystallographic orientation. But, again, that’s a technical argument best suited to Raphael’s blog.

            You, along with Prof. Stellacci, seem to believe that if stripes are found on nanoparticles (either in simulations or in reality), this somehow vindicates the previously published STM work. This misses the point entirely. Even if credible high-resolution images of stripes on particles (using, e.g., NC-AFM under UHV conditions at 5 K) were to be observed, this would not in any way mean that the experimental protocols and data analysis methodology described in the earlier work suddenly become valid.

            And, finally, my “followers”? I don’t have a Twitter account so this is just a little bit messianic, don’t you think?!

            All the best,

            Philip

    • Whoever ‘Curious scientist’ is, he/she is raising a question which is indeed puzzling. How did those 25+ papers pass peer review (4 in Nature Materials)?

      All the SPM experts I have talked to, and all those (unfortunately very few, maybe because of the SEP problem) who have expressed a view publicly, share Philip’s assessment of the SPM data. See this post for a summary of the online discussions since the publication of ‘stripy revisited’ and the response:
      http://raphazlab.wordpress.com/2013/03/05/three-months-of-stripy-nanoparticles-controversy/

      One (unsatisfactory) scenario is that those articles have not been evaluated by SPM experts but by ‘nanoparticle’ experts, and that once the first paper had been accepted those non-SPM experts did not question the SPM data.

      This scenario does not explain why claims completely unsubstantiated by any experimental evidence have been accepted in high impact journals. For one recent Nature Materials example, see:
      http://raphazlab.wordpress.com/2012/12/10/gaping-holes-in-the-gap/

      It also neither explains nor justifies the authors’ effective refusal to share primary data, or the five cases of data re-use which have led to two corrections (one in Nature Materials, one in PNAS) to date.

  3. Interesting stuff. But I have to say there are issues with “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.” People are people. They are very good at thinking they’re impartial and have no self-interest. I fear it’s a rare man who will stand up to say “This groundbreaking brilliant paper demonstrates that what I’ve been saying for twenty years is wrong. I am no longer the expert in the field, this guy is”. Leave it to the experts and ignore the wisdom of crowds, and there’s a risk that your field turns into a dead pool. I wouldn’t want that to happen to CMP.

    http://www.freethunk.net/nickkim/peerreview.jpg

    • Hi, John.

      I agree entirely re. the “sociological” pressures influencing peer review. This is why we need to make the process as open as possible (see my comments to Pep Pamies aka “Curious Scientist” above). Why shouldn’t referees’ reports (without attribution) be posted online with the article?

      The important thing is to move away from a culture in which “the medium is the message” and citations are taken as a proxy for quality. We have to get back to actually *reading* the work and to realise that publication is the first, not the last, step in scientific debate.

      • Hi Philip,

        Great suggestion – that *referees’ reports (without attribution) be posted online with the article*.

        I’ve had a few interesting arguments with referees (and with authors), and I think that publishing these would sometimes (hopefully often!) really help other readers understand the key issues involved.

        Might also be a bit of a driver for improving refereeing standards!

        Merv

      • Sounds good Phil. But then a lot of things do. Leave it to the experts and scientific progress is at risk. Make it all open and journals are at risk. Overall I’d say it’s an imperfect world and a balancing act, and the right approach is to recognise it and take baby steps to make it less imperfect. Then maybe you’ll get to hear about the guy doing the electron spin stuff who demonstrates that it’s compound, at c and half c, and real.

  4. “Better to have imperfect metrics than be inefficient at selecting what is worth spending time with.”
    This is a route to disaster. Metrics are not imperfect. They are near useless, and so selecting what to read on the basis of metrics will waste far more time.
    I can rank journals in many different ways and obtain very different results. To take just one example, citation half-life. Some journals such as Nature and J Biol Chem have half-lives of around 9.5 years, whereas there are a host of journals with half-lives >99 years. What does this mean? That the latter journals contain “facts” likely to remain current for a long time. That is all. This doesn’t make one set of journals “better” or “more useful to read” than another. Just different, both useful.
    This is why RCUK have (once again) stated publicly that REF will not use metrics. If assessing the quality of UK science depends on reading, then it is obvious that the practice of science also should use this approach. It is what we teach our students.
    Take the example of a young scientist who makes a breakthrough, but doesn’t have the “name” to get into a so-called “major” journal. I want to read their paper. How do I find it? I scan, I swap information with colleagues, students and postdocs, I go to meetings that encourage people new to the field to present data.

    As for “Experts”, I would draw your attention to Athene Donald’s excellent posts on the Dramatis Personae in science and universities. Note that many of the committee stereotypes may be “experts”, but they will not deliver what you, as an editor, require: an “expert opinion”. The problem you face as an editor is that you cannot easily distinguish who you are dealing with – it is much easier when one is sat in a room (real or virtual) with other people. Hence the importance, as ever, of “post-publication” peer review, open access to data and reproduction of experiments. As soon as concerns are raised, we have a collective duty to double-check that what is published is right. There is a steady stream of examples on Retraction Watch, which shows how poor we are at this, despite it being at the heart of the “scientific method”.

  5. Dave,

    If metrics were useless, the great majority of scientists would not use them.

    And as I argued in my recent editorial, the impact factor is a reasonably good predictive metric for journals, but it does a lousy job of predicting future citations of individual papers or scientists.
    You can also find more on metrics and on post-publication review in this other editorial and in many recent blog posts, such as that of Stephen Curry (see also my comment on that blog post).
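
    To be concrete about what the metric measures: the two-year impact factor is just a journal-level mean. A toy calculation, with numbers invented purely for illustration (none of these counts refer to any real journal):

    ```python
    # Toy sketch of the two-year journal impact factor (invented numbers only).
    # IF(2012) = citations received in 2012 by items the journal published in
    # 2010 and 2011, divided by the number of citable items from 2010 and 2011.

    citations_2012_to_2010_2011_items = 3000   # hypothetical count
    citable_items_2010_2011 = 250              # hypothetical count

    impact_factor_2012 = citations_2012_to_2010_2011_items / citable_items_2010_2011
    print(impact_factor_2012)  # 12.0 -- a mean over the whole journal

    # Citation distributions are highly skewed, so a few heavily cited papers
    # can dominate that mean -- which is why a journal-level impact factor says
    # little about the future citations of any individual paper in it.
    ```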

    As for your comment on journal half-lives, I posted a few days ago an interesting plot on journal half-lives and impact factors, and my interpretation of the data here.

    And by the way, many young scientists without a name who generate great results have also published them in prestigious journals. We scientists may be a tad conservative and protective of status, but we also recognize good work when we see it.

    • “We scientists may be a tad conservative and protective of status, but we also recognize good work when we see it.” I’d say that’s true of materials science Pep. But when it comes to particle physics and fundamental physics, it absolutely is not. If you don’t believe me ask around about how pair production works. Then analyse the consensus given answer very carefully. I’m afraid people are far more convictional and self-deceiving than they realise. When you recognise this, especially in yourself, it comes as a shock.

    • Pep.

      Inspired, in part, by your comment, my next post for physicsfocus is going to focus on the quality vs citations issue, and the importance of embedding post-publication peer review in the primary scientific literature. (The latter is the theme of the Times Higher article I mentioned in an earlier comment in this thread).

      Are citations really a proxy for scientific quality? I can – and will, in my next post – point to some truly beautiful and pioneering pieces of work which have picked up a relative dearth of citations.

      Just because One Direction has apparently outsold The Beatles (in the US) doesn’t necessarily mean that Lennon, McCartney, Harrison, and Starr are less important artistically…

      • Just because One Direction has apparently outsold The Beatles (in the US) doesn’t necessarily mean that Lennon, McCartney, Harrison, and Starr are less important artistically…

        Well, OK, maybe I shouldn’t have included Ringo in the list…

    • Pep,

      We are in agreement that metrics are not useless: they are in fact the best thing we currently have. However, they are also the result of 300 years of journal evolution – i.e., they’re heavily path-dependent. We as a community can do better.

      In our opinion the way forward is to generate more direct metrics during the peer-review process. That is Publons, in a nutshell.

      - Daniel Johnston
      Co-founder of Publons