We will start with the Douglas G. Altman Lecture, by Malcolm MacLeod (online presentation) –@maclomaclee.bsky.social, with “Does the Journal Article Have a Future?” MM is presenting online because it is hard to enter the US, and in protest of a hostile environment for scientists.
MM: Scientific publishing: The denuding of public assets to augment private or corporate wealth. Some scientific publishers make lots of profits. They are supposed to add value, but do they? [I have trouble hearing what the speaker is saying].
MM: It can take a long time between submission and publication; some of what gets published is biased (positive results) or fabricated (paper mills). Metaresearch papers often describe problems but do not offer solutions.
MM: Actions: * Require all research to have an institutional sponsor that guarantees the work was done as described * Support it with a public protocol * Conduct in-house statistical/methodological review * Facilitate, not obstruct, evidence synthesis efforts (and he will run the Berlin Marathon!)
Discussion: * More likely to review to/for non-profit publishers * With the current political situation in the US, where NIH researchers will now have to pay more to make their research open, is the era of the journal article over?
Bias, Study Outcomes, and Reporting Concerns
After today’s opening lecture, we will continue with the next session: “Bias, Study Outcomes, and Reporting Concerns“, moderated by Steven Goodman@stevengoodman.bsky.social – who is proud to say the Stanford METRICS institute is not making a lot of money 🙂
First: Jae Il Shin with ‘Immortal Time Bias Prevalence and Effects on Estimates in Systematic Reviews and Meta-Analyses‘. Immortal time bias = ? (as far as I can tell, a stretch of follow-up during which one group cannot, by design, experience the outcome – but I cannot follow this talk because the introduction slide was shown too briefly).
Sorry, folks – if I cannot grasp the topic, I cannot summarize this talk in posts here. This paper might be helpful to introduce the topic: jamanetwork.com/journals/jam… (“cognizant of ITB”?). Anyway, here is the speaker’s conclusion slide, for those who are interested.
Some of the questions from the audience are so long… more like stories, and when they finally get to the point of asking the question, they interrupt themselves with more comments.
Next: Yiwen Jiang (online) with ‘Effect Estimates for the Same Outcomes Designated as Primary vs Secondary in Randomized Clinical Trials: A Meta-Research Study‘ [I do not even understand the title – how can the same outcome be both primary and secondary? Perhaps: the same outcome designated primary in some trials but secondary in others? – this session is clearly over my head]
This talk again is full of terms that I do not understand enough to put it into a post here. But here is the last slide of Dr. Jiang’s talk:
I skipped some of the morning talks to see this: Chicago as seen from the Chicago River, on a wonderful boat tour by the Chicago Architecture Center. Shout out to volunteer Bill, who did a great job making us look at all these tall buildings and their different styles, from a different perspective.
While we were doing the boat tour, we missed several talks, including a very spectacular, Star Wars-themed talk by Nihar Shah.
Nihar Shah absolutely winning at presentations with a musical fanfare intro and a costume change to discuss anonymizing reviewers to each other in peer review discussions (I have some bias as a Star Wars fan!) #PRC10
Back from lunch, we continue #PRC10 with ‘Editorial and Publishing Processes and Models‘, starting with a talk by Christos Kotanidis, with ‘Changes to Research Article Abstracts Between Submission and Publication‘ – CK: we looked at all original research articles submitted to @nejm.org in 2022.
CK: we matched them with papers not submitted to NEJM. We scored changes in RCTs based on trial design, primary outcome, adverse events, and conclusions, resulting in a TPAC score (+4 = published version better; -4 = submitted version better). High-JIF journals had lower scores (0.6) than others (0.75)
CK: Most often, the abstract conclusions were changed. While on average, papers improved, that number was lowest among lower-impact journals. Read more here: www.acpjournals.org/doi/10.7326/…
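As I understood the talk, the TPAC score sums a per-domain comparison of the submitted and published abstracts. Here is a minimal sketch of that arithmetic – the ±1-per-domain rating rule is my assumption, not something the speaker spelled out:

```python
# Hypothetical TPAC-style score: four domains (Trial design, Primary
# outcome, Adverse events, Conclusions), each rated +1 if the published
# abstract is better, -1 if the submitted version was better, 0 if equal.
# The per-domain rating rule is an assumption for illustration only.

DOMAINS = ("trial_design", "primary_outcome", "adverse_events", "conclusions")

def tpac_score(ratings: dict) -> int:
    """Sum the per-domain ratings; the result ranges from -4 to +4."""
    for domain in DOMAINS:
        if ratings[domain] not in (-1, 0, 1):
            raise ValueError(f"rating for {domain} must be -1, 0, or +1")
    return sum(ratings[d] for d in DOMAINS)

# Example: two domains improved in the published version, one unchanged,
# one was better in the submission.
score = tpac_score({
    "trial_design": 1,
    "primary_outcome": 1,
    "adverse_events": 0,
    "conclusions": -1,
})
print(score)  # 1
```

On this reading, a journal-level average of 0.6 vs 0.75 would mean abstracts improved slightly between submission and publication in both groups, just less so at high-JIF journals.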
Discussion: * Perhaps high impact factor journals have more resources to make editorial changes. * Your paper is behind a paywall!
Next: Aaron Clauset with ‘Manuscript Characteristics Associated With Editorial Review and Peer Review Outcomes at Science and Science Advances‘ – Elite journals have profound influence in science discourse/careers – a lot of submissions get desk-rejected.
An anonymous data set of 110k submissions covering 2015-2020 –> what author/manuscript characteristics correlate with editorial/review success? Editorial review is a much stronger filter than peer review: Science sends only 17% of manuscripts to review, and only 6% are accepted.
High institutional prestige, geography, topic, and large team size are strongly correlated with success. Small associations for author gender. Editors have stronger correlations than authors. Science (professional editors) and Science Advances (academic editors) have similar profiles.
Discussion: * Can you measure quality? Peer review might be the best measure. * Can authors appeal decisions? Yes, but that data is messy and we did not use it for this dataset. * Would blinding review for authors/institutions make a difference?
Next up: Nicola Adamson (who talks very fast) with ‘Investigating Changes in Common Vocabulary Terms in eLife Assessments Across Versions in a Publish, Review, Curate Model‘ – In Oct 2022 eLife announced a new model, “Publish, Review, Curate”; papers sent out to peer review require a preprint.
NA: Manuscripts are assessed for significance and strength, using qualifying terms (landmark – useful). We looked at the distributions of terms in first/final versions. Weaker papers received higher scores after revisions. (Even the transcription screen cannot keep up with this speaker.)
NA: Conclusions: * Authors revise to improve work in most cases, particularly where it was first rated as incomplete or inadequate. * Most versions of record are declared after single round of revision. * Significance of findings terms change less often than strength of evidence terms.
Discussion: * Do papers still improve after the first round of revision? * The weakest score term was ‘useful’ – were there no papers that were not useful? – not useful papers have already been filtered out by an editor * Can you correlate metrics with scores of e.g. citations? Not yet.
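A sketch of how such term distributions could be compared across versions – the strength-of-evidence ladder below is my recollection of eLife’s published vocabulary, and the ordinal-rank comparison is my own illustration, not necessarily the authors’ method:

```python
# Assumed eLife strength-of-evidence ladder, weakest to strongest
# (from my recollection of eLife's published vocabulary).
STRENGTH_TERMS = ["inadequate", "incomplete", "solid",
                  "convincing", "compelling", "exceptional"]

def rank(term: str) -> int:
    """Ordinal position of a qualifying term on the ladder."""
    return STRENGTH_TERMS.index(term)

def improved(first: str, final: str) -> bool:
    """Did the final version receive a stronger term than the first?"""
    return rank(final) > rank(first)

# Example: a paper first rated "incomplete", later rated "solid".
print(improved("incomplete", "solid"))   # True
print(improved("convincing", "solid"))   # False
```

Mapping terms to ordinal ranks like this would let one test the talk’s claim that papers first rated “incomplete” or “inadequate” improved most after revision.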
There will now be a coffee break and a poster session of about an hour.
Coffee break and poster viewing
There is just one coffee pot (the other two are decaf and hot water) and TEN milk pots. This is not a good ratio 🤣
Posters!
Coffee has been ratioed even more 😜
Peer Review Times and Payment Incentives
Late afternoon session, moderated by Kirsten Bibbins-Domingo @kbibbinsdomingo.bsky.social: “Peer Review Times and Payment Incentives” with Emilie Gunn@emiliemgunn.bsky.social about “Results of Testing the Gold Standard 2-Week Reviewer Deadline” Editors at JCO Oncology Practice struggle to find reviewers (like all editors!).
EG: Would changing the review time to three weeks help? Offering the 21-day deadline actually reduced reviewer conversion. Reviewers typically submit within 1-2 days of the deadline regardless of the total time given. Should we shorten the deadline instead? Add the EiC’s name?
Next, ‘Analysis of Decisions and Lead-Time in Ethical Review Boards in Sweden‘ from Emmanuel Zavalis. 20K applications 2021-2023, increasing over time; half of the submissions are amendments, mainly from healthcare/universities. 90% of applications are accepted (some need revisions).
EZ: Many applications are not approved within the required 60 days – the delays have consequences for postdocs on a 2-year project, who might switch to a different topic. We want to collect further data from other countries to compare. Open to suggestions.
Next: David Maslove with ‘Monetary Incentives for Peer Review at a Medical Journal: A Quasi-Randomized Experimental Study‘. Should we pay peer reviewers? We sent 2 letters; one offered US $250 to review for Critical Care Medicine. Study: journals.lww.com/ccmjournal/a…
DM: Incentivized reviewers had a slightly higher positive response rate and were a bit faster in sending in their reviews. Surprisingly, the incentivized manuscripts had a slightly longer time-to-acceptance. Peer review time, however, is just a small fraction of the total editorial time.
Discussion: * Were those US or Canadian dollars? US * Should we then pay peer reviewers? It does not seem to have much effect. * Effect on rejection rate? No real effect. * Did you look at the quality of reviews? Should we pay for a horrible review? * Might create jealousy among those not paid.
Next: ‘Exploring Views on Remuneration for Review: A Survey of BMJ’s Patient and Public Reviewers‘ with Sara Schroter@bmj.com: All our reviews are open and published – we integrate patients into all the work we do. Patients are asked to review as well and get the same subscription rewards.
SS: In Nov 2024, we offered a choice of 50 pounds or a 12-month subscription. Survey: Would you be more likely to review if we offered 50 pounds or a subscription? Patients like this idea and found it important that they were recognized, but did not think it would change their review.
SS: But there were negative comments as well: it would increase tax-return work, might take away their benefits, and there were concerns about ethics or the influence on review quality. Conclusions: mixed views – but important to provide flexible options.
Discussion: * We all are or will be patients. Would it be problematic if we offer different incentives to academic reviewers vs patients?
Journal Prestige Can and Should Be Earned
Last talk: Simine Vazire@simine.com, with ‘Journal Prestige Can and Should Be Earned‘. SV: I wear many hats. As Editor in Chief of Psychological Science I would love my journal’s name to mean something, but on the other hand, should we value one journal over another?
SV: Does a journal’s prestige track its quality? – probably. But among top-tier journals, it might be messier. And a lot of peer review is a black box: we do not know what journals are doing in terms of quality. There should be more transparency about this process.
SV: The time has come to ask more of journals. Journals should state their goals, give us information to evaluate them, and catch and correct errors, biases, and corruption. Nullius in verba (“take nobody’s word for it”). Journals can do more: publish peer review history, require open data, declare CoI, have a policy for appeals, conduct audits.
SV: We can scrutinize journals’ claims of superiority. When journals fail us, they should lose prestige. Currently, a journal’s impact factor suffers no penalty when it publishes irreproducible or fraudulent papers. Errors are not a problem, but preventable errors are.
SV: Which journals should get prestige? Basic requirements: * Transparent submissions so reviewers can evaluate claims * Peer review checks accuracy * Peer review should be transparent so community can evaluate * Rigor, reproducibility, replicability, innovation, impact, novelty, etc.
SV: Transparency is now the default at Psychological Science. We hired our best critics as our editors. New policies: publish peer review history, require open data, declare CoI if editors publish as editors, and conduct reproducibility checks. journals.sagepub.com/doi/full/10….
SV: I would like to ask the scientific community to expect more of journals. We should expect them to be transparent and accountable, just as they expect us to be transparent and accountable. We can choose to review for journals that align with our vision.
Discussion: * With so many papers now being published, how can we focus on reproducibility? It’s extra effort. * Some ‘top’ journals make lots of profit – we should expect them to do more. * Top journals don’t have to do anything. We will still want to publish there.
Discussion: * One person’s (or journal’s, society’s) decision can sometimes change something meaningful. Ripple effect. * How many big / risky studies turn out to be the truth? Can we label studies as risky? Follow up years later? * How much does this cost? Lots of volunteer work!
Discussion: * If we can show it is important and works, it might become sustainable in the future.