The revised Principles and Implementation document for Plan S remains, overall, quite friendly to the big commercial publishers, with one glaring exception. With its Principle 10, Coalition S takes a bold stand: “The Funders commit that when assessing research outputs during funding decisions they will value the intrinsic merit of the work and not consider the publication channel, its impact factor (or other journal metrics), or the publisher”. While no mechanism for making this happen is outlined, the very fact of saying it is an enormous step in the right direction.

Principle 10 directly challenges the idea of journal brand, which is at the heart of the prestige economy of academic publishing, and which big commercial publishers have been cynically promoting and exploiting for decades. Commercial publishers have even adopted an explicit business model of tying Article Processing Charges (APCs) to journal brand and prestige, based on the value that academics perceive in being published in a particular venue.

Principle 10 negates that business model by making journal brands irrelevant to the evaluation of scholarly productivity. It would mean, in theory at least, that journals don’t matter, apart from their ability to disseminate and archive. More precisely, it would mean that it doesn’t matter which journal, or which publisher, publishes the work.

It would mean that it makes no sense to pay a higher APC to be published in one journal rather than in another. It is possible that Plan S drafters see this merely as a supporting measure for their intention to control and cap APCs, but it has much broader implications for the long-term evolution of the academic publishing landscape.

A journal’s brand value (and consequently the subscription and APC rates it can charge) has long been tied to the Journal Impact Factor (JIF), but even the big commercial publishers now realize that this will no longer fly, Principle 10 or not. Some current approaches to the problem are less radical than Principle 10.

In a recent comment published in Nature, for example, Wouters et al. (2019) less ambitiously try to salvage the severely broken journal metrics system by proposing a more varied and broad-based set of metrics. The Journal Impact Factor, the main journal metric that academics and institutional bodies use as a shortcut for evaluating research and researchers, has long been known to be highly problematic, especially when used in isolation as an indicator of the quality of scholarship.
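
For reference, the JIF is nothing more than a two-year citation average. The JIF of a journal for year $Y$ is defined as

$$\mathrm{JIF}(Y) = \frac{C_Y}{N_{Y-1} + N_{Y-2}},$$

where $C_Y$ is the number of citations received in year $Y$ by items the journal published in years $Y-1$ and $Y-2$, and $N_{Y-1}$ and $N_{Y-2}$ are the numbers of citable items it published in each of those years (the symbols $C$ and $N$ are mine, introduced only for illustration). It is a journal-level mean over a highly skewed citation distribution, and it says nothing about the citation performance, let alone the quality, of any individual article.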

It is no longer controversial that the JIF is a poor measure of scholarship quality, but Wouters et al. delve a bit deeper into the problem: “Indicators, once adopted for any type of evaluation, have a tendency to warp practice”. The JIF, as a narrow, single indicator, has certainly done a lot of warping.

It tends to encourage people to pursue projects likely to be published in high-JIF venues at the expense of potentially more innovative and more valuable research. As the relatively new joke goes, academics used to apply for grants so they could publish research results; now they publish research results so they can get grants. The implication is that getting grants heavily depends on where those results are published.

Beyond perversely setting research policy, JIF is used as a currency that affects who can get into academia at all. It reinforces existing power structures and limits the diversity of ideas and people contributing to scholarship. I agree with Wouters et al. that at the very least, journal metrics should be much broader than JIF.

However, Principle 10 renders even their more nuanced approach to journal metrics moot, and gets in front of it, by specifying that the funders will not consider the JIF “or other journal metrics”. It is high time that academics committed to evaluating academic contributions for their scholarly value rather than for the venue in which they appear.

Yes, it means more work when evaluating grant applications, job candidates, and tenure and promotion cases. It does mean we have to do less of something else with our limited time. But having done it for a while now, I can say it is worth the investment. I have learned a great deal more than I would have by simply scanning down the list of journals on someone’s CV, looking for prestigious keywords. I have discovered some contributions I wouldn’t otherwise have found. I have decided that others were not as interesting or significant as I had hoped they would be.

It also means that we should change some of the ways we do things. We should ask applicants, whatever they are applying for, to send us their two or three most relevant and significant contributions, no matter how many they have. We should even read them in order to evaluate them.

It remains to be seen whether the Coalition S funders will take Principle 10 seriously. I hope they do. More significantly, it remains to be seen whether they will convince the academics who form their evaluation committees to respect it. It is in our long-term interest as a community to do so. But now that it is on paper, so to speak, we can point to it, and hold the funders accountable for its implementation.
