A Q&A with Vincent Larivière

Vincent Larivière (Credit: Amélie Philibert)

In 5 seconds

In a new book, the metrics specialist takes a critical look at the output of researchers worldwide and how they measure up.

Vincent Larivière is a professor at UdeM's School of Information Science and the co-author, with Cassidy Sugimoto of Indiana University Bloomington, of Measuring Research, a new book published in January by Oxford University Press. It's part of OUP's popular series What Everyone Needs to Know.

What does 'measuring research' mean, exactly?

The book is about the ecosystem of indicators that are increasingly used to measure research. There's an entire spectrum of researchers and administrators who use indicators to understand and evaluate research. The book provides a comprehensive and accessible overview of all of the indicators, the data sources and their limitations, and the contexts in which they should be applied. There's a lot about what you should do with indicators, but also, and I would say even more, about what you should not do.

Such as?

Such as not evaluating individuals. To assess individuals, we already have a very good methodology called peer review, which has been widely used for decades. However, when you want to assess the research of an entire university, you would need quite a lot of peers, so there's a sweet spot there for quantitative analysis, which allows you to measure research activity through publications and scientific impact through the citations those groups' papers receive.

So there's a difference between activity and impact. How is measuring both of these important for the advancement of science?

Measuring activity allows you to gauge the intensity of research on a given topic by the various units. You can see its impact in terms of the reception it gets from the scientific community, and in that way you're actually able to understand what type of research gets more visibility or is a hot topic. Of course there are a lot of limitations to these things; that's why we call them indicators. They're not the reality; they're a social construct.

In a recent interview in Nature, your American co-author talks about how "indicators can be extremely useful in countries with high degrees of cronyism, as objective measurements to counter the old boy network." How so?

In some places in the world, rewards are given based on reputation rather than on actual achievements. With indicators, you can more objectively measure the degree to which someone or some group has contributed to the advancement of science – not based on reputation but on things that have appeared in print and that have been used in the scientific community. These indicators act as a kind of counterbalance to reputation-based rewards.

You also advocate in favour of more open data. How does measuring research help achieve that?

Right now, most of the databases that are used to measure research are privately owned, and as a result if you don't have access to them you're stuck. I argue that just as research papers should be open, so should the references and the metadata in them, so that scholars can create their own databases and not just be stuck with the low-quality data they have now. Scholarly communication needs to take back control of the indicators that are used to evaluate it.

So you wouldn't have to rely on Web of Science and others to measure the benefits?

Exactly. We would actually rely on ourselves with tools that we develop ourselves.

Scholars are often much happier to publish in international journals than in journals closer to home. You think that's a mistake?

Here, we have to make a big distinction between the natural sciences and the social sciences and humanities. In physics, the electron behaves in the same manner in all countries, so it makes sense to publish in international journals. But in most fields of the social sciences and humanities, research is grounded locally and is often linked to language, so in those fields it's quite important to publish locally, because that's actually where your community is. Some of the bibliometric indicators that are now used actually push scholars to publish in international journals, despite the fact that their object of study might not be international. We need to counterbalance that with very strong national journals and include them in the various bibliometric evaluations, so that there are incentives for scholars to continue to publish there and work on those national topics.

How would you encourage them to do that?

Well, they need to be included in the databases. It's basically an issue of coverage. Most of the databases that exist right now have, let's say, an Anglo-American bias. That means that if you want to be measured, you need to be published in English in those journals. It also means you have to work on topics that are more international in scope.

As a Quebec researcher, that concerns you?

Absolutely. There are actually a lot of scholars in Quebec who are mis-measured by most of the databases, especially in the social sciences and humanities.

Because of French, the language they publish in?

Because of language, yes, and because they publish books. Books are quite often excluded from the databases, as well.

About the book

Measuring Research: What Everyone Needs to Know

by Vincent Larivière and Cassidy R. Sugimoto

Oxford University Press: 2018.
