# Quantify everything, all of the time

I recently read the article by Wu et al. in Nature Biotechnology (you can find similar articles in pretty much all of the Nature journals), which analysed data on participants at some virtual meetings over the past couple of years and came to the conclusion that ‘Virtual meetings promise to eliminate geographical and administrative barriers and increase accessibility, diversity and inclusivity’. Which sounds great!

Of course there are certainly some good things to come out of virtual meetings, and many unresolved issues with in-person conferences. When those issues include lack of equality, contributing to climate change, and giving more funding to dubious publishers, it’s certainly a lot easier to write in opposition to these events than in support of them. Though pre-COVID a lot of articles criticising conferences talked about methods for reform, newer articles seem to err more on the side of totally abolishing them.

Personally, I’ve pretty much given up on virtual meetings. I’ve ‘been’ to a number of fully virtual conferences, and while I certainly got something out of each of these meetings, the amount has been declining. The issues I have are:

• They’re no fun, and just the same stuff as day-to-day work/zoom meetings (which I’m pretty sure we’re all sick of).
• There’s no real way to meet people and talk about their research.
• Question and answer sessions are often chaired in a more controlling way – rarely are the critical or difficult questions asked.
• The demands on speakers seem to have become higher, with organisers both demanding a pre-recording well ahead of the deadline, as well as a live talk.
• They’re still really expensive!

## Quality not quantity

The main analysis of the paper compares attendance at the RECOMB conference between 2019 and 2020, the year it became fully virtual (and also free). Attendance grew by about 10-fold. The authors then look at ethnicity (not home country) and gender by analysing participant names. Numbers increased from all regions, and the proportions changed a little. The number of countries with at least one registration was also greater.

However, none of the more qualitative evidence that would tell us about the quality of people’s conference experiences over the past couple of years makes it into these analyses, which are relentlessly quantitative. All registrations are treated as equal, though doubtless they range from watching a single talk to organising the conference.

I have some other issues with this analysis and the extent to which it supports the conclusions, but what I really want to comment on is: why only do this (somewhat complex) quantitative analysis and ignore participant experiences? Why not interview some people across geographies and career levels (those who had been attending conferences for a while, and those new to them) and ask them a few questions about the virtual conference experience?

Just treating all registrations as a positive experience is bound to make free, virtual conferences look more accessible.
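To make this concrete, here is a toy sketch in Python. The numbers and the ‘talks attended’ measure are entirely made up (nothing here is from the paper); the point is only that a headline registration count and an engagement-weighted view of the same attendees can tell very different stories:

```python
# Toy illustration with hypothetical numbers: headline registration counts
# versus an engagement-weighted view of the same attendees.

# Hypothetical attendee records: (year, talks_attended) out of a 20-talk programme.
in_person_2019 = [(2019, 20)] * 300                       # 300 registrants, most stay all week
virtual_2020 = [(2020, 20)] * 300 + [(2020, 1)] * 2700    # 3000 registrants, many watch one talk

def headline_count(registrations):
    """The headline metric: every registration counts as 1."""
    return len(registrations)

def engagement_weighted(registrations, full_programme=20):
    """A crude alternative: weight each registration by the share of talks attended."""
    return sum(talks / full_programme for _, talks in registrations)

print(headline_count(in_person_2019), headline_count(virtual_2020))
# → 300 3000: a 10-fold increase
print(engagement_weighted(in_person_2019), engagement_weighted(virtual_2020))
# roughly 300 vs 435: far less dramatic
```

The first metric makes the virtual year look ten times more accessible; weighting by engagement (under these invented numbers) shrinks the gap considerably. Which story you tell depends entirely on what you choose to count.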

I did think, however, that the discussion section of the article was really quite reasonable:

> Although we strongly believe that in-person conferences have their own benefits, and that no online communication tool can mimic the in-person experience completely, we cannot neglect the multiple advantages that online conferences offer: in addition to providing opportunities to previously under-represented groups to attend global conferences, use of a hybrid format will contribute toward decarbonizing conference travel after the pandemic.

I feel that we as scientists can sometimes stick a little too closely to numbers and counts, and shy away from sources of information that are harder to enter into R/Python. By doing so we can make our analyses less complete, while retaining a veneer of respectability.