Looking for the data entry portal? It has closed for ASHG 2017; contact me about data entry or adapting the portal for your own use via Twitter (@NatalieTelis), email (first name dot last name at gmail dot com), or the form on my About page.
Recently I’ve had the opportunity to present my work on a project asking questions about question-answer behavior at conferences. I’ve been asking two big questions — who’s here, and who’s asking? — but I also get asked a lot of questions about the project, so I wanted to write more about its history.
The first question
The beginning was simple. The first conference I ever went to, I participated. I asked questions. And at the end of the day, I realized I was the only woman who had.
“That’s weird, isn’t it?” I remember remarking to a friend. She had won the same fellowship I had, the one that sent me to the conference. “I don’t know,” she told me. “It seems nerve-wracking to ask a question.”
Not to me, I thought. I saw questions as a way of learning. I wrote down questions at every talk I went to, though I didn’t ask them all – sometimes they were answered, or I decided I wasn’t as interested in knowing the answer. It was a device to get myself to engage and participate.
But was I alone?
Figure 1: Visualization of attendance at Biology of Genomes. Attendees were split out by degree status. While the proportion of women (dark blue) versus men (light blue) is different by degree status, it’s not an extreme difference and the meeting settles out at slightly above 30% female.
Being in the room
Ultimately, what I was asking was whether gendered participation differed from representation. In essence, was the population of people participating in the meeting (in this one, very simple way) a random sample of the population there?
It wasn’t simple to decide whether the answer was yes or no. My background is in math, I reasoned, where I was sometimes the only woman. But was that still true here? I couldn’t answer the question quantitatively without knowing more.
And of course I wanted a well-powered quantitative answer. Without a quantitative answer, I reasoned, I’d miss out on being able to measure the effect. And that meant I couldn’t understand whether it was present, or more interestingly, whether it was perturbable. If participation and representation weren’t the same along this axis, could I make them the same?
Quantification meant I needed data. And data was simple to come by. When you sit at a meeting, you observe every question — as long as you aren’t always rushing for coffee between talks. Coffee safely in hand, I was free to record whatever I wanted.
So I started recording the answer to my first question:
Who was asking?
** I’ll say here that the biggest caveat of this work is that, without asking people to identify and gender themselves (problematic for many reasons!), I’m limited to the constructs society leaves us for making assumptions about speakers and askers. These characteristics (like names) are flawed for obvious reasons (as is any binary simplification of a spectrum). Although they provide statistical power at scale, they don’t capture any individual person’s truth.
I also recorded auxiliary information I was interested in. For example, was the question-asker a moderator? (Moderators are supposed to ask questions, but I reasoned their questions could affect non-moderator questions, so I recorded them too.) And what did the question-asker actually say?
But in order to analyze the information I was collecting, I needed to know more about the audience.
And that’s different from meeting to meeting, so take note – from here on out, I’m talking about the American Society of Human Genetics meeting.
Knock knock; who’s there?
Figure 2: Number of abstracts from each state in the Bioinformatics category at ASHG2014-2016. Remarkably, the data we collect to know “who’s there” includes abstracts and affiliations; beyond gender proportions, it tells us a surprising amount about fields all across the US.
To know who was there, I made a big but simple assumption: people presenting talks or posters were definitely there.
This makes sense in principle: presenting guarantees they’ll be at the meeting at some point. Unlike their co-authors or last authors, who might not attend as many talks, I reasoned that the presenting authors of abstracts and posters were a good representation of what an audience might be like at a talk in that same field. They’ll literally be there to attend talks, and so might actually ask questions; most importantly, they reflect the gender ratios across their field.
Figure 3: Proportion of women presenting posters in each session. These proportions influence our expectations about the number of women attending a given talk, and therefore the expected audience we draw questions from. The proportions are extremely variable, ranging from 28% to 69% female; this variance is far higher than chance and statistically significant (p < 1e-4).
There is no one answer to “who’s there.”
Looking across the poster sessions, many differ significantly from the overall proportion of women, which is around 45-49% (depending on the year). At one extreme we have bioinformatics, clocking in at 28% female; at the other end of the spectrum, genetic counseling, ELSI, education, and health services research come in at 69% female.
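The claim that this session-to-session variance is larger than chance can be checked with a simple permutation test: pool all presenters, reshuffle them into sessions of the same sizes, and see how often a spread in proportions this large appears by luck. A minimal sketch, with made-up session counts standing in for the real data:

```python
import random

random.seed(0)

# Hypothetical (n_presenters, n_women) per poster session; invented numbers.
sessions = [(200, 56), (150, 60), (180, 90), (120, 83), (90, 62)]

def spread(counts):
    """Max minus min proportion of women across sessions."""
    props = [w / n for n, w in counts]
    return max(props) - min(props)

observed = spread(sessions)

# Null hypothesis: session labels are unrelated to gender. Pool everyone,
# shuffle, and re-deal into sessions of the original sizes.
pool = [1] * sum(w for _, w in sessions) + [0] * sum(n - w for n, w in sessions)
trials, extreme = 2000, 0
for _ in range(trials):
    random.shuffle(pool)
    i, resampled = 0, []
    for n, _ in sessions:
        resampled.append((n, sum(pool[i:i + n])))
        i += n
    if spread(resampled) >= observed:
        extreme += 1

print(f"observed spread {observed:.2f}, permutation p ~ {extreme / trials:.4f}")
```

With a spread this wide, essentially no shuffled dataset matches it, which is what a p-value below 1e-4 looks like in permutation terms.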
But the fact that the variation was so broad meant I could ask who was asking questions in a few contexts. I collected data in person, in bioinformatics and statistical genetics sessions (my subfields). And my collaborator Emily Glassberg joined the project and collected data from invited sessions across those specialties.
We set our expectation for questions based on the women in our estimated audience. At an ELSI session, we’d expect 69% of questions to come from women. At a bioinformatics-only session, we’d expect 30%; at a statistical genetics session, 40%.
But we were wrong.
Figure 4: Proportion of questions from women (blue bars), relative to the audience expected (red dotted lines). Regardless of the expectation — over all sessions recorded, or male-biased, female-biased, or very close to parity — there is a statistically significant dearth of female questions.
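The comparison in Figure 4 amounts to asking how surprising the observed number of questions from women is, given the expected audience proportion. A minimal sketch using an exact one-sided binomial test; the counts below are hypothetical, not our data:

```python
from math import comb

def binom_pvalue_lower(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing k or fewer
    questions from women if askers were a random sample of the audience."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical session: audience ~40% female, 100 questions recorded,
# only 27 asked by women.
expected_p = 0.40
n_questions, from_women = 100, 27

pval = binom_pvalue_lower(from_women, n_questions, expected_p)
print(f"observed {from_women/n_questions:.0%} vs expected {expected_p:.0%}, "
      f"p = {pval:.4f}")
```

A small p-value here means the shortfall of questions from women is unlikely to be sampling noise from the audience we estimated.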
Present, but silent
We found that overall, women ask about two-thirds of the questions we’d expect. But we were able to ask even more nuanced questions:
Did women ask fewer questions when they were underrepresented? They did, as the Stat Gen / Bioinformatics bar demonstrates.
But increasing representation wasn’t enough. Women still asked fewer questions than expected even in the most female-biased sessions.
And we tested our assumptions about the audience by studying plenary talks. The plenaries have no competing ASHG events, so in theory, the 45-49% of attendees who are women each year all have the opportunity to attend. This meant we could be very confident the audience was nearly 50% female.
And yet, across each category, we found again and again: women ask fewer questions.
More nuance, more power
But we had a lot more than just who asked a question, and during what talk. We also knew a lot about the speakers. And as always, looking for the nuance paid off. We found, curiously, that women preferred to ask questions to women.
And men also preferred to ask questions to men.
Figure 5: Relative to the overall proportion of female speakers in our dataset (0.4), women prefer to ask questions to women, and men prefer to ask questions to men.
This significant difference controls for how many speakers of each gender were available for women and men to direct questions to.
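One way to sketch that control: tabulate asker-speaker gender pairs, then compare each asker group’s share of questions directed at female speakers against the overall share of female speakers. The pair counts below are invented for illustration, with the 0.40 baseline taken from Figure 5:

```python
# Hypothetical (asker_gender, speaker_gender) pairs; counts are made up.
pairs = ([("F", "F")] * 30 + [("F", "M")] * 30 +
         [("M", "F")] * 60 + [("M", "M")] * 120)

def share_to_women(asker_gender):
    """Fraction of this group's questions directed at female speakers."""
    speakers = [s for a, s in pairs if a == asker_gender]
    return sum(s == "F" for s in speakers) / len(speakers)

baseline = 0.40  # overall proportion of female speakers (Figure 5)
print(f"women -> women: {share_to_women('F'):.2f} (baseline {baseline})")
print(f"men   -> women: {share_to_women('M'):.2f} (baseline {baseline})")
```

In this toy dataset women direct half their questions to women (above the 0.40 baseline) while men direct a third of theirs to women (below it), the same qualitative pattern as Figure 5.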
There are other trends we find, like trends in word use and in follow-up questions, but the most important takeaway is that there’s a lot of nuance to participation and representation. It’s not as simple as saying 50% of the people here are women, so we’re done. And it might not even be as simple as saying women asked questions, when they might not be asking them of the same network of scientists.
Asking the crowd
One of our aims was to quantify the trend. But we also wanted to understand: how malleable is it?
I had the opportunity to present these results at ASHG 2017. And both to encourage a kind of democratic science that I really believe in (more on this later), and to measure the effects of a major presentation as intervention, we developed a crowd-sourcing platform to collect and record questions.
The platform was really successful: more successful than we were. Against the six hundred or so questions we collected ourselves over three years, the crowd recorded over 1,000 questions from nearly 50% of the talks at ASHG 2017. Almost a hundred different participants recorded anywhere from one to almost one hundred questions each.
Figure 6: All the crowdsourced questions recorded at ASHG. Each individual recorder has their own line across those questions. These questions cover just under 50% of the talks presented at ASHG.
This gives us incredible power to ask questions even wider-ranging than our initial ones. Of course, the data isn’t perfect. But with so much of it, we might have the power to look at even more:
- Questions were overwhelmingly recorded by raters staying in sessions. Since we know the order of talks, we can estimate how long each talk lasted, from when the previous talk’s questions end to when the next talk’s questions start. Do men present longer talks than women?
- The same timing information lets us ask whether men ask longer questions than women do, by looking at the gaps between consecutive questions.
- How do the ratios change after the plenary? Do they change at all?
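The timing idea in the first two bullets can be sketched from question timestamps alone: short gaps between consecutive questions fall within one Q&A period, while a long gap likely spans an entire talk. A minimal sketch with made-up timestamps; the 10-minute threshold is an arbitrary assumption, not our actual cutoff:

```python
from datetime import datetime

# Hypothetical timestamps of consecutive recorded questions in one session.
timestamps = ["09:02", "09:04", "09:05", "09:21", "09:23"]
times = [datetime.strptime(t, "%H:%M") for t in timestamps]
gaps_min = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]

# Gaps over 10 minutes likely span a whole talk between two Q&A periods;
# shorter gaps are back-to-back questions within one Q&A period.
talk_gaps = [g for g in gaps_min if g > 10]
question_gaps = [g for g in gaps_min if g <= 10]
print("inferred talk lengths (min):", talk_gaps)
print("within-Q&A gaps (min):", question_gaps)
```

With real data, the inferred talk lengths could then be split by speaker gender, and the within-Q&A gaps by asker gender.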
Participation isn’t representation, and vice versa
We’re continuing to analyze the data from ASHG 2017 to understand how much impact talking about these trends and crowdsourcing data collection could have.
But what became really clear to us is that quantitatively, the group of people represented at a meeting is not always the same as the group of scientists participating at that meeting.
It is mathematically accurate to say that ASHG is nearly 50% female. But that’s not a sufficiently nuanced quantification of ASHG diversity. Overrepresentation in one field doesn’t cancel underrepresentation in another.
And even given the context of representation, we can tell that the people asking questions at a meeting aren’t the same** as the people attending the talks.
**We’ve thought about how they may differ, and some of our detailed methods can be found in our FAQ.
So a more nuanced quantification of demographics gives us the power to dig past summary, deeper into the statistics of representation. And along the way we find, regardless of the context: participation isn’t representation.
Which is great for us. We get to keep trying to ask questions about questions, and drilling down into the quantitative, measurable mechanics of these phenomena.
Want to know the answers? Well… stay tuned. =)