The price of a GRFP, part 1

I had some downtime a while back (literally: the cluster I work on was down), so I cracked open an analysis I’ve been doing on the side. I like to alternate between my main analyses and side projects to keep myself working but not burnt out, and so I picked up a dataset I’ve been building for a while: the NSF GRFP awardees.

A Dear Colleague Letter

About 2 years ago, the NSF made a policy change announcement, summarized here (capitalization & other emphasis mine):

NSF will limit graduate students to only one application to the GRFP, submitted either in the first year OR in the second year of graduate school. … GRFP continues to identify and to inspire the diverse scientists and engineers of the future, and especially encourages women, members of underrepresented minority groups, persons with disabilities, and veterans to apply…. This is a more diverse population than admitted graduate students.

Lots of great commentaries have touched on the diversity challenges of the GRFP and the potential effects of the policy change proposed. Others have even looked into some of the representation details themselves in their commentaries.

But I want to ask two questions in a data-driven way this year, the first year that the rule is fully in place**:

Is the NSF fulfilling its mission to inspire “diverse scientists of the future”?

and

How has this policy change impacted diversity?

**While it was announced a while back, this is the first year the rule is fully in effect: last year, students who had applied as first-years were still allowed the second attempt they’d expected to get.

Note that the NSF’s mission isn’t to award the absolute best scientists regardless of diversity. It isn’t an award purely on academic merit, on the number of Science and Nature and Cell papers you’ve got (or even for stellar bioRxiv preprints). It’s aiming to identify and inspire a diverse future.

The NSF GRFP is not an R01. It doesn’t fund high-risk experiments and expect grand returns. For 2,000 people a year, give or take a few, it provides a chance to do research without financial pressure, such as it is: a scant living wage and freedom from financial burden.

But is it providing those resources to people who genuinely don’t have them? Is it funding scientists who are already fully funded, with star-studded credentials, or people who had to take out student loans to get through college?

I downloaded data on 28,106 NSF recipients from 2011-2017 and matched the 700+ institutions they hail from with the College Scorecard, a resource from the US Department of Education with extremely comprehensive** information on accredited colleges in the United States.

**When I say this I’m not kidding. They’ve got the percentage of Pell Grant recipients who died within 2 years of starting at the university. All kinds of intensely random stuff. It’s really cool.

And I used this to study the undergraduate institutions of NSF GRFP recipients. My theory is that properties of these schools reflect recipients’ opportunities prior to the start of their scientific careers as graduate students. While I don’t know the story of any individual NSF recipient, I can say a lot about how diverse their undergraduate schools’ populations are.
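For concreteness, here’s a minimal pandas sketch of the matching and the cost stratification used below. The file names and the awards-file columns are hypothetical; INSTNM and COSTT4_A (average cost of attendance) are real College Scorecard fields. Real institution names need far more normalization than this.

```python
import pandas as pd

# Hypothetical file and column names for the GRFP side; INSTNM and
# COSTT4_A are actual College Scorecard field names.
awards = pd.read_csv("grfp_awards_2011_2017.csv")   # one row per awardee
scorecard = pd.read_csv("college_scorecard.csv")

# Crude name normalization before joining (the hard part in practice)
awards["school"] = awards["baccalaureate_institution"].str.strip().str.upper()
scorecard["school"] = scorecard["INSTNM"].str.strip().str.upper()
merged = awards.merge(scorecard, on="school", how="left")

# Winners per school per year, stratified by cost-of-attendance quartile
per_school = (merged.groupby(["school", "year"])
                    .size().rename("winners").reset_index())
costs = merged.drop_duplicates("school").set_index("school")["COSTT4_A"]
per_school["cost_quartile"] = pd.qcut(per_school["school"].map(costs),
                                      4, labels=["Q1", "Q2", "Q3", "Q4"])
print(per_school.groupby("cost_quartile")["winners"].median())
```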

1. The most expensive undergraduate schools have an extreme excess of recipients.

[Image: school_cost_award_excess.png]

The median school with any GRFP-winning undergraduates has 2 winners in a given year. But stratify schools by cost, and the most expensive tier has over a fourfold excess of winners, while the cheapest schools have extremely few. These tiers correspond to yearly tuition differences of nearly $10k at private schools and $5k at public schools, fairly large differences that compound over 4 years. It could cost an undergraduate $40k extra to attend a school that has any shot at getting them an award.

How are they paying for that?

2. The schools GRFP undergrads go to have smaller Pell Grant populations.

[Image: pell_schools_v_all.png]

The Pell Grant is a subsidy the US government provides for students who need it in order to afford tuition. The grants are limited to first-time bachelor’s degree students in genuine financial need. And notably, although quite a few students receive them across the entire College Scorecard dataset, the proportion needing them at the schools GRFP recipients come from is strikingly lower, at both public and private schools.

To get a sense of the starkest differences, I looked at schools where even 1 undergrad received an NSF from 2011 through 2017.

3. There is a difference of nearly $30k between family incomes at schools with and without even a single NSF recipient.

[Image: Screenshot 2018-04-02 23.22.38.png]

Each College Scorecard school reports a family income for dependent students (i.e., students claimed as dependents by their parents). At both public and private schools, the difference in mean family income between schools with even one NSF recipient (to say nothing of those with outlandishly many) and schools with none is nearly $30k. Ironically, that’s about the size of the extra graduate stipend the winners are about to receive.

4. There are 16% more first-generation undergraduate students at schools with no NSF GRFP undergraduate recipients.

[Image: Screenshot 2018-04-02 23.22.21.png]

At both public and private schools, according to the College Scorecard, there’s a 12-14 percentage-point difference in the proportion of first-generation college students between schools whose undergrads earned even 1 award during 2011-2017 and schools that earned none at all.

“But Natalie, what if the students at these high-income schools who are winning awards are there on scholarships, and your plots don’t represent them?”

Sure, that’s obviously possible and a big caveat. I think there are an incredible number of deserving, hardworking people of diverse backgrounds at top institutions. Certainly everyone who gets the GRFP, and many people who don’t, deserve it.

So really we want to ask: should we consider the NSF GRFP a success if it by and large gives resources to schools that already have the resources to recruit and inspire diversity? What about the incredibly deserving people at other institutions who could truly be inspired by the opportunity to attend graduate school?

“These are honestly just a few measures of economic opportunity and equality. I’d rather see…”

I like your style! To placate you, check out this Shiny app I built, where you can look at a lot more about the schools and compare award winners and non-winners.

GRFP Undergrad Institutions

“So then what are you saying?”

This part of the mission statement has stuck with me throughout this analysis:

GRFP continues to identify and to inspire the diverse scientists and engineers of the future, and especially encourages women, members of underrepresented minority groups, persons with disabilities, and veterans to apply.

This is a great and noble goal. But do I buy that the entire pool of outstanding diverse future scientists is hiding inside the same few halls of learning? No. There are graduate institutions that win an extreme excess of GRFPs where, as I know firsthand (I trained at one!), students are already fully funded, mostly on RAships.

There, the NSF GRFP becomes a feather in the cap, not the guarantor of stability it could be in other circumstances.

So does that fulfill its mission?

——–

But let’s get back to the bigger question. We may not be surprised to find that the NSF GRFP is not awarded to the most diverse, most needy group of future scientists (its mission notwithstanding).

But how has the new policy affected that?

[Image: top20.png]

For more on that, stay tuned for part 2, as I crunch the numbers for the 2018 NSF GRFP relative to the 2011-2017 classes.

How has this policy change impacted diversity?

Interactive data visualizations

As a scientist, when I read and review, I don’t feel satisfied seeing the visuals in print. I pull up the paper on my computer and have the urge to push an axis, to add a variable, to do my own discovery.

Of course, papers and publishing have a place in our scientific community and discourse that’s extremely important. They tell stories. But we’re all scientists and reading stories gives me (us?) the urge to make stories.

So I wanted to reflect that in my visuals, and give people the opportunity to see more of the story than a fixed image can share.

Were this a poster, I wouldn’t have that luxury. But this is the internet, after all. So try out some of my interactive visuals here, and write your own stories.


Want to interact with our data?

The data so far: Questions 2017

Metadata: All presenters, 2014-2017


Questions you might start with…

Pick any word. Who’s using it?

Where do presenters come from?

Who’s getting posters? Who’s getting talks?


We crowdsourced our data. Want to see how?

ASHG2017 Question Portal


Questions and answers about questions

Looking for the data entry portal? It has closed for ASHG 2017; contact me about data entry or adapting the portal for your own use via Twitter (@NatalieTelis), email (first name dot last name at gmail dot com), or the form on my About page.

Recently I’ve had the opportunity to present my work on a project asking questions about question-answer behavior at conferences. I’ve been asking two big questions — who’s here, and who’s asking? — but I also get asked a lot of questions about the project, so I wanted to write more about its history.

The first question

The beginning was simple. The first conference I ever went to, I participated. I asked questions. And at the end of the day, I realized I was the only woman who had.

“That’s weird, isn’t it?” I remember remarking to a friend. She had won the same fellowship I had, the one that sent me to the conference. “I don’t know,” she told me. “It seems nerve-wracking to ask a question.”

Not to me, I thought. I saw questions as a way of learning. I wrote down questions every talk I went to, though I didn’t ask them all – sometimes they were answered, or sometimes I decided I wasn’t as interested in knowing. It was a device to get myself to engage and participate.

But was I alone?

[Image: blog_0.png]

Figure 1: Visualization of attendance at Biology of Genomes. Attendees were split out by degree status. While the proportion of women (dark blue) versus men (light blue) is different by degree status, it’s not an extreme difference and the meeting settles out at slightly above 30% female.

Being in the room

Ultimately, what I was asking was whether gendered participation differed from representation. In essence, was the population of people participating in the meeting (in this one, very simple way) a random sample of the population there?

It wasn’t simple to decide if the answer was yes or no. My background is in math, I reasoned, and sometimes I was the only woman. But was that still true here? I couldn’t answer the question quantitatively without knowing more.

And of course I wanted a well-powered quantitative answer. Without a quantitative answer, I reasoned, I’d miss out on being able to measure the effect. And that meant I couldn’t understand whether it was present, or more interestingly, whether it was perturbable. If participation and representation weren’t the same along this axis, could I make them the same?

Quantification meant I needed data. And data was simple to come by. When you sit at a meeting, you observe every question — as long as you aren’t always rushing for coffee between talks. Coffee safely in hand, I was free to record whatever I wanted.

So I started recording the answer to my first question:

Who was asking?

** I’ll say here the biggest caveat of this work is that, without asking people to identify themselves and gender themselves (problematic for many reasons!), I’m limited to the constructs society leaves us to make assumptions about speakers and askers. These characteristics (like names) are flawed for obvious reasons (as is any binary simplification of a spectrum). Although they provide statistical power at scale, they don’t capture any individual person’s truth.

But I also recorded auxiliary information I was interested in. For example, was the question-asker a moderator? (Moderators are supposed to ask questions, but I reasoned their questions could affect non-moderator questions, so I recorded them too.) And what did the question-asker actually say?

But in order to analyze the information I was collecting, I needed to know more about the audience.

And that’s different from meeting to meeting, so take note – from here on out, I’m talking about the American Society of Human Genetics meeting.

Knock knock; who’s there?

[Image: Screenshot 2017-11-02 16.14.23.png]

Figure 2: Number of abstracts from each state in the Bioinformatics category at ASHG 2014-2016. Remarkably, the data we collect to know “who’s there” includes abstracts and affiliations; beyond gender proportions, it tells us a surprising amount about fields all across the US.

To know who was there, I made a big but simple assumption: people presenting talks or posters were definitely there.

This makes sense on principle: presenting guarantees they’ll be at the meeting at some point. Unlike co-authors or last authors, who might not attend as many talks, presenting authors, I reasoned, are a good representation of what an audience might be like at a talk in that same field. They’ll literally be there to attend talks, and so might actually ask questions; most importantly, they represent the gender ratios across their field.

[Image: blog_3-01.png]

Figure 3: Proportion of women presenting posters in each session. These proportions influence our expectations about the number of women attending a given talk, and therefore our expected audience that we draw questions from. The proportions are extremely variable, with a range from 28% – 69% of attendees being female; this variance is extremely high and quite statistically significant (p < 1e-4).


There is no one answer to “who’s there.”

Looking across the poster sessions, many differ significantly from the overall proportion of women, which is around 45-49% depending on the year. We have bioinformatics clocking in at 28% female; at the opposite end of the spectrum, genetic counseling, ELSI, education, and health services research come in at 69%.

But the fact that the variation was so broad meant I could ask who was asking questions in a few contexts. I collected data in person, in bioinformatics and statistical genetics sessions (my subfields). And my collaborator Emily Glassberg joined the project and collected data from invited sessions across those specialties.

We set our expectation for questions based on the women in our estimated audience. At an ELSI session, we’d expect 69% of questions to come from women; at a bioinformatics-only session, 30%; at a statistical genetics session, 40%.

But we were wrong.

[Image: blog_1.png]

Figure 4: Proportion of questions from women (blue bars), relative to the audience expected (red dotted lines). Regardless of the expectation — over all sessions recorded, or male-biased, female-biased, or very close to parity — there is a statistically significant dearth of female questions.

Present, but silent

We found that overall, women ask two-thirds of the questions we’d expect. But we were able to ask even more nuanced questions:

Did women ask fewer questions when they were underrepresented? They did, as demonstrated by the Stat Gen / Bioinformatics bar.

But increased representation wasn’t enough. Women still asked fewer questions than expected even in the most female-biased sessions.

And we tested our assumptions about audience by studying plenary talks. The plenaries have no competing ASHG events, so theoretically, the 45% – 49% of women attending each year should all have the opportunity to attend the plenary talks. This meant we could be very certain the audience was nearly 50% female. 

And yet, across each category, we found again and again: women ask fewer questions.

More nuance, more power

But we had a lot more than just who asked a question, and during what talk. We also knew a lot about the speakers. And as always, looking for the nuance paid off. We found, curiously, that women preferred to ask questions to women.

And men also preferred to ask questions to men.

[Image: blog_5.png]

Figure 5: Relative to the overall proportion of female speakers in our dataset (0.4), women prefer to ask questions to women, and men prefer to ask questions to men.

This significant difference controls for how many women and men (speakers) were available for women and men (askers) to ask questions of.

There are other trends we find, like trends in word use and in follow-up questions, but the most important takeaway is that there’s a lot of nuance to participation and representation. It’s not as simple as saying 50% of the people here are women, we’re done. And it might not even be as simple as saying women asked questions, when they might not be asking them of the same network of scientists.

Asking the crowd

One of our aims was quantification of the trend. But we also wanted to understand — how malleable is this trend?

I had the opportunity to present these results at ASHG 2017. Both to encourage a kind of democratic science I really believe in (more on this later) and to measure the effect of a major presentation as an intervention, we developed a crowdsourcing platform to collect and record questions.

The platform was really successful, more successful than we were: against our six hundred or so questions over 3 years, it collected over 1,000 questions from nearly 50% of the talks at ASHG 2017. Almost a hundred different participants recorded anywhere from one to almost one hundred questions each.

[Image: blog_4.png]

Figure 6: All the crowdsourced questions recorded at ASHG. Each individual recorder has their own line across those questions. These questions cover just under 50% of the talks presented at ASHG.

This gives us incredible power to ask questions even wider-ranging than our initial ones. Of course, the data isn’t perfect. But with so much of it, we might have the power to look at even more:

  1. Questions were overwhelmingly recorded by raters staying in sessions. We know the order of talks, and therefore how long each talk lasts, from when the previous talk’s questions end to when the next talk’s begin. Do men present longer talks than women? (See the sketch after this list.)
  2. This information also lets us ask whether men ask longer questions than women do, by looking at the gaps between questions.
  3. How do the ratios change after the plenary? Do they change at all?
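As a sketch of idea 1, here’s how talk lengths could fall out of timestamped question records. The schema (talk_id, time) is hypothetical:

```python
import pandas as pd

# Hypothetical schema: one timestamped row per crowdsourced question,
# tagged with the talk whose Q&A it belongs to.
qs = pd.DataFrame({
    "talk_id": [1, 1, 2, 2, 2, 3],
    "time": pd.to_datetime(["16:02", "16:04", "16:18", "16:20",
                            "16:21", "16:33"], format="%H:%M"),
})

# A talk runs roughly from the last question of the previous talk
# to the first question of its own Q&A (idea 1 above).
bounds = qs.groupby("talk_id")["time"].agg(first="min", last="max")
talk_length = bounds["first"] - bounds["last"].shift()
print(talk_length.dropna())   # talks 2 and 3: 14 and 12 minutes
```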

Participation isn’t representation, and vice versa

We’re continuing to analyze the data from ASHG 2017 to understand how much impact talking about these trends and crowdsourcing data collection could have.

But what became really clear to us is that quantitatively, the group of people represented at a meeting is not always the same as the group of scientists participating at that meeting.

It is mathematically accurate to say that ASHG is nearly 50% female. But that’s not a sufficiently nuanced quantification of ASHG diversity. Overrepresentation in one field doesn’t change underrepresentation in another.

And even given the context of representation, we can tell that the people asking questions at a meeting aren’t the same** as the people attending the talks.

**We’ve thought about how they may differ, and some of our detailed methods can be found in our FAQ.

So a more nuanced quantification of demographics gives us the power to dig past summary, deeper into the statistics of representation. And along the way we find, regardless of the context: participation isn’t representation.

Which is great for us. We get to keep trying to ask questions about questions, and drilling down into the quantitative, measurable mechanics of these phenomena.

Want to know the answers? Well… stay tuned. =)

Online Methods (e.g., an FAQ)

There’s a wealth of incredibly interesting questions about questions, as you can imagine! We figured we’d take some of the most common ones we get, and condense them down into one big FAQ.


Do you record/account for question-asker seniority?

The principle underlying this question is as follows: “who’s in the room” varies along many axes besides gender, including academic seniority. Perhaps the population of question-askers is actually a smaller subset of who is literally in the room along such an axis; for instance, maybe only faculty ask questions.

This is challenging for us to evaluate at ASHG on a per-question basis, as it would require identifying question-askers.

However, in smaller study environments, we’ve been able to do something that approximates this: stratify “who’s in the room” along the axis of seniority. For instance, at the Biology of Genomes meeting, the abstract booklet contains PhD / non-PhD status. This makes it possible to separate out faculty and postdocs and look at both of those attendee proportions. As you can see, they are different (PhD holders are less female), but not different enough to explain the observed effect.
[Image: bog_15_demog_and_intervention_comparison-01.png]

Figure: Question-askers (left) at Biology of Genomes 2015 (total questions: 147), compared to proportions of attendees (right). BoG 2015 is chosen because it predates any publicization of our data collection. Note at right that non-PhD-holding attendees are somewhat more female than PhD-holding attendees; however, this difference is substantially smaller than the difference required to explain the proportion of female questioners.

Is your gender classifier accurate for names from other countries?

In short, yes, as much as possible. We use genderizer, available for both Python and R, which draws on hundreds of thousands of names from almost 100 countries. As a result, our classification is as complete as possible given this information, and we achieve a classification rate of about 70% (see below), which we use to estimate the proportions of women and men present.
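We use genderizer itself; purely as an illustration of name-based inference, here’s a Python sketch against the public genderize.io API. Whether that particular service backs the genderizer package is an assumption here, and the 0.8 probability cutoff is invented for the example:

```python
import requests

def infer_gender(name, country=None):
    """Name-based inference via the public genderize.io API.
    ASSUMPTION: this service and threshold stand in for whatever
    the genderizer package actually wraps."""
    params = {"name": name}
    if country:
        params["country_id"] = country   # ISO code, e.g. "US"
    resp = requests.get("https://api.genderize.io", params=params, timeout=10)
    resp.raise_for_status()
    data = resp.json()   # e.g. {"gender": "female", "probability": 0.96, ...}
    if data["gender"] and data["probability"] >= 0.8:   # invented cutoff
        return data["gender"]
    return None   # unclassified; roughly the ~30% of names we can't call

print(infer_gender("natalie"))
```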

How can you be sure your proportions estimated are correct?

Of course, we can’t be certain without a perfect ground truth. But luckily, we’re close! Since 2016, ASHG has internally allowed people to report gender at registration. We compared inferred versus reported genders for 2016 and 2017, and our pipeline’s estimates are extremely similar.

How are the people who ask questions chosen? Could the people choosing them be biased?

This question is undoubtedly informed by the large body of literature confirming that teachers in the classroom spend more time speaking to and interacting with male students; correspondingly, they also call on female students less and interrupt them more. (This is mostly the work of Sadker and Sadker, and is described well in David Sadker’s book or this broader textbook.)

However, ASHG is remarkably egalitarian in this regard, as it uses self-selecting microphone lines. Admittedly, not every session gives every person the opportunity to ask all the questions they want (we record these sessions, too). We also record the positioning of microphones and of speakers at microphones. Since the lines are self-selecting, there’s no need for a moderator or any other potentially biased figure to choose hands in a crowd.

At the Biology of Genomes meeting, where we also collect data, the microphones are held by individuals and move. In this different scheme (which might be slightly more biased, as the individual with the microphone has to move towards someone soliciting it) we still record a similar magnitude of effect [binom(16,147,0.35), p=2e-11] prior to any intervention on our part.
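That bracketed statistic is an exact binomial test, easy to reproduce with scipy; treating it as one-sided is my assumption about how the reported p-value was computed:

```python
from scipy.stats import binomtest

# 16 of 147 questions from women, against an expected audience
# proportion of 0.35 (the BoG numbers quoted above)
result = binomtest(k=16, n=147, p=0.35, alternative="less")
print(result.pvalue)   # on the order of the reported 2e-11
```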


How do you figure out that men ask men, and women ask women, if not all speakers and audiences are in the same room at the same time?

In essence, this question gets at the following idea: what if most women are in rooms with mostly female speakers, such as ELSI sessions, and most men are in rooms with mostly male speakers, such as bioinformatics sessions? Wouldn’t this create a (not-perfectly-symmetric) bias for women to ask questions of women, and men to ask questions of men?

Yes, that’s absolutely right, it would! To account for this, we wanted to test for consistent, within-category bias. In essence, imagine a contingency table of frequencies for each category, set up like this:

                   Male asker   Female asker
Male speaker            p            1-p
Female speaker         1-q            q

What you’d expect, regardless of the session, is to see p and q (the male-to-male and female-to-female question frequencies) carry a little more weight than (1-p) and (1-q).

In particular, you can measure this with the difference pq - (1-p)(1-q): the difference between the products of the same-gender question frequencies and the cross-gender question frequencies. Under the null, the expectation of this difference is 0; a difference greater than zero suggests an excess of same-gender questions.

To test this, we take the questions within each invited-session subcategory and re-assign them. Say a subcategory has 20 questions: we re-assign each one as coming from a female asker to a female speaker, female to male, et cetera, and we do 10,000 such permutations for each subcategory. From these we calculate a mean statistic and look at the distribution of those statistics.
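As a minimal Python sketch of the same idea (not our exact pipeline), assume each question is stored as a (speaker gender, asker gender) pair; shuffling asker labels within each subcategory preserves that session’s marginal frequencies, and we compare the mean statistic:

```python
import numpy as np

def same_gender_stat(pairs):
    """pairs: (n, 2) array of (speaker, asker) labels, 'M'/'F'.
    Returns pq - (1-p)(1-q), where p = P(male asker | male speaker)
    and q = P(female asker | female speaker)."""
    speakers, askers = pairs[:, 0], pairs[:, 1]
    male, female = speakers == "M", speakers == "F"
    if not male.any() or not female.any():
        return 0.0
    p = np.mean(askers[male] == "M")
    q = np.mean(askers[female] == "F")
    return p * q - (1 - p) * (1 - q)

def permutation_test(sessions, n_perm=10_000, seed=0):
    """sessions: list of (n_i, 2) arrays, one per subcategory."""
    rng = np.random.default_rng(seed)
    observed = np.mean([same_gender_stat(s) for s in sessions])
    null = np.empty(n_perm)
    for i in range(n_perm):
        stats = []
        for s in sessions:
            perm = s.copy()
            rng.shuffle(perm[:, 1])   # permute asker genders only
            stats.append(same_gender_stat(perm))
        null[i] = np.mean(stats)
    # one-sided: how often is the permuted skew at least the observed?
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)
```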

Of course we calculate the same statistic for our own dataset, and as you can see, there’s a significant skew in our data (pink) relative to the permuted sets (black).

[Image: distribution of permuted statistics (black) versus observed statistic (pink)]

So we conclude there’s a significant, session-stratification-controlled bias towards female-to-female and male-to-male (same-gender) questions, as opposed to female-to-male and male-to-female (cross-gender) questions (p=8.1e-5).

We verify this result by performing a similar test, not on the frequencies but on the raw contingency tables of question counts in each category. We use the Mantel-Haenszel test (say that one three times, fast, out loud!) to look at the combined odds ratio, and again, across sessions, we see the same consistent trend (p=0.004).
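statsmodels ships a stratified (Cochran-)Mantel-Haenszel test; here’s a sketch with made-up per-session counts, since our real tables aren’t reproduced here:

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical counts: one 2x2 table of question counts per session,
# rows = speaker gender (M, F), columns = asker gender (M, F).
tables = [
    np.array([[30, 10], [8, 12]]),   # e.g., a bioinformatics session
    np.array([[12, 9], [6, 20]]),    # e.g., an ELSI session
]

st = StratifiedTable(tables)
print("pooled odds ratio:", st.oddsratio_pooled)
result = st.test_null_odds()   # Cochran-Mantel-Haenszel test
print("CMH p-value:", result.pvalue)
```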


Since you’re crowdsourcing ASHG2017 collection, how do you know whether people are recording the same talks?

Great question! (Note: this answer pertains ONLY to the new crowdsourced dataset.) Participate at our crowdsourcing portal!

Each device that logs data into our database is anonymized and recorded (and controlled by a human, via CAPTCHA). This is how we build our question-entry leaderboard.


But wait. How do you match all the different recordings together for one talk?

(Note: this answer pertains ONLY to the new crowdsourcing dataset). Our entry-tracking means we can actually do a kind of string-alignment — something many of us geneticists should be familiar with — to ensure we’re matching questions correctly.

For example, imagine that user 1 records the whole question session, user 2 leaves early to catch a talk elsewhere, and user 3 arrives midway and records only the end. As a result, you have something like this:

True string  M M F M M F F M M M
User 1       M M F M M F F M M M
User 2       M M F M
User 3                 F F M M M

You can even see that User 2 and User 3 don’t overlap at all!

However, in computational biology, we’ve developed a lot of methods to align strings and derive a consensus. And in fact, that’s what we do! We borrow standard Bioconductor packages to do a multiple sequence alignment and derive a “consensus” question string. As we continue to collect recordings, each new one simply joins the alignment.
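The real alignment uses Bioconductor (R); as a toy illustration of the idea, here’s a pure-Python sketch that slides each partial recording along the longest one (no internal gaps, which real aligners do handle) and majority-votes a consensus:

```python
from collections import Counter

def best_offset(reference, fragment):
    """Slide `fragment` along `reference` (no internal gaps) and
    return the offset with the most matching characters."""
    best, best_score = 0, -1
    for off in range(len(reference) - len(fragment) + 1):
        score = sum(r == f for r, f in zip(reference[off:], fragment))
        if score > best_score:
            best, best_score = off, score
    return best

def consensus(recordings):
    """Majority-vote consensus over partial recordings of one talk's
    question session, anchored on the longest recording."""
    anchor = max(recordings, key=len)
    columns = [Counter() for _ in anchor]
    for rec in recordings:
        off = best_offset(anchor, rec)
        for i, g in enumerate(rec):
            columns[off + i][g] += 1
    return "".join(c.most_common(1)[0][0] for c in columns)

# The example from the table above: user 1 is complete, users 2 and 3
# caught only the start and the end respectively.
print(consensus(["MMFMMFFMMM", "MMFM", "FFMMM"]))   # -> MMFMMFFMMM
```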