Linguists use machine-learning techniques to mine large text corpora and detect how the structure of a language lends meaning to its words. They work on the assumption that terms that appear in close proximity to one another may have similar connotations: dogs turn up near cats more often than canines appear close to bananas.
This same method of burrowing into texts—more formally called the search for distributional semantics—can also provide a framework for analyzing psychological attitudes, including gender stereotypes that contribute to the underrepresentation of women in scientific and technical fields. Studies in English have shown, for example, that the word “woman” often appears close to “home” and “family,” whereas “man” is frequently paired with “job” and “money.”
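The distributional idea is easy to see with off-the-shelf tools. Here is a minimal sketch, assuming the gensim library and a pretrained English GloVe model (neither of which the researchers necessarily used): because such vectors are trained on co-occurrence statistics, words that share contexts end up close together.

```python
import gensim.downloader as api

# Pretrained English GloVe vectors stand in for corpus-trained models;
# they were fit to word co-occurrence statistics, so words that appear
# in similar contexts end up with similar vectors.
vectors = api.load("glove-wiki-gigaword-50")

print(vectors.similarity("dog", "cat"))     # relatively high
print(vectors.similarity("dog", "banana"))  # much lower
```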
The way language fosters gender stereotypes intrigued Molly Lewis, a cognitive scientist and special faculty member at Carnegie Mellon University, who focuses on the subtle ways words convey meanings. Along with her colleague Gary Lupyan of the University of Wisconsin–Madison, she decided to build on earlier work on gender stereotypes to explore how common these biases are throughout the world. In a study published on Monday in Nature Human Behaviour, the researchers find that such stereotypes are deeply embedded in 25 languages. Scientific American spoke with Lewis about the study’s findings.
[An edited transcript of the interview follows.]
How did you come up with the idea for the study?
There’s a lot of previous work showing that explicit statements about gender shape people’s stereotypes. For example, if you tell children that boys are better at being doctors than girls, they will develop a negative stereotype about female doctors. That’s called an explicit stereotype.
But there is little work exploring a different aspect of language: looking at this question of gender stereotypes from the perspective of large-scale statistical relationships between words. This approach is intended to get at whether there is information in language that shapes stereotypes in a more implicit way. So you might not even be aware that you’re being exposed to information that could shape your gender stereotypes.
Could you describe your main findings?
In one case, as I mentioned, we were focusing on the large-scale statistical relationships between words. So to make that a little more concrete: we had a lot of text, and we trained machine-learning models on that text to look at whether words such as “man” and “career” or “man” and “professional” were more likely to co-occur with each other, relative to words such as “woman” and “career.” And we found that, indeed, they were [more likely to do so]—to varying degrees in different languages.
So in most languages, there’s a strong association between words related to men and words related to careers and, at the same time, between words related to women and words related to family. We found that this relationship was present in nearly all the languages that we looked at. And so that gives us a measure of the extent to which there’s a gender stereotype in the statistics of each of the 25 languages we looked at.
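To make that measure concrete, a rough Python sketch of an association score in the spirit of word-embedding association tests might look like the following. The word lists and the pretrained English model are illustrative placeholders, not the study’s actual stimuli or its per-language models.

```python
import gensim.downloader as api

# Placeholder English vectors and word lists, purely for illustration.
vectors = api.load("glove-wiki-gigaword-50")

male   = ["man", "he", "him", "his", "male"]
female = ["woman", "she", "her", "hers", "female"]
career = ["career", "salary", "office", "professional", "business"]
family = ["family", "home", "children", "parents", "marriage"]

def mean_sim(words, attributes):
    """Average cosine similarity between every word/attribute pair."""
    return sum(vectors.similarity(w, a) for w in words for a in attributes) / (
        len(words) * len(attributes)
    )

# A positive score means male words sit closer to career words, and female
# words closer to family words, in the embedding space.
score = (mean_sim(male, career) - mean_sim(male, family)) - (
    mean_sim(female, career) - mean_sim(female, family)
)
print(f"career-vs.-family gender association: {score:.3f}")
```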
And then what we did was ask whether or not the speakers of those languages have the same gender stereotype when measured in a particular psychological task. We had a sample of more than 600,000 people with data collected by other researchers in a large crowdsourced study. The psychological task was called the Implicit Association Test (IAT). And the structure of that task was similar to the way we measured the statistical relationships between words in language. In the task, a study participant is presented with words such as “man” and “career” or “woman” and “career,” and the individual has to categorize them as belonging to the same or different categories as quickly as possible.
So that’s how people’s gender stereotypes are quantified. Critically, what we did then was compare these two measures. Speakers of languages whose statistics carry stronger gender stereotypes also have stronger gender stereotypes [themselves], as measured by the IAT. The fact that we found a strong relationship between those two is consistent with the hypothesis that the language that you’re speaking could be shaping your psychological stereotypes.
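The comparison itself is a simple cross-language correlation. A hedged sketch, with made-up placeholder numbers rather than the study’s actual values, could look like this:

```python
from scipy.stats import pearsonr

# Hypothetical per-language scores, invented for illustration only:
# an embedding-based bias score (as sketched above, but computed on each
# language's own corpus) and the mean IAT effect size for its speakers.
language_bias = {"en": 0.31, "es": 0.27, "de": 0.24, "ja": 0.35, "fi": 0.18}
iat_effect    = {"en": 0.42, "es": 0.38, "de": 0.33, "ja": 0.47, "fi": 0.29}

langs = sorted(language_bias)
r, p = pearsonr([language_bias[l] for l in langs],
                [iat_effect[l] for l in langs])
# A positive correlation across languages is the pattern the study reports.
print(f"r = {r:.2f}, p = {p:.3f}")
```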
Wasn’t there also another measure you looked at?
The second finding is that languages vary in the extent to which they use different words to describe people of different genders in professions. So in English, we do this with “waiter” and “waitress” to describe people of different genders. What we found was that languages that make more of those kinds of gender distinctions in occupations were more likely to have speakers with a stronger gender stereotype, as measured by the IAT.
Don’t some languages have these distinctions built into their grammar?
We also looked at whether or not languages that mark gender grammatically in an obligatory way—such as French or Spanish, which put a marker at the end of a word [“enfermero” (masculine) versus “enfermera” (feminine) for “nurse” in Spanish, for example]—have more gender bias. And there we didn’t find an effect.
Was that observation surprising?
It was surprising, because some prior work suggests that [the existence of a bias effect] might be the case—and so we sort of expected to find that, and we didn’t. I wouldn’t say our work is conclusive on that point. But it certainly provides one data point that suggests that [aspect of language is] not driving psychological bias.
Some of your findings about gender stereotypes had been studied in English before, hadn’t they?
What I would say is that our contribution here is to explore this question cross-linguistically and to directly compare the strength of the psychological gender bias to the strength of the statistical bias in language—the word patterns that reveal gender bias. What we did was show that there’s a systematic relationship between the strength of those two types of biases.
One of the points you make is that more work will be needed to prove a cause-and-effect relationship between languages and gender stereotypes. Can you talk about that?
I think this is really important. All of our work is correlational, and we really don’t have strong evidence for a causal claim. So I could imagine a couple of ways that we can get stronger causal evidence. One would be to look at this longitudinally: find a way to measure bias in language over time, say, over the past 100 years. Does change in the strength of language bias predict later change in people’s gender stereotypes?
A more direct way to find evidence for the causal idea would be to do experiments in which we would statistically manipulate the kind of word patterns (linguistic statistics) that a person was being exposed to—and then measure their resulting psychological gender stereotypes. And if manipulating those statistics changed people’s stereotypes, that would provide stronger evidence for this causal idea.
If it does prove to be true that some of our gender stereotypes are shaped by language, will that effect in any way impede people’s ability to change them?
I think the opposite, actually. I think this work tells us one mechanism whereby stereotypes are formed. And I think this gives us a hint of how we could possibly intervene and, ultimately, change people’s stereotypes. So I have another body of work looking at children’s books and measuring the implicit stereotypes in [those] texts. And there we find that stereotypes are even larger than the ones that we report in our paper. One promising future direction is changing which books are being read to children—or which digital media are being given to children. And that might alter the stereotypes children develop.