Are “a female” and “a male” used differently?

In the first part of this two-post series, I looked at how “a male” and “a female” were used on Twitter. I found that one part-of-speech tagger tagged “male” as a proper noun really frequently (which is weird, ’cause it isn’t one) and that overall the phrase “a female” was waaaay more frequent. Which is interesting in itself, since my initial question was “are these terms used differently?” and these findings suggest that they are. But the second question is how are these terms used differently? To answer this, we’ll need to get a little more qualitative with it.

Using the same set of tweets that I collected last time, I randomly selected 100 tweets each from the “a male” and “a female” datasets. Then I hand-tagged each subset of tweets for two things: the topic of the tweet (who or what was being referred to as “male” or “female”) and the part of speech of “male” or “female”.

Who or what is being called “male” or “female”?


Because there were so few tweets to analyze, I could do a content analysis. This is a methodology that is really useful when you don’t know for sure ahead of time what types of categories you’re going to see in your data. It’s like clustering, but done by a human.

Going into this analysis, I thought that there might be a difference between these datasets in terms of how often each term was used to refer to an animal, so I tagged tweets for that. But as I went through the tweets, I was floored by the really high number of tweets talking about trans people, especially Mack Beggs, a trans man from Texas who was forced to wrestle in the women’s division. Trans men were referred to as “a male” really, really often. While there wasn’t a reliable difference between how often “a female” and “a male” were used to refer to animals or humans, there was a huge difference in terms of how often they were used to refer to trans people. “A male” was significantly more likely to be used to describe a trans person than “a female” (χ²(2, N = 200) = 55.33, p < .001).
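For anyone who wants to see the mechanics of a test like that, here is a minimal sketch in Python. The counts in the table are invented for illustration (the real tallies came from my hand-tagged sample); only the shape of the table, a 2 × 3 contingency of phrase by referent type with N = 200, matches the analysis.

```python
# Minimal sketch of the kind of chi-square test reported above.
# The counts are invented for illustration; only the table shape
# (2 phrases x 3 referent types, N = 200) matches the real analysis.
from scipy.stats import chi2_contingency

#             animal  cis person  trans person
observed = [[   20,       55,         25],     # "a male"   (hypothetical counts)
            [   22,       73,          5]]     # "a female" (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(observed)
n = sum(map(sum, observed))
print(f"χ²({dof}, N = {n}) = {chi2:.2f}, p = {p:.3f}")
```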

Part of Speech

Since the part-of-speech taggers I used for the first half of my analysis gave me really mixed results, I also hand-tagged the part of speech of “male” or “female” in my samples. In line with my predictions during data collection, the only parts of speech I saw were nouns and adjectives.

When I looked at just the difference between nouns and adjectives, there was a little difference, but nothing dramatic. Then, I decided to break it down a little further. Rather than just looking at the differences in part of speech between “male” and “female”, I looked at the differences in part of speech and whether the tweet was about a trans person or a cis (not trans) person.


For tweets with “female”, it was used as a noun and an adjective at pretty much the same rates regardless of whether someone was talking about a trans person or a cis (non-trans) person. For tweets with “male”, though, when the tweet was about a trans person, it was used almost exclusively as a noun.

And there was a huge difference there. A large majority of tweets with “a male” and talking about a trans person used “male” as a noun. In fact, more than a third of my subsample of tweets using “a male” were using it as a noun to talk about someone who was trans.

So what’s going on here? This construction (using “male” or “female” as a noun to refer to a human) is used more often to talk about:

  1. Women. (Remember that in the first blog post looking at this, I found that “a female” is twice as common as “a male”.)
  2. Trans men.

These both make sense if you consider the cultural tendency to think about cis men as, in some sense, the “default”. (Deborah Tannen has a really good discussion of this in her article “Marked Women, Unmarked Men”. “Marked” is a linguistics term which gets used in a lot of ways, but generally means something like “not the default” or “the weird one”.) So people seem to be more likely to talk about a person being “a male” or “a female” when they’re talking about anyone but a cis man.

A note on African American English

[GIF from “The Unbreakable Kimmy Schmidt”]

I should note that many of the tweets in my sample were in African American English, which is not surprising given the large Black community on Twitter, and that use of “female” as a noun is a feature of this variety. However, the parallel term used to refer to men in this variety is not “a man” or even “a male”, but rather “nigga”, with that spelling. This is similar to “dude” or “guy”: a nonspecific term for any man, regardless of race, as discussed at length by Rachel Jeantel here. You can see an example of this usage in speech above (as seen in the Netflix show “The Unbreakable Kimmy Schmidt”) or in this vine. (I will note, however, that it only has this connotation if used by a speaker of African American English. Borrowing it into another variety, especially if the speaker is white, will change the meaning.)

Now, I’m not a native user of African American English, so I don’t have strong intuitions about the connotation of this usage. Taylor Amari Little (who you may know from her TEDx talk on Revolutionary Self-Produced Justice) is, though, and tweeted this (quoted with permission):

If they call women “females” 24/7, leave em alone chile, run away

And this does square with my own intuitions: there’s something slightly sinister about someone who refers to women exclusively as “females”. As journalist Vonny Moyes pointed out in her recent coverage of ads offering women free rent in exchange for sexual favors, they almost always refer to women as “girls or females – rarely ever women”. Personally, I find that very good motivation not to use “a male” or “a female” to talk about any human.


Can what you think you know about someone affect how you hear them?

I’ll get back to the “a male”/“a female” question in my next blog post (promise!), but for now I want to discuss some of the findings from my dissertation research. I’ve talked about my dissertation research a couple times before, but since I’m going to be presenting some of it in Spain (you can read the full paper here), I thought it would be a good time to share some of my findings.

In my dissertation, I’m looking at how what you think you know about a speaker affects what you hear them say. In particular, I’m looking at American English speakers who have just learned to correctly identify the vowels of New Zealand English. Due to an on-going vowel shift, the New Zealand English vowels are really confusing for an American English speaker, especially the vowels in words like “head” and “had”.

[Figure: vowel tokens plotted by their first and second formants]

This plot shows individual vowel tokens by the frequency of their first and second formants (high-intensity frequency bands in the vowel). Note that the New Zealand “had” is very close to the US “head”, and the New Zealand “head” is really close to the US “hid”.
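If you want to make this kind of vowel plot yourself, the recipe is just F2 plotted against F1 with both axes flipped so the layout roughly matches the mouth. Here is a small sketch; the formant values are rough, invented averages rather than my actual measurements.

```python
# Minimal F1/F2 vowel plot in the style of the figure above.
# Formant values are rough, invented averages, not real measurements.
import matplotlib.pyplot as plt

vowels = {
    "US hid":  (430, 2000),   # (F1 Hz, F2 Hz) -- all values hypothetical
    "US head": (600, 1850),
    "US had":  (770, 1780),
    "NZ head": (440, 2050),   # NZ "head" sits up near US "hid"
    "NZ had":  (590, 1880),   # NZ "had" sits near US "head"
}

for label, (f1, f2) in vowels.items():
    plt.scatter(f2, f1)
    plt.annotate(label, (f2, f1))

plt.gca().invert_xaxis()   # front vowels (high F2) on the left
plt.gca().invert_yaxis()   # high vowels (low F1) at the top
plt.xlabel("F2 (Hz)")
plt.ylabel("F1 (Hz)")
plt.show()
```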

These overlaps can be pretty confusing when American English speakers are talking to New Zealand English speakers, as this Flight of the Conchords clip shows!

The good news is that, as language users, we’re really good at learning new varieties of languages we already know, so it only takes a couple minutes for an American English speaker to learn to correctly identify New Zealand English vowels. My question was this: once an American English speaker has learned to understand the vowels of New Zealand English, how do they know when to use this new understanding?

In order to test this, I taught twenty-one American English speakers who hadn’t had much, if any, previous exposure to New Zealand English to correctly identify the vowels in the words “head”, “heed” and “had”. While I didn’t play them any examples of a New Zealand “hid”–the vowel in “hid” is said more quickly in addition to having different formants, so there’s more than one way it varies–I did let them say that they’d heard “hid”, which meant I could tell if they were making the kind of mistakes you’d expect given the overlap between a New Zealand “head” and American “hid”.

So far, so good: everyone quickly learned the New Zealand English vowels. To make sure that it wasn’t that they were learning to understand the one talker they’d been listening to, I tested half of my listeners on both American English and New Zealand English vowels spoken by a second, different talker. These folks I told where the talker they were listening to was from. And, sure enough, they transferred what they’d learned about New Zealand English to the new New Zealand speaker, while still correctly identifying vowels in American English.

The really interesting results here, though, are the ones that came from the second half of the listeners. This group I lied to. I know, I know, it wasn’t the nicest thing to do, but it was in the name of science and I did have the approval of my institutional review board (the group of people responsible for making sure we scientists aren’t doing anything unethical).

In an earlier experiment, I’d played only New Zealand English at this point, and when I told them the person they were listening to was from America, they’d completely changed the way they listened to those vowels: they labelled New Zealand English vowels as if they were from American English, even though they’d just learned the New Zealand English vowels. And that’s what I found this time, too. Listeners learned the New Zealand English vowels, but “undid” that learning if they thought the speaker was from the same dialect as them.

But what about when I played someone vowels from their own dialect, but told them the speaker was from somewhere else? In this situation, listeners ignored my lies. They didn’t apply the learning they’d just done. Instead, they correctly treated the vowels of their own dialect as if they were, in fact, from their dialect.

At first glance, this seems like something of a contradiction: I just said that listeners rely on social information about the person who’s talking, but at the same time they ignore that same social information.

So what’s going on?

I think there are two things underlying this difference. The first is the fact that vowels move. And the second is the fact that you’ve heard a heck of a lot more of your own dialect than one you’ve been listening to for fifteen minutes in a really weird training experiment.

So what do I mean when I say vowels move? Well, remember when I talked about formants above? These are areas of high acoustic energy that occur at certain frequency ranges within a vowel and they’re super important to human speech perception. But what doesn’t show up in the plot up there is that these aren’t just static across the course of the vowel–they move. You might have heard of “diphthongs” before: those are vowels where there’s a lot of formant movement over the course of the vowel.

And the way that vowels move is different between different dialects. You can see the differences in the way New Zealand and American English vowels move in the figure below. Sure, the formants are in different places—but even if you slid them around so that they overlapped, the shape of the movement would still be different.

[Figure: formant dynamics in New Zealand and American English vowels]

Comparison of how the New Zealand and American English vowels move. You can see that the shape of the movement for each vowel is really different between these two dialects.  
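To make the “shape of the movement” idea concrete, here is a small sketch of what comparing formant trajectories looks like. The trajectories below are invented, not measurements from the figure: the point is just that two dialects can start a vowel in the same place and still move differently through it.

```python
# Sketch of comparing formant movement across the course of a vowel.
# Trajectories are invented to illustrate the idea, not real measurements:
# the shapes can differ even when the starting values are the same.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 50)                      # normalized time through the vowel

dialect_a_f2 = 1800 + 200 * t                  # steady rise (hypothetical)
dialect_b_f2 = 1800 + 200 * np.sin(np.pi * t)  # rise then fall (hypothetical)

plt.plot(t, dialect_a_f2, label="Dialect A (hypothetical)")
plt.plot(t, dialect_b_f2, label="Dialect B (hypothetical)")
plt.xlabel("Normalized time in the vowel")
plt.ylabel("F2 (Hz)")
plt.legend()
plt.show()
```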

Ok, so the vowels are moving in different ways. But why are listeners doing different things between the two dialects?

Well, remember how I said earlier that you’ve heard a lot more of your own dialect than one you’ve been trained on for maybe five minutes? My hypothesis is that, for the vowels in your own dialect, you’re highly attuned to these movements. And when a scientist (me) comes along and tells you something that goes against your huge amount of experience with these shapes, even if you do believe them, you’re so used to automatically understanding these vowels that you can’t help but correctly identify them. BUT if you’ve only heard a little bit of a new dialect, you don’t have a strong idea of what these vowels should sound like, so you’re going to rely more on the other types of information available to you–like where you’re told the speaker is from–even if that information is incorrect.
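If it helps to see that trade-off written down, here is a toy way to picture it. This is purely an illustrative sketch, not a model from my dissertation: treat perception as a weighted mix of the acoustic evidence and the social information, with the weight on the acoustics growing with experience.

```python
# Toy illustration of the hypothesis above -- NOT a model from the dissertation.
# Idea: perception blends acoustic evidence with social information, and the
# weight on the acoustics grows with how much of the dialect you've heard.
def perceived_prob_nz(acoustic_evidence: float, social_evidence: float,
                      experience: float) -> float:
    """Blend two estimates of P(the vowel is the New Zealand variant).

    acoustic_evidence: support for "NZ vowel" from the sound itself (0-1)
    social_evidence:   support from where you believe the talker is from (0-1)
    experience:        how much of this dialect you've heard (0-1)
    """
    return experience * acoustic_evidence + (1 - experience) * social_evidence

# Own dialect: lots of experience, so a false "she's from New Zealand" barely matters.
print(perceived_prob_nz(acoustic_evidence=0.1, social_evidence=0.9, experience=0.95))  # 0.14

# Briefly trained dialect: little experience, so the (false) label dominates.
print(perceived_prob_nz(acoustic_evidence=0.9, social_evidence=0.1, experience=0.2))   # 0.26
```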

So, to answer the question I posed in the title, can what you think you know about someone affect how you hear them? Yes… but only if you’re a little uncertain about what you heard in the first place, perhaps because it’s a dialect you’re unfamiliar with.

“Men” vs. “Females” and sexist writing

So, I have a confession to make. I actually set out to write a completely different blog post. In searching Wikimedia Commons for a picture, though, I came across something that struck me as odd. I was looking for pictures of people writing, and I noticed that there were two gendered sub-categories, one for men and one for women. Leaving aside the question of having only two genders, what really stuck out to me were the names. The category with pictures of men was called “Men Writing” and the category with pictures of women was called “Females Writing”.

[Image: a “Family” sign]

According to this sign, the third most common gender is “child”.

So why did that bother me? It is true that male humans are men and that women are female humans. Sure, a writing professor might nag about how the two terms lack parallelism, but does it really matter?

The thing is, it wouldn’t matter if this was just a one-off thing. But it’s not. Let’s look at the Category: Males and Category: Females*. At the top of the category page for men, it states “This category is about males in general. For human males, see Category:Male humans”. And the male humans category is, conveniently, the first subcategory. Which is fine, no problem there. BUT. There is no equivalent disclaimer at the top of Category: Females, and the first subcategory is not female humans but female animals. So even though “Females” is used to refer specifically to female humans when talking about writing, when talking about females in general it looks as if at least one editor has decided that it’s more relevant for referring to female animals. And that also gels with my own intuitions. I’m more likely to ask “How many females?” when looking at a bunch of baby chickens than I am when looking at a bunch of baby humans. Assuming the editors responsible for these distinctions are also native English speakers, their intuitions are probably very similar.

So what? Well, it makes me uncomfortable to be referred to with a term that is primarily used for non-human animals while men are referred to with a term that I associate with humans. (Or, perhaps, women are being referred to as “female men”, but that’s equally odd and exclusionary.)

It took me a while to come to that conclusion. I felt that there was something off about the terminology, but I had to turn and talk it over with my officemate for a couple minutes before finally getting at the kernel of the problem. And I don’t think it’s a conscious choice on the part of the editors–it’s probably something they don’t even realize they’re doing. But I definitely do think that it’s related to the gender imbalance of the editors of Wikimedia. According to recent statistics, over ninety percent (!) of Wikipedia editors are male. And this type of sexist language use probably perpetuates that imbalance. If I feel, even if it’s for reasons that I have a hard time articulating, that I’m not welcome in a community then I’m less likely to join it. And that’s not just me. Students who are presented with job descriptions in language that doesn’t match their gender are less likely to be interested in those jobs. Women are less likely to respond to job postings if “he” is used to refer to both men and women. I could go on citing other studies, but we could end up being here all day.

My point is this: sexist language affects the behaviour and choices of those who hear it. And in this case, it makes me less likely to participate in this on-line community because I don’t feel as if I would be welcomed and respected there. It’s not only Wikipedia/Wikimedia, either. This particular usage pattern is also something I associate with Reddit (a good discussion here). The gender breakdown of Reddit? About 70% male.

For some reason, the idea that we should avoid sexist language usage seems to really bother people. I was once a TA for a large lecture class where, in the middle of discussions of the effects of sexist language, a male student interrupted the professor to say that he didn’t think it was a problem. I’ve since thought about it quite a bit (it was pretty jarring) and I’ve come to the conclusion that the reason the student felt that way is that, for him, it really wasn’t a problem. Since sexist language is almost always exclusionary to women, and he was not a woman, he had not felt that moment of discomfort before.

Further, I think that, because this type of language tends to benefit men, he may have felt that we were blaming him. I want to be clear here: I’m not blaming anyone for their unconscious biases. And I’m not saying that only men use sexist language. The Wikimedia editors who made this choice may very well have been women. What I am saying is that we need to be aware of these biases and strive to correct them. It’s hard, and it takes constant vigilance, but it’s an important and relatively simple step that we can all take in order to help eliminate sexism.

*As they were on Wednesday, April 8 2015. If they’ve been changed, I’d recommend the Way Back Machine.

Great Ideas in Linguistics: Grammaticality Judgements

Today’s Great Idea in Linguistics comes to us from syntax. One interesting difference between syntax and other fields of linguistics is what is considered compelling evidence for a theory in syntax. The aim of transformational syntax is to produce a set of rules (originally phrase structure rules) that will let you produce all the grammatical sentences in a language and none of the ungrammatical ones. So, if you’re proposing a new rule, you need to show that the sentences it outputs are grammatical… but how do you do that?

[Image]

I sentence you to ten hours of community service for ungrammatical utterances!

One way to test whether something is grammatical is to see whether someone’s said it before. Back in the day, before you had things like large searchable corpora–or, heck, even the internet–this was difficult, to say the least. Especially since the really interesting syntactic phenomena tend to be pretty rare. Lots of sentences have a subject and an object, but a lot fewer have things like wh-islands.

Another way is to see if someone will say it. This is a methodology that is often used in sociolinguistics research. The linguist interviews someone using questions that are specifically designed to elicit certain linguistic forms, like certain words or sounds. However, this methodology is chancy at best. Oftentimes the person won’t produce whatever it is you’re looking for, and it can be very hard to design questions or prompts that get at very rare forms.

Another way to see whether something is grammatical is to see whether someone would say it. This is the type of evidence that has, historically, been used most often in syntax research. The concept is straightforward. You present a speaker of a language with a possible sentence and they use their intuition as a native speaker to determine whether it’s good (“grammatical”) or not (“ungrammatical”). These sentences are often outputs of a proposed structure and used to argue either for or against it.

However, in practice grammaticality judgements can occasionally be a bit more difficult. Think about the following sentences:

  • I ate the carrot yesterday.
    • This sounds pretty good to me. I’d say it’s “grammatical”.
  • *I did ate the carrot yesterday.
    • I put a star (*) in front of this sentence because it sounds bad to me, and I don’t think anyone would say it. I’d say it’s “ungrammatical”.
  • ? I done ate the carrot yesterday.
    • This one is a little more borderline. It’s actually something I might say, but only in a very informal context and I realize that not everyone would say it.

So if you were a syntactician working on these sentences, you’d have to decide whether your model should account for the last sentence or not. One way to get around this is by building probability into the syntactic structure. So I’m more likely to use a structure that produces the first example, but there’s a small probability I might use the structure in the third example. To know what those probabilities are, however, you need to figure out how likely people are to use each of the competing structures (and whether there are other factors at play, like dialect), and for that you need lots and lots of grammaticality judgements. It’s a new use of a traditional tool that’s helping to expand our understanding of language.
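As a rough sketch of what feeding judgements into a probabilistic picture might look like (the ratings below are invented, not from any real study): collect many speakers’ judgements of the competing sentences and turn the counts into acceptance rates.

```python
# Hypothetical sketch: turning piles of grammaticality judgements
# (1 = "sounds fine", 0 = "sounds bad") into acceptance rates.
judgements = {
    "I ate the carrot yesterday.":      [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "I did ate the carrot yesterday.":  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "I done ate the carrot yesterday.": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
}

for sentence, ratings in judgements.items():
    acceptance_rate = sum(ratings) / len(ratings)
    print(f"{acceptance_rate:.0%}  {sentence}")
```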

Great ideas in linguistics: Language acquisition

Courtesy of your friendly neighbourhood rng, this week’s great idea in linguistics is… language acquisition! Or, in other words, the process of learning a language. (In this case, learning your first language when you’re a little baby, also known as L1 acquisition; second-language learning, or L2 acquisition, is a whole nother bag of rocks.) Which begs the question: why don’t we just call it language learning and call it a day? Well, unlike learning to play baseball, turn out a perfect soufflé or kick out killer DPS, learning a language seems to operate under a different set of rules. Babies don’t benefit from direct language instruction and it may actually hurt them.

In other words:

Language acquisition is a process unique to humans that allows us to learn our first language without directly being taught it.

Which doesn’t sound so ground-breaking… until you realize that that means that language use is utterly unique among human behaviours. Oh sure, we learn other things without being directly taught them, even relatively complex behaviours like swallowing and balancing. But unlike speaking, these aren’t usually under conscious control and when they are it’s usually because something’s gone wrong. Plus, as I’ve discussed before, we have the ability to be infinitely creative with language. You can learn to make a soufflé without knowing what happens when you combine the ingredients in every possible combination, but knowing a language means that you know rules that allow you to produce all possible utterances in that language.

So how does it work? Obviously, we don’t have all the answers yet, and there’s a lot of ongoing research on how children actually learn language. But we do know what it generally tends to look like, barring things like language impairment or isolation.

  1. Vocal play. The kid’s figured out that they have a mouth capable of making noise (or hands capable of making shapes and movements) and is practising it. Back in the day, people used to say that infants would make all the sounds of all the world’s languages during this stage. Subsequent research, however, suggests that even this early, children are beginning to reflect the speech patterns of people around them.
  2. Babbling. Kids will start out with very small segments of language, then repeat the same little chunk over and over again (canonical babbling), and then they’ll start to combine them in new ways (variegated babbling). In hearing babies, this tends to be syllables, hence the stereotypical “mamamama”. In Deaf babies it tends to be repeated hand motions.
  3. One word stage. By about 13 months, most children will have begun to produce isolated words. The intended content is often more than just the word itself, however. A child shouting “Dog!” at this point could mean “Give me my stuffed dog” or “I want to go see the neighbour’s terrier” or “I want a lion-shaped animal cracker” (since at this point kids are still figuring out just how many four-legged animals actually are dogs). These types of sentences-in-a-word are known as holophrases.
  4. Two word stage. By two years, most kids will have moved on to two-word phrases, combining words in a way that shows that they’re already starting to get the hang of their language’s syntax. Morphology is still pretty shaky, however: you’re not going to see a lot of tense markers or verbal agreement.
  5. Sentences. At this point, usually around age four, people outside the family can generally understand the child. They’re producing complex sentences and have gotten down most, if not all, of the sounds in their language.

These general stages of acquisition are very robust. Regardless of the language, modality or even age of acquisition, we still see these general stages. (Although older learners may never completely acquire a language due to, among other things, reduced neuroplasticity.) And the fact that they do seem to be universal is yet more evidence that language acquisition is a unique process that deserves its own field of study.

Great ideas in linguistics: Sociolinguistics

I’ll be the first to admit: for a long time, even after I’d begun my linguistics training, I didn’t really understand what sociolinguistics was. I had the idea that it mainly had to do with discourse analysis, which is certainly a fascinating area of study, but I wasn’t sure it was enough to serve as the basis for a major discipline of linguistics. Fortunately, I’ve learned a great deal about sociolinguistics since that time.

Sociolinguistics is the sub-field of linguistics that studies language in its social context and derives explanatory principles from it. By knowing about the language, we can learn something about a social reality and vice versa.

Now, at first glance this may seem so intuitive that it’s odd someone would go to the trouble of stating it directly. As social beings, we know that the behaviour of people around us is informed by their identities and affiliations. At the extreme end, it can be things like having a cultural rule that literally forbids speaking to your mother-in-law, or requires replacing the letters “ck” with “cc” in all written communication. But there are more subtle rules in place as well, rules which are just as categorical and predictable and important. And if you don’t look at what’s happening with the social situation surrounding those linguistic rules, you’re going to miss out on a lot.

Case in point: Occasionally you’ll hear phonologists talk about sound changes being in free variation, or rules that are randomly applied. BUT if you look at the social facts of the community, you’ll often find that there is no randomness at all. Instead, there are underlying social factors that control which option a person chooses as they’re speaking. For example, if you were looking at whether people in Montreal were making r-sounds with the front or back of the tongue and you just sampled a bunch of them, you might find that some people made it one way most of the time and others made it the other way most of the time. Which is interesting, sure, but doesn’t have a lot of explanatory power.

However, if you also looked at the social factors associated with it, and the characteristics of the individuals who used each r-sound, you might notice something interesting, as Clermont and Cedergren did (see the illustration). They found that younger speakers preferred the back-of-the-mouth r-sound, while older people tended to use the tip of the tongue instead. And that has a lot more explanatory power. Now we can start asking questions to get at the forces underlying that pattern: Is this the way the younger people have always talked, i.e. some sort of established youthful style, or is there a language change going on and the newer form is going to slowly take over? What causes younger speakers to use the form they do? Is there also an effect of gender, or of who you hang out with?


Figure 1 from Sankoff and Blondeau (2007). As you can see, younger speakers are using [R] more than older speakers, and the younger a speaker is, the more likely they are to use [R].
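For the curious, the kind of tabulation behind a figure like this is very simple. Here is a sketch with invented tokens (not Sankoff and Blondeau’s data): code each r-token for the speaker’s age group and the variant used, then compute the rate of [R] per group.

```python
# Sketch of the tabulation behind a figure like the one above.
# Tokens are invented; each is (age_group, variant), where "R" is the
# back-of-the-mouth r-sound and "r" is the tip-of-the-tongue one.
from collections import Counter

tokens = [
    ("15-19", "R"), ("15-19", "R"), ("15-19", "r"), ("15-19", "R"),
    ("35-39", "R"), ("35-39", "r"), ("35-39", "r"),
    ("55+",   "r"), ("55+",   "r"), ("55+",   "R"), ("55+",   "r"),
]

counts = Counter(tokens)
for age_group in ("15-19", "35-39", "55+"):
    total = counts[(age_group, "R")] + counts[(age_group, "r")]
    rate = counts[(age_group, "R")] / total
    print(f"{age_group}: {rate:.0%} [R]")
```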

And that’s why sociolinguistics is all kinds of awesome. It lets us peel away and reveal some of the complexity surrounding language. By adding sociological data to our studies, we can help to reduce statistical noise and reveal new and interesting things about how language works, what it means to be a language-user, and why we do what we do.

Are television and mass media destroying regional accents?

One of the occupational hazards of linguistics is that you are often presented with spurious claims about language that are relatively easy to quantifiably disprove. I think this is probably partly due to the fact that there are multiple definitions of ‘linguist’. As a result, people tend to equate mastery of a language with explicit knowledge of its workings. Which, on the one hand, is reasonable. If you know French, the idea is that you know how to speak French, but also how it works. And, in general, that isn’t the case. Partly because most language instruction is light on discussions of grammatical structures–reasonably so; I personally find inductive grammar instruction significantly more helpful, though the research is mixed–and partly because, frankly, there’s a lot that even linguists don’t know about how grammar works. Language is incredibly complex, and we’ve only begun to explore and map out that complexity. But there are a few things we are reasonably certain we know. And one of those is that your media consumption does not “erase” your regional dialect [pdf]. The premise is flawed enough that it begins to collapse under its own weight almost immediately. Even the most dedicated American fans of Dr. Who or Downton Abbey or Sherlock don’t slowly develop British accents.

[Image: Christopher Eccleston]

Lots of planets have a North with a distinct accent that is not being destroyed by mass media.

So why is this myth so persistent? I think that the most likely answer is that it is easy to mischaracterize what we see on television and to misinterpret what it means. Standard American English (SAE), what newscasters tend to use, is a dialect. It’s not just a certain set of vowels but an entire, internally consistent grammatical system. (Failing to recognize that dialects are more than just adding a couple of really noticeable sounds or grammatical structures is why some actors fail so badly at trying to portray a dialect they don’t use regularly.) And not only is it a dialect, it’s a very prestigious dialect. Not only do newscasters make use of it, but so do political figures, celebrities, and pretty much anyone who has a lot of social status. From a linguistic perspective, SAE is no better or worse than any other dialect. From a social perspective, however, SAE has more social capital than most other dialects. That means that being able to speak it, and speak it well, can give you opportunities that you might not otherwise have had access to. For example, speakers of Southern American English are often characterized as less intelligent and educated. And those speakers are very aware of that fact, as illustrated in this excerpt from the truly excellent PBS series Do You Speak American:

ROBERT:

Do you think northern people think southerners are stupid because of the way they talk?

JEFF FOXWORTHY:

Yes I think so and I think Southerners really don’t care that Northern people think that eh. You know I mean some of the, the most intelligent people I’ve ever known talk like I do. In fact I used to do a joke about that, about you know the Southern accent, I said nobody wants to hear their brain surgeon say, ‘Al’ight now what we’re gonna do is, saw the top of your head off, root around in there with a stick and see if we can’t find that dad burn clot.’

So we have pressure from both sides: there are intrinsic social rewards for speaking SAE, and also social consequences for speaking other dialects. There are also plenty of linguistic role-models available through the media, from many different backgrounds, all using SAE. If you consider these facts alone it seems pretty easy to draw the conclusion that regional dialects in America are slowly being replaced by a prestigious, homogeneous dialect.

Except that’s not what’s happening at all. Some regional dialects of American English are actually becoming more, rather than less, prominent. On the surface, this seems completely contradictory. So what’s driving this process, since it seems to be contradicting general societal pressure? The answer is that there are two sorts of pressure. One, the pressure from media, is to adopt the formal, standard style. The other, the pressure from family, friends and peers, is to retain and use features that mark you as part of your social network. Giles, Taylor and Bourhis showed that identification with a certain social group–in their case Welsh identity–encourages and exaggerates Welsh features. And being exposed to a standard dialect that is presented as being in opposition to a local dialect will actually increase that effect. Social identity is constructed through opposition to other social groups. To draw an example from American politics, many Democrats define themselves as “not Republicans” and as in opposition to various facets of “Republican-ness”. And vice versa.

Now, the really interesting thing is this: television can have an effect on speakers’ dialectal features. But that effect tends to be away from, rather than towards, the standard. For example, some Glaswegian English speakers have begun to adopt features of Cockney English based on their personal affiliation with the show Eastenders. In light of what I discussed above, this makes sense. Those speakers who had adopted the features are of a similar social and socio-economic status as the characters in Eastenders. Furthermore, their social networks value the characters who are shown using those features, even though they are not standard. (British English places a much higher value on certain sounds and sound systems as standard. In America, even speakers with very different sound systems, e.g. Bill Clinton and George W. Bush, can still be considered standard.) Again, we see retention and re-invigoration of features that are not standard through a construction of opposition. In other words, people choose how they want to sound based on who they want to be seen as. And while, for some people, this means moving towards using more SAE, for others it means moving away from the standard.

One final note: Another factor which I think contributes to the idea that television is destroying accents is the odd idea that we all only have one dialect, and that it’s possible to “lose” it. This is patently untrue. Many people (myself included) have command of more than one dialect and can switch between them when it’s socially appropriate, or blend features from them for a particular rhetorical effect. And that includes people who generally use SAE. Oprah, for example, will often incorporate more features of African American English when speaking to an African American guest. The bottom line is that television and mass media can be a force for linguistic change, but they’re hardly the great homogenizer they’re often claimed to be.

For other things I’ve written about accents and dialects, I’d recommend:

  1. Why do people have accents?
  2. Ask vs. Aks
  3. Coke vs. Soda vs. Pop

The Science of Speaking in Tongues

So I was recently talking with one of my friends, and she asked me what linguists know about speaking in tongues (or glossolalia, which is the fancy linguistical term for it). It’s not a super well-studied phenomenon, but there has been enough research done that we’ve reached some pretty confident conclusions, which I’ll outline below.

[Image]

More like speaking around tongues, in this guy’s case.

  • People don’t tend to use sounds that aren’t in their native language. (citation) So if you’re an English speaker, you’re not going to bust out some Norwegian vowels. This rather lets the air out of the theory that individuals engaged in glossolalia are actually speaking another language. It is more like playing alphabet soup with the sounds you already know. (Although not always all the sounds you know. My instinct is that glossolalia is made up predominately of the sounds that are the most common in the person’s language.)
  • It lacks the structure of language. (citation) So one of the core ideas of linguistics, which has been supported again and again by hundreds of years of inquiry, is that there are systems and patterns underlying language use: sentences are usually constructed of some sort of verb-like thing and some sort of noun-like thing or things, and it’s usually something on the verb that tells you when and it’s usually something on the noun that tells you things like who possessed what. But these patterns don’t appear in glossolalia. Plus, of course, there’s not really any meaningful content being transmitted. (In fact, the “language” being unintelligible to others present is one of the markers that’s often used to identify glossolalia.) It may sort of smell like a duck, but it doesn’t have any feathers, won’t quack and when we tried to put it in water it just sort of dissolved, so we’ve come to the conclusion that it is not, in fact, a duck.
  • It’s associated with a dissociative psychological state. (citation) Basically, this means that speakers are aware of what they’re doing, but don’t really feel like they’re the ones doing it. In glossolalia, the state seems to come and then pass on, leaving speakers relatively psychologically unaffected. Dissociation can be problematic, though; if it’s particularly extreme and long-term it can be characterized as multiple personality disorder.
  • It’s a learned behaviour. (citation) Basically, you only see glossolalia in cultures where it’s culturally expected and only in situations where it’s culturally appropriate. In fact, during her fieldwork, Dr. Goodman (see the citation) actually observed new initiates into a religious group being explicitly instructed in how to enter a dissociative state and engage in glossolalia.

So glossolalia may seem language-like, but from a linguistic standpoint it doesn’t actually seem to be language. (Which is probably why there hasn’t been that much research done on it.) It’s vocalization that arises as the result of a learned psychological state that lacks linguistic systematicity.

The Acoustic Theory of Speech Perception

So, quick review: understanding speech is hard to model and the first model we discussed, motor theory, while it does address some problems, leaves something to be desired. The big one is that it doesn’t suggest that the main fodder for perception is the acoustic speech signal. And that strikes me as odd. I mean, we’re really used to thinking about hearing speech as an audio-only thing. Telephones and radios work perfectly well, after all, and the information you’re getting there is completely audio. That’s not to say that we don’t use visual, or, heck, even tactile data in speech perception. The McGurk effect, where a voice saying “ba” dubbed over someone saying “ga” will be perceived as “da” or “tha”, is strong evidence that we can and do use our eyes during speech perception. And there’s even evidence that a puff of air on the skin will change our perception of speech sounds. But we seem to be able to get along perfectly well without these extra sensory inputs, relying on acoustic data alone.

[Image: sound waves]

This theory sounds good to me. Sorry, I’ll stop.

Ok, so… how do we extract information from acoustic data? Well, like I’ve said a couple times before, it’s actually a pretty complex problem. There’s no such thing as “invariance” in the speech signal and that makes speech recognition monumentally hard. We tend not to think about it because humans are really, really good at figuring out what people are saying, but it’s really very, very complex.

You can think about it like this: imagine that you’re looking for information online about platypuses. Except, for some reason, there is no standard spelling of platypus. People spell it “platipus”, “pladdypuss”, “plaidypus”, “plaeddypus” or any of thirty or forty other variations. Even worse, one person will use many different spellings and may never spell it precisely the same way twice. Now, a search engine that worked like our speech recognition works would not only find every instance of the word platypus–regardless of how it was spelled–but would also recognize that every spelling referred to the same animal. Pretty impressive, huh? Now imagine that every word has a very variable spelling, oh, and there are no spaces between words–everythingisjustruntogetherlikethisinonelongspeechstream. Still not difficult enough for you? Well, there is also the fact that there are ambiguities. The search algorithm would need to treat “pladypuss” (in the sense of a plaid-patterned cat) and “palattypus” (in the sense of the venomous monotreme) as separate things. Ok, ok, you’re right, it still seems pretty solvable. So let’s add the stipulation that the program needs to be self-training and have an accuracy rate that’s incredibly close to 100%. If you can build a program to these specifications, congratulations: you’ve just revolutionized speech recognition technology. But we already have a working example of a system that looks a heck of a lot like this: the human brain.

So how does the brain deal with the “different spellings” when we say words? Well, it turns out that there are certain parts of a word that are pretty static, even if a lot of other things move around. It’s like a superhero reboot: Spiderman is still going to be Peter Parker and get bitten by a spider at some point and then get all moody and whine for a while. A lot of other things might change, but if you’re only looking for those criteria to figure out whether or not you’re reading a Spiderman comic you have a pretty good chance of getting it right. Those parts that are relatively stable and easy to look for we call “cues”. Since they’re cues in the acoustic signal, we can be even more specific and call them “acoustic cues”.

If you think of words (or maybe sounds, it’s a point of some contention) as being made up of certain cues, then it’s basically like a list of things a house-buyer is looking for in a house. If a house has all, or at least most, of the things they’re looking for, then it’s probably the right house and they’ll select that one. In the same way, having a lot of cues pointing towards a specific word makes it really likely that that word is going to be selected. When I say “selected”, I mean that the brain will connect the acoustic signal it just heard to the knowledge you have about a specific thing or concept in your head. We can think of a “word” as both this knowledge and the acoustic representation. So in the “platypuss” example above, all the spellings started with “p” and had an “l” no more than one letter away. That looks like a pretty robust cue. And all of the words had a second “p” in them and ended with one or two tokens of “s”. So that also looks like a pretty robust cue. Add to that the fact that all the spellings had at least one of either a “d” or “t” in between the first and second “p” and you have a pretty strong template that would help you to correctly identify all those spellings as being the same word.
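Just to make the “template of cues” idea concrete, here is a little sketch of my own (not anything from the speech perception literature): the cues listed above can be written down as a single pattern that picks out every one of those variant spellings.

```python
# My own illustration of the "template of cues" idea: the cues described above
# (a "p" with an "l" at most one letter away, a "d" or "t" before a second "p",
# and an ending in one or two "s"s) become one pattern that matches every variant.
import re

cue_template = re.compile(r"^p.?l.*[dt].*p.*s{1,2}$")

spellings = ["platypus", "platipus", "platypuss", "pladdypuss",
             "plaidypus", "plaeddypus", "pladypuss", "palattypus"]

for spelling in spellings:
    print(spelling, bool(cue_template.match(spelling)))   # True for every spelling
```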

Which all seems to be well and good and fits pretty well with our intuitions (or mine at any rate). But that leaves us with a bit of a problem: those pesky parts of Motor Theory that are really strongly experimentally supported. And this model works just as well for motor theory too, just replace the “letters” with specific gestures rather than acoustic cues. There seems to be more to the story than either the acoustic model or the motor theory model can offer us, though both have led to useful insights.

The Motor Theory of Speech Perception

Ok, so like I talked about in my previous two posts, modelling speech perception is an ongoing problem with a lot of hurdles left to jump. But there are potential candidate theories out there, all of which offer good insight into the problem. The first one I’m going to talk about is motor theory.

[Image: an electric motor]

So your tongue is like the motor body and the other person’s ear is like the load cell…

So motor theory has one basic premise and three major claims. The basic premise is a keen observation: we don’t just perceive speech sounds, we also make them. Whoa, stop the presses. Ok, so maybe it seems really obvious, but motor theory was really the first major attempt to model speech perception that took this into account. Up until it was first posited in the 1960s, people had pretty much been ignoring that and treating speech perception like the only information listeners had access to was what was in the acoustic speech signal. We’ll discuss that in greater detail later, but it’s still pretty much the way a lot of people approach the problem. I don’t know of a piece of voice recognition software, for example, that includes an anatomical model.

So what does the fact that listeners are listener/speakers get you? Well, remember how there aren’t really invariant units in the speech signal? Well, if you decide that what people are actually perceiving aren’t a collection of acoustic markers that point to one particular language sound but instead the gestures needed to make up that sound, then suddenly that’s much less of a problem. To put it another way, we’re used to thinking of speech being made up of a bunch of sounds, and that when we’re listening to speech we’re deciding what the right sounds are and from there picking the right words. But from a motor theory standpoint, what you’re actually doing when you’re listening to speech is deciding what the speaker’s doing with their mouth and using that information to figure out what words they’re saying. So in the dictionary in your head, you don’t store words as strings of sounds but rather as strings of gestures.

If you’re like me when I first encountered this theory, it’s about this time that you’re starting to get pretty skeptical. I mean, I basically just said that what you’re hearing is the actual movement of someone else’s tongue and figuring out what they’re saying by reverse engineering it based on what you know your tongue is doing when you say the same word. (Just FYI, when I say tongue here, I’m referring to the entire vocal tract in its multifaceted glory, but that’s a bit of a mouthful. Pun intended. 😉 ) I mean, yeah, if we accept this it gives us a big advantage when we’re talking about language acquisition–since if you’re listening to gestures, you can learn them just by listening–but still. It’s weird. I’m going to need some convincing.

Well, let’s get back to those three principles I mentioned earlier, which are taken from Galantucci, Fowler and Turvey’s excellent review of motor theory.

  1. Speech is a weird thing to perceive and pretty much does its own thing. I’ve talked about this at length, so let’s just take that as a given for now.
  2. When we’re listening to speech, we’re actually listening to gestures. We talked about that above. 
  3. We use our motor system to help us perceive speech.

Ok, so point three should jump out at you a bit. Why? Of these three points, it’s the easiest one to test empirically. And since I’m a huge fan of empirically testing things (Science! Data! Statistics!) we can look into the literature and see if there’s anything that supports this. Like, for example, a study that shows that when listening to speech, our motor cortex gets all involved. Well, it turns out that there are lots of studies that show this. You know that term “active listening”? There’s pretty strong evidence that it’s more than just a metaphor; listening to speech involves our motor system in ways that not all acoustic inputs do.

So point three is pretty well supported. What does that mean for point two? It really depends on who you’re talking to. (Science is all about arguing about things, after all.) Personally, I think motor theory is really interesting and addresses a lot of the problems we face in trying to model speech perception. But I’m not ready to swallow it hook, line and sinker. I think Robert Remez put it best in the proceedings of Modularity and The Motor Theory of Speech Perception:

I think it is clear that Motor Theory is false. For the other, I think the evidence indicates no less that Motor Theory is essentially, fundamentally, primarily and basically true. (p. 179)

On the one hand, it’s clear that our motor system is involved in speech perception. On the other, I really do think that we use parts of the acoustic signal in and of themselves. But we’ll get into that in more depth next week.