Who all studies language? 🤔 A brief disciplinary tour

red and yellow bus photo
Buckle up friends, we’re going on a tour!

One of the nice things about human language is that no matter what your question about it might be, someone, somewhere has almost certainly already asked the same thing… and probably found at least part of an answer! The downside of this wealth of knowledge is that, even if you restrict yourself to just looking at the Western academic tradition, 1) there’s a lot of it and 2) it’s scattered across a lot of disciplines which can make it very hard to find.

An academic discipline is a field of study but also a social network of scholars with shared norms and vocabulary. While people do do “interdisciplinary” work that draws on more than one discipline, the majority of academic life is structured around working in a single discipline. This is reflected in everything from departments to journals and conferences to how research funding is divided.

As a result, even if you study human language in some capacity yourself, it can be very hard to form a good idea of where else people are doing related work if it falls into a discipline you don’t have contact with. You won’t see them at your conferences, you probably won’t cite each other in your papers, and even if you’re studying the exact same thing you’ll probably use different words to describe it and have different research goals. The upshot is that even many researchers working on language may not know what’s happening in the discipline next door.

For better or worse, though, I’ve always been very curious about disciplinary boundaries and have talked to and read a lot of folks in other fields and, as a result, have ended up learning a lot about different disciplines. (Note: I don’t know that I’d recommend this to other junior scholars. It made me a bit of a “neither fish nor fowl” when I was on the faculty job market. I did have fun though. 😉) The upside of this is that I’ve had at least three discussions with people where the gist was “here are the academic fields that are relevant to your interest”, so I figured it was time to write it up as a blog post to save myself some time in the future.

Disciplines where language is the main focus

These fields study language itself. While people working in these fields may use different tools and have different goals, these are fields where people are likely to say that language is their area of study.

Linguistics

This is the field that studies Language and how it works. Sometimes you’ll hear people talk about “capital L Language” to distinguish it from the study of a specific language. Whatever tools, methods or theories linguists use, their main object of study is language itself. There are a lot of fields within linguistics and they vary a lot, but generally if a field has “linguistics” on the end, it’s going to be focusing on language itself.

For more information about linguistics, check out the Linguistic Society of America or my friend Gretchen’s blog.

Language-specific disciplines (classics, English, literature, foreign language departments etc.)

This is a collection of disciplines that study particular languages and specific instances of language use (like specific documents or pieces of oral literature). These fields generally focus on language teaching or applying frameworks like critical theory to better understand texts. Oh, or they produce new texts themselves. If you ask someone in one of these fields what they study, they’ll probably say the name of the specific language or family of languages they work on.

There are a lot of different fields that fall under this umbrella, so I’d recommend searching for “[whatever language you want to know about] studies” and taking it from there.

Speech language pathology/Audiology/Speech and hearing

I’m grouping these disciplines together because they generally focus on language in a medical context. The main focus of researchers in these fields is studying how the human body produces and receives language input. A lot of the work here focuses on identifying and treating instances where these processes break down.

A good place to learn more is the American Speech-Language-Hearing Association.

Computer science (Specifically natural language processing, computational linguistics)

This field (more likely to be called NLP these days) focuses on building and understanding computational systems where language data, usually text, is part of either the input or the output. Currently the main focus of the field (in terms of press coverage and $$ at any rate) is applying machine learning methods to various problems. A lot of work in NLP is organized around particular tasks, each with an associated dataset and shared metric, where the aim is to outperform other systems on the same problem. NLP does borrow some methods from other areas of machine learning (like computer vision), but the majority of the work uses techniques specific to, or at least developed for, language data.

To learn more, I’d check out the Association for Computational Linguistics. (Note that “NLP” is also an acronym for a pseudoscience thing, so I’d recommend searching #NLProc or “Natural Language Processing” instead.)

For reference, I would say that my main field is currently applied NLP, but my background is primarily in linguistics with a sprinkling of language-specific studies, especially English and American Sign Language. (Although I’ve taken coursework and been a co-author on papers in speech & hearing.)

Disciplines where language is sometimes studied

There are also a lot of related fields where language data is used, or language is used as a tool to study a different object of inquiry.

  • Data Science. You would be shocked how much of data science is working with text data (or maybe you’re a data scientist and you wouldn’t be). Pretty much every organization has some sort of text they would like to learn about without having to read it all.
  • Computational social science, which uses language data but also frequently other types of data produced by human interaction with computational systems. The aim is usually to model or understand society rather than language use.
  • Anthropology, where language data is often used to better understand humans. (As a note, early British anthropology in particular is straight up racist imperial apologism, so be ye warned. There have been massive changes in the field, thankfully.) A lot of language documentation used to happen in anthropology departments, although these days I think it tends to be more linguistics. The linguistic-focused subdisciplines are anthropological linguistics or linguistic anthropology (they’re slightly different).
  • Sociology, the study of society. Sociolinguistics is more sociologically-informed linguistics, and in the US historically has been slightly more macro focused.
  • Psychology/Cognitive science. Non-physical brain stuff, like the mind and behavior. The linguistic part is psycholinguistics. This is where a lot of the work on language learning goes on.
  • Neurology. Physical brain stuff. The linguistic part is neurolinguistics. They tend to do a lot of imaging.
  • Education. A lot of the literature on language learning is in education. (Language learning is not to be confused with language acquisition; that’s only for the process by which children naturally acquire a language without formal instruction.)
  • Electrical engineering (Signal processing). This is generally the field of folks who are working on telephony and automatic speech recognition. NLP historically hasn’t done as much with voices, that’s been in electrical engineering/signal processing.
  • Disability studies. A lot of work on signed languages will be in disability studies departments if they don’t have their own department.
  • Historians. While they aren’t primarily studying the changes in linguistic systems, historians interact with older language data a lot and provide context for things like language contact, shift and historical usage.
  • Informatics/information science/library science. Information science is broader than linguistics (including non-linguistic information as well) but often dovetails with it, especially in semantics (the study of meaning) and ontologies (formal representations of categories and their relations).
  • Information theory. This field is superficially focused on how digital information is encoded. Usually linguistics draws from it rather than vice-versa because it’s lower level, but if you’ve heard of entropy, compression or source-channel theory those are all from information theory.
  • Philosophy. A lot of early linguistics scholars, like Ferdinand de Saussure, would probably have considered themselves primarily philosophers and there was this whole big thing in the early 1900’s. The language-specific branch is philosophy of language.
  • Semiotics. This is a field I haven’t interacted with too much (I get the impression that it’s more popular in Europe than the US) but they study “signs”, which as I understand it is any way of referring to a thing in any medium without using the actual thing, which by that definition does include language.
  • Design studies. Another field I’m not super familiar with, but my understanding is that it includes studying how users of a designed thing interact with it, which may include how they use or interpret language. Also: good design is so important and I really don’t think designers get enough credit/kudos.
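
Since entropy comes up in the information-theory bullet above, here’s a minimal sketch of how Shannon entropy is computed over a text sample. The code and example strings are my own illustration, not from any particular linguistics toolkit:

```python
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of a string."""
    counts = Counter(text)
    total = len(text)
    # Sum of p * -log2(p) over each character's relative frequency p.
    return sum((n / total) * -log2(n / total) for n in counts.values())

# A maximally predictable "text" carries zero information per character...
print(entropy("aaaa"))  # 0.0
# ...while four equally likely characters need two bits each.
print(entropy("abcd"))  # 2.0
```

Compression works by spending fewer bits on more predictable (lower-entropy) material, which is why the entropy of language data puts a floor on how far it can be squeezed.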


Dance Your PhD: Modeling the Perceptual Learning of Novel Dialect Features

Today’s blog post is a bit different. It’s in dance!

If that wasn’t quite clear enough for you, you can check this blog post for a more detailed explanation.

Can what you think you know about someone affect how you hear them?

I’ll get back to “a male/a female” question in my next blog post (promise!), but for now I want to discuss some of the findings from my dissertation research. I’ve talked about my dissertation research a couple times before, but since I’m going to be presenting some of it in Spain (you can read the full paper here), I thought it would be a good time to share some of my findings.

In my dissertation, I’m looking at how what you think you know about a speaker affects what you hear them say. In particular, I’m looking at American English speakers who have just learned to correctly identify the vowels of New Zealand English. Due to an on-going vowel shift, the New Zealand English vowels are really confusing for an American English speaker, especially the vowels in the words “hid”, “head” and “had”.

tokensVowelPlot
This plot shows individual vowel tokens by the frequency of their first and second formants (high-intensity frequency bands in the vowel). Note that the New Zealand “had” is very close to the US “head”, and the New Zealand “head” is really close to the US “hid”.

These overlaps can be pretty confusing when American English speakers are talking to New Zealand English speakers, as this Flight of the Conchords clip shows!
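
To make the overlap concrete, here’s a toy nearest-centroid sketch in Python. All of the (F1, F2) values are invented for illustration (real formant measurements vary a lot by speaker), but they mirror the pattern in the plot above: a raised New Zealand “head” token lands closest to the American “hid” category.

```python
from math import dist  # Euclidean distance, Python 3.8+

# Invented (F1, F2) centroids in Hz for an American English listener's
# vowel categories. Illustrative only; not measured data.
us_vowels = {
    "heed": (340, 2300),
    "hid":  (430, 2000),
    "head": (580, 1800),
    "had":  (690, 1660),
}

def classify(token, categories):
    """Label an (F1, F2) token with the nearest vowel category."""
    return min(categories, key=lambda v: dist(token, categories[v]))

# An invented New Zealand "head" token: raised, so acoustically it sits
# right on top of the US "hid" category.
nz_head = (450, 1980)
print(classify(nz_head, us_vowels))  # hid
```

This is roughly the situation a naive American listener is in before training: the acoustics alone point them at the wrong label.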

The good news is that, as language users, we’re really good at learning new varieties of languages we already know, so it only takes a couple minutes for an American English speaker to learn to correctly identify New Zealand English vowels. My question was this: once an American English speaker has learned to understand the vowels of New Zealand English, how do they know when to use this new understanding?

In order to test this, I taught twenty-one American English speakers who hadn’t had much, if any, previous exposure to New Zealand English to correctly identify the vowels in the words “head”, “heed” and “had”. While I didn’t play them any examples of a New Zealand “hid”–the vowel in “hid” is said more quickly in addition to having different formants, so there’s more than one way it varies–I did let them say that they’d heard “hid”, which meant I could tell if they were making the kind of mistakes you’d expect given the overlap between a New Zealand “head” and an American “hid”.

So far, so good: everyone quickly learned the New Zealand English vowels. To make sure that it wasn’t that they were learning to understand the one talker they’d been listening to, I tested half of my listeners on both American English and New Zealand English vowels spoken by a second, different talker. These folks I told where the talker they were listening to was from. And, sure enough, they transferred what they’d learned about New Zealand English to the new New Zealand speaker, while still correctly identifying vowels in American English.

The really interesting results here, though, are the ones that came from the second half of the listeners. This group I lied to. I know, I know, it wasn’t the nicest thing to do, but it was in the name of science and I did have the approval of my institutional review board (the group of people responsible for making sure we scientists aren’t doing anything unethical).

In an earlier experiment, I’d played only New Zealand English at this point, and when I told listeners the person they were listening to was from America, they completely changed the way they listened to those vowels: they labelled New Zealand English vowels as if they were from American English, even though they’d just learned the New Zealand English vowels. And that’s what I found this time, too. Listeners learned the New Zealand English vowels, but “undid” that learning if they thought the speaker was from the same dialect as them.

But what about when I played someone vowels from their own dialect, but told them the speaker was from somewhere else? In this situation, listeners ignored my lies. They didn’t apply the learning they’d just done. Instead, they correctly treated the vowels of their own dialect as if they were, in fact, from their dialect.

At first glance, this seems like something of a contradiction: I just said that listeners rely on social information about the person who’s talking, but at the same time they ignore that same social information.

So what’s going on?

I think there are two things underlying this difference. The first is the fact that vowels move. And the second is the fact that you’ve heard a heck of a lot more of your own dialect than one you’ve been listening to for fifteen minutes in a really weird training experiment.

So what do I mean when I say vowels move? Well, remember when I talked about formants above? These are areas of high acoustic energy that occur at certain frequency ranges within a vowel and they’re super important to human speech perception. But what doesn’t show up in the plot up there is that these aren’t just static across the course of the vowel–they move. You might have heard of “diphthongs” before: those are vowels where there’s a lot of formant movement over the course of the vowel.

And the way that vowels move is different between different dialects. You can see the differences in the way New Zealand and American English vowels move in the figure below. Sure, the formants are in different places—but even if you slid them around so that they overlapped, the shape of the movement would still be different.

formantDynamics
Comparison of how the New Zealand and American English vowels move. You can see that the shape of the movement for each vowel is really different between these two dialects.  

Ok, so the vowels are moving in different ways. But why are listeners doing different things between the two dialects?

Well, remember how I said earlier that you’ve heard a lot more of your own dialect than one you’ve been trained on for maybe five minutes? My hypothesis is that, for the vowels in your own dialect, you’re highly attuned to these movements. And when a scientist (me) comes along and tells you something that goes against your huge amount of experience with these shapes, even if you do believe them, you’re so used to automatically understanding these vowels that you can’t help but correctly identify them. BUT if you’ve only heard a little bit of a new dialect, you don’t have a strong idea of what these vowels should sound like, so you’re going to rely more on the other types of information available to you–like where you’re told the speaker is from–even if that information is incorrect.
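
One way to picture that trade-off (this is just a toy sketch of my hypothesis, not the actual model from my dissertation) is as a choice between trusting your ears and trusting the label, with your amount of exposure to the dialect deciding which wins:

```python
def perceived_dialect(acoustics_say: str, label_says: str, exposure: float) -> str:
    """Toy cue combination: `exposure` (0 to 1) is how much experience the
    listener has with the dialect the acoustics actually match. Lots of
    exposure -> trust the acoustics; very little -> trust the label."""
    return acoustics_say if exposure >= 0.5 else label_says

# Own dialect, heard for a lifetime: the (false) "NZ" label gets ignored.
print(perceived_dialect("US", label_says="NZ", exposure=0.95))  # US
# Dialect learned fifteen minutes ago: the (false) "US" label wins.
print(perceived_dialect("NZ", label_says="US", exposure=0.05))  # US
```

A hard 0.5 threshold is obviously a caricature; a weighted probabilistic blend of the two cues would be closer to how cue integration is usually modelled, but the asymmetry is the point.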

So, to answer the question I posed in the title, can what you think you know about someone affect how you hear them? Yes… but only if you’re a little uncertain about what you heard in the first place, perhaps because it’s a dialect you’re unfamiliar with.

Six Linguists of Color (who you can follow on Twitter!)

In the light of some recent white supremacist propaganda showing up on my campus, I’ve decided to spotlight a tiny bit of the amazing work being done around the country by linguists of color. Each of the scholars below is doing interesting, important linguistics research and has a Twitter account that I personally enjoy following. If you’re on this blog, you probably will as well! I’ll give you a quick intro to their research and, if it piques your interest, you can follow them on Twitter for all the latest updates.

(BTW, if you’re wondering why I haven’t included any grad students on this list, it’s because we generally don’t have as well-developed a research trajectory and I want this to be a useful resource for at least a few years.)

Montreal Twestival 2009 Cupcakes (3920802507)
These are some pretty sweet Twitter accounts! (Too much of a stretch?)
 

Anne Charity Hudley

Dr. Charity Hudley is a professor at the College of William and Mary (Go Tribe!). Her research focuses on language variation, especially the use of varieties such as African American English, in the classroom. If you know any teachers, they might find her two books on language variation in the classroom a useful resource. She and Christine Mallinson have even released an app to go with them!

Michel DeGraff

Dr. Michel DeGraff is a professor at MIT. His research is on Haitian Creole, and he’s been very active in advocating for the official recognition of Haitian Creole as a distinct language. If you’re not sure what Haitian Creole looks like, go check out his Twitter; many of his tweets are in the language! He’s also done some really cool work on using technology to teach low-resource languages.

Nelson Flores

Dr. Nelson Flores is a professor at the University of Pennsylvania. His work focuses on how we create the ideas of race and language, as well as bilingualism/multilingualism and bilingual education. I really enjoy his thought-provoking discussions of recent events on his Twitter account. He also runs a blog, which is a good resource for more in-depth discussion.

Nicole Holliday

Dr. Nicole Holliday is (at the moment) Chau Mellon Postdoctoral Scholar at Pomona College. Her research focuses on language use by biracial speakers. I saw her talk on how speakers use pitch differently depending on who they’re talking to at last year’s LSA meeting and it was fantastic: I’m really looking forward to seeing her future work! She’s also a contributor to Word., an online journal about African American English.

Rupal Patel

Dr. Rupal Patel is a professor at Northeastern University, and also the founder and CEO of VocaliD. Her research focuses on the speech of speakers with developmental disabilities, and how technology can ease communication for them. One really cool project she’s working on that you can get involved with is The Human Voicebank. This is a collection of voices from all over the world that is used to make custom synthetic voices for those who need them for day-to-day communication. If you’ve got a microphone and a quiet room you can help out by recording and donating your voice.

John R. Rickford

Last, but definitely not least, is Dr. John Rickford, a professor at Stanford. If you’ve taken any linguistics courses, you’re probably already familiar with his work. He’s one of the leading scholars working on African American English and was crucial in bringing research-based evidence to bear on the Ebonics controversy. If you’re interested, he’s also written a non-academic book on African American English that I would really highly recommend; it even won the American Book Award!

Great ideas in linguistics: Language acquisition

Courtesy of your friendly neighbourhood rng, this week’s great idea in linguistics is… language acquisition! Or, in other words, the process of learning a language. (In this case, learning your first language when you’re a little baby, also known as L1 acquisition; second-language learning, or L2 acquisition, is a whole nother bag of rocks.) Which raises the question: why don’t we just call it language learning and call it a day? Well, unlike learning to play baseball, turn out a perfect soufflé or kick out killer DPS, learning a language seems to operate under a different set of rules. Babies don’t benefit from direct language instruction, and it may actually hurt them.

In other words:

Language acquisition is a process unique to humans that allows us to learn our first language without being directly taught it.

Which doesn’t sound so ground-breaking… until you realize that that means that language use is utterly unique among human behaviours. Oh sure, we learn other things without being directly taught them, even relatively complex behaviours like swallowing and balancing. But unlike speaking, these aren’t usually under conscious control, and when they are it’s usually because something’s gone wrong. Plus, as I’ve discussed before, we have the ability to be infinitely creative with language. You can learn to make a soufflé without knowing what happens when you combine the ingredients in every possible combination, but knowing a language means that you know rules that allow you to produce all possible utterances in that language.

So how does it work? Obviously, we don’t have all the answers yet, and there’s a lot of research going on on how children actually learn language. But we do know what it generally tends to look like, setting aside things like language impairment or isolation.

  1. Vocal play. The kid’s figured out that they have a mouth capable of making noise (or hands capable of making shapes and movements) and they’re practising with it. Back in the day, people used to say that infants would make all the sounds of all the world’s languages during this stage. Subsequent research, however, suggests that even this early children are beginning to reflect the speech patterns of the people around them.
  2. Babbling. Kids will start out with very small segments of language, then repeat the same little chunk over and over again (canonical babbling), and then they’ll start to combine them in new ways (variegated babbling). In hearing babies, this tends to be syllables, hence the stereotypical “mamamama”. In Deaf babies it tends to be repeated hand motions.
  3. One word stage. By about 13 months, most children will have begun to produce isolated words. The intended content is often more than just the word itself, however. A child shouting “Dog!” at this point could mean “Give me my stuffed dog” or “I want to go see the neighbour’s terrier” or “I want a lion-shaped animal cracker” (since at this point kids are still figuring out just how many four-legged animals actually are dogs). These types of sentences-in-a-word are known as holophrases.
  4. Two word stage. By two years, most kids will have moved on to two-word phrases, combining words in a way that shows that they’re already starting to get the hang of their language’s syntax. Morphology is still pretty shaky, however: you’re not going to see a lot of tense markers or verbal agreement.
  5. Sentences. At this point, usually around age four, people outside the family can generally understand the child. They’re producing complex sentences and have gotten down most, if not all, of the sounds in their language.

These general stages of acquisition are very robust. Regardless of the language, modality or even age of acquisition, we still see these general stages. (Although older learners may never completely acquire a language due to, among other things, reduced neuroplasticity.) And the fact that they do seem to be universal is yet more evidence that language acquisition is a unique process that deserves its own field of study.

Are television and mass media destroying regional accents?

One of the occupational hazards of linguistics is that you are often presented with spurious claims about language that are relatively easy to quantifiably disprove. I think this is probably partly due to the fact that there are multiple definitions of ‘linguist’. As a result, people tend to equate mastery of a language with explicit knowledge of its workings. Which, on the one hand, is reasonable. If you know French, the idea goes, you know how to speak French, but also how it works. And, in general, that isn’t the case. Partly because most language instruction is light on discussions of grammatical structures–reasonably so; I personally find inductive grammar instruction significantly more helpful, though the research is mixed–and partly because, frankly, there’s a lot that even linguists don’t know about how grammar works. Language is incredibly complex, and we’ve only begun to explore and map out that complexity. But there are a few things we are reasonably certain we know. And one of those is that your media consumption does not “erase” your regional dialect [pdf]. The premise is flawed enough that it begins to collapse under its own weight almost immediately. Even the most dedicated American fans of Doctor Who or Downton Abbey or Sherlock don’t slowly develop British accents.

Christopher Eccleston Thor 2 cropped
Lots of planets have a North with a distinct accent that is not being destroyed by mass media.
So why is this myth so persistent? I think that the most likely answer is that it is easy to mischaracterize what we see on television and to misinterpret what it means. Standard American English (SAE), what newscasters tend to use, is a dialect. It’s not just a certain set of vowels but an entire, internally consistent grammatical system. (Failing to recognize that dialects are more than just adding a couple of really noticeable sounds or grammatical structures is why some actors fail so badly at trying to portray a dialect they don’t use regularly.) And not only is it a dialect, it’s a very prestigious dialect. It’s not just newscasters who use it, but also political figures, celebrities, and pretty much anyone who has a lot of social status. From a linguistic perspective, SAE is no better or worse than any other dialect. From a social perspective, however, SAE has more social capital than most other dialects. That means that being able to speak it, and speak it well, can give you opportunities that you might not otherwise have had access to. For example, speakers of Southern American English are often characterized as less intelligent and educated. And those speakers are very aware of that fact, as illustrated in this excerpt from the truly excellent PBS series Do You Speak American:

ROBERT:

Do you think northern people think southerners are stupid because of the way they talk?

JEFF FOXWORTHY:

Yes I think so and I think Southerners really don’t care that Northern people think that eh. You know I mean some of the, the most intelligent people I’ve ever known talk like I do. In fact I used to do a joke about that, about you know the Southern accent, I said nobody wants to hear their brain surgeon say, ‘Al’ight now what we’re gonna do is, saw the top of your head off, root around in there with a stick and see if we can’t find that dad burn clot.’

So we have pressure from both sides: there are intrinsic social rewards for speaking SAE, and also social consequences for speaking other dialects. There are also plenty of linguistic role-models available through the media, from many different backgrounds, all using SAE. If you consider these facts alone it seems pretty easy to draw the conclusion that regional dialects in America are slowly being replaced by a prestigious, homogeneous dialect.

Except that’s not what’s happening at all. Some regional dialects of American English are actually becoming more, rather than less, prominent. On the surface, this seems completely contradictory. So what’s driving this process, since it seems to be contradicting general societal pressure? The answer is that there are two sorts of pressure. One, the pressure from media, is to adopt the formal, standard style. The other, the pressure from family, friends and peers, is to retain and use features that mark you as part of your social network. Giles, Taylor and Bourhis showed that identification with a certain social group–in their case Welsh identity–encourages and exaggerates Welsh features. And being exposed to a standard dialect that is presented as being in opposition to a local dialect will actually increase that effect. Social identity is constructed through opposition to other social groups. To draw an example from American politics, many Democrats define themselves as “not Republicans” and as in opposition to various facets of “Republican-ness”. And vice versa.

Now, the really interesting thing is this: television can have an effect on speakers’ dialectal features. But that effect tends to be away from, rather than towards, the standard. For example, some Glaswegian English speakers have begun to adopt features of Cockney English based on their personal affiliation with the show EastEnders. In light of what I discussed above, this makes sense. Those speakers who had adopted the features are of a similar social and socio-economic status as the characters in EastEnders. Furthermore, their social networks value the characters who are shown using those features, even though they are not standard. (British English places a much higher value on certain sounds and sound systems as standard. In America, even speakers with very different sound systems, e.g. Bill Clinton and George W. Bush, can still be considered standard.) Again, we see retention and re-invigoration of features that are not standard through a construction of opposition. In other words, people choose how they want to sound based on who they want to be seen as. And while, for some people, this means moving towards using more SAE, for others it means moving away from the standard.

One final note: another factor which I think contributes to the idea that television is destroying accents is the odd idea that we all only have one dialect, and that it’s possible to “lose” it. This is patently untrue. Many people (myself included) have command of more than one dialect and can switch between them when it’s socially appropriate, or blend features from them for a particular rhetorical effect. And that includes people who generally use SAE. Oprah, for example, will often incorporate more features of African American English when speaking to an African American guest. The bottom line is that television and mass media can be a force for linguistic change, but they’re hardly the great homogenizer they are often claimed to be.

For other things I’ve written about accents and dialects, I’d recommend:

  1. Why do people have accents?
  2. Ask vs. Aks
  3. Coke vs. Soda vs. Pop

Why do I really, really love West African languages?

So I found a wonderful free app that lets you learn Yoruba, or at least Yoruba words, and posted about it on Google+. Someone asked a very good question: why am I interested in Yoruba? Well, I’m not interested just in Yoruba. In fact, I would love to learn pretty much any western African language or, to be a little more precise, any Niger-Congo language.

Niger-Congo-en
This map’s color choices make it look like a chocolate-covered ice cream cone.
Why? Well, not to put too fine a point on it, I’ve got a huge language crush on them. Whoa there, you might be thinking, you’re a linguist. You’re not supposed to make value judgments on languages. Isn’t there like a linguist code of ethics or something? Well, not really, but you are right. Linguists don’t usually make value judgments on languages. That doesn’t mean we can’t play favorites! And West African languages are my favorites. Why? Because they’re really phonologically and phonetically interesting. I find the sounds and sound systems of these languages rich and full of fascinating effects and processes. Since that’s what I study within linguistics, it makes sense that that’s a quality I really admire in a language.

What are a few examples of Niger-Congo sound systems that are just mind blowing? I’m glad you asked.

  • Yoruba: Yoruba has twelve vowels. Seven of them are pretty common (we have all but one in American English) but if you say four of them nasally, they’re different vowels. And if you say a nasal vowel when you’re not supposed to, it’ll change the entire meaning of a word. Plus? They don’t have a ‘p’ or an ‘n’ sound. That is crazy sauce! Those are some of the most widely-used sounds in human language. And Yoruba has a complex tone system as well. You probably have some idea of the level of complexity that can add to a sound system if you’ve ever studied Mandarin, or another East Asian language. Seriously, their sound system makes English look childishly simplistic.
  • Akan: There are several different dialects of Akan, so I’ll just stick to talking about Asante, which is the one used in universities and for official business. It’s got a crazy consonant system. Remember how Yoruba didn’t have an “n” sound? Yeah, in Akan they have nine. To an English speaker they all pretty much sound the same, but if you grew up speaking Akan you’d be able to tell the difference easily. Plus, most sounds other than “p”, “b”, “f” or “m” can be made while rounding the lips (linguists call this “labialization”, and the labialized versions count as completely different sounds). They’ve also got a vowel harmony system, which means you can’t have vowels later in a word that are completely different from vowels earlier in the word. Oh, yeah, and tones and a vowel nasalization distinction and some really cool tone terracing. I know, right? It’s like being a kid in a candy store.

But how did these languages get so cool? Well, there’s some evidence that these languages have really robust and complex sound systems because the people speaking them never underwent large-scale migration to another continent. (Obviously, I can’t ignore the effects of colonialism or the slave trade, but the effect still seems pretty robust.) Which is not to say that, say, Native American languages don’t have awesome sound systems; they just tend to be slightly smaller on average.

Now that you know how kick-ass these languages are, I’m sure you’re chomping at the bit to hear some of them. Your wish is my command; here’s a song in Twi (a dialect of Akan) from one of my all-time-favorite musicians: Sarkodie. (He’s making fun of Ghanaian emigrants who forget their roots. Does it get any better than biting social commentary set to a sick beat?)

What’s the best way to teach grammar?

The night before last I had the good fortune to see Geoff Pullum, noted linguist and linguistics blogger, give a talk entitled: The scandal of English grammar teaching: Ignorance of grammar, damage to writing skills, and what we can do about it. It was an engaging talk and clearly showed that many of the “grammar rules” taught in English language and composition courses have little to no bearing on how the English language is actually used. Some of the bogeyman rules (his term) that he lambasted included the interdiction against ending a sentence in a preposition, the notion that “since” can only refer to the passage of time and not causality, and the claim that “which” can never begin a restrictive clause. Counterexamples for all of these “grammar rules” are easy to find, both in written and spoken language. (If you’re interested in learning more, check out Geoff Pullum on Language Log.)

“And then the python ate little Johnny because he had the gall to cheekily split his infinitives.”
So there’s a clear problem here. Rules that have no bearing on linguistic reality are being used as the backbone of grammar instruction, just as they have been for over two hundred years. Meanwhile, the investigation of human language has advanced considerably. We know much more about the structure of language now than we did when E. B. White was writing his style guide. It’s linguistic inquiry that has led to better speech therapy, speech recognition and synthesis programs and better foreign language teaching. Grammar, on the other hand, has led to little more than frustration and an unsettling elitism. (We all know at least one person who uses their “knowledge” of “correct” usage as a weapon.) So what can be done about it? Well, I propose that instead of traditional “grammar”, we teach “grammar” as linguists understand it. What’s the difference?

Traditional grammar: A variety of usage and style rules that are based on social norms and a series of historical accidents.

Linguistic grammar: The set of rules which can accurately describe a native speaker’s knowledge of their language.

I’m not the first person to suggest a linguistics education as a valuable addition to the pre-higher educational experience. You can read proposals and arguments from others here, here, and here, and an argument for more linguistics in higher education here.

So, why would you want to teach linguistic grammar? After all, by the time you’re five or six, you already have a pretty good grasp of your language. (Not a perfect one, as it turns out; things like the role of stress in determining the relationship between words in a phrase tend to come in pretty late in life.) Well, there are lots of reasons.

  • Linguistic grammar is the result of scientific inquiry and is empirically verifiable. This means that lessons on linguistic grammar can take the form of experiments and labs rather than memorizing random rules.
  • Linguistic grammar is systematic. This can appeal to students who are gifted at math and science but find studying language more difficult.
  • Linguistic grammar is a good way to gently introduce higher level mathematics. Semantics, for example, is a good way to introduce set theory or lambda calculus.
  • Linguistic grammar is immediately applicable for students. While it’s difficult to find applications for oceanography for students who live in Kansas, everyone uses language every day, giving students a multitude of opportunities to apply and observe what they’ve learned.
  • Linguistic grammar shows that variation between different languages and dialects is systematic, logical and natural. This can help reduce the linguistic prejudice that speakers of certain languages or dialects face.
  • Linguistic grammar helps students in learning foreign languages. For example, increasing students’ phonetic awareness (that’s their awareness of language sounds) and teaching them how to accurately describe and produce sounds helps them avoid the frustration of not knowing what sound they’re attempting to produce and how it relates to sounds they already know.
  • Knowledge of linguistic grammar, unlike traditional grammar, is relatively simple to evaluate. Since much of introductory linguistics consists of looking at data sets and constructing rules that would generate that data set, and these rules are either correct or not, it is easier to determine whether or not the student has mastered the concepts.
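
To make the “experiments and labs” point above concrete, here’s a toy version of a classic intro-linguistics exercise: given a data set, state a rule that generates it, then check the rule against the data. The rule here is English plural allomorphy (why “cats” ends in an “s” sound, “dogs” in a “z” sound, and “buses” in “iz”); the sound classes and helper names are mine, just for illustration.

```python
# A toy intro-linguistics exercise: propose a rule, test it against data.
# Rule: after a sibilant (s, z, sh, ch, j) the plural suffix is "iz";
# after another voiceless sound it's "s"; otherwise it's "z".

SIBILANTS = {"s", "z", "sh", "ch", "j", "zh"}
VOICELESS = {"p", "t", "k", "f", "th"}

def plural_suffix(final_sound: str) -> str:
    """Predict the plural suffix from a word's final sound."""
    if final_sound in SIBILANTS:
        return "iz"
    if final_sound in VOICELESS:
        return "s"
    return "z"

# The "data set": (word, final sound, observed plural suffix)
data = [
    ("cat",   "t",  "s"),
    ("dog",   "g",  "z"),
    ("bus",   "s",  "iz"),
    ("dish",  "sh", "iz"),
    ("cliff", "f",  "s"),
    ("bee",   "ee", "z"),
]

# The rule is "correct" if it generates every observed form.
for word, final, observed in data:
    assert plural_suffix(final) == observed, word
print("rule generates the data set")  # prints "rule generates the data set"
```

Unlike memorizing a usage edict, an exercise like this is either right or wrong against the data, which is exactly what makes it gradeable and lab-friendly.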

I could go on, but I think I’ll leave it here for now. The main point is this: teaching linguistics is a viable and valuable way to replace traditional grammar education. What needs to happen for linguistic grammar to supplant traditional grammar? That’s a little thornier. At the very least, teachers need to receive linguistic training and course materials appropriate for various ages need to be developed. A bigger problem, though, is a general lack of public knowledge about linguistics. That’s part of why I write this blog: to let you know about what’s going on in a small but very productive field. Linguistics has a lot to offer, and I hope that in the future more and more people will take us up on it.


How can you realistically imitate a French accent?

So, my main area of interest within linguistics is the study of the individual sound systems of different languages and the rules governing them. It may sound pretty dry, but it can lead to some pretty impressive party tricks. For example, by knowing about the sound systems of different languages you can emulate them. In other words, you can have a pretty convincing fake accent. In fact, accent coaches, who work with actors to create accents and with others to reduce them, tend to have linguistic backgrounds with a focus on studying the sounds of language. So I thought with this post I’d go over how to imitate a French accent by looking at the individual sounds that differ between French and English.

Just to be clear: I’m using English as a target language here because English is my native language and everyone who’s asked me about it has spoken English natively. I’m in no way implying that English is the “best” language, or that English speakers don’t have accents. (You should hear how I butcher Mandarin. It’s pretty atrocious.) If you have any other languages you’d like me to write posts for, let me know in the comments. 🙂

Marcel Marceau (square)
Marcel Marceau can’t help you on this one, sorry. Mostly because you’ll have a hard time finding examples of authentic French in his performances for some reason… 
I’m going to assume that you want to sound like you’re from Paris and not Quebec (not that Quebec isn’t great! Man, now I’m jonesing for some President’s Choice snacks). There are a couple of sounds you’re going to have to learn:

  1. Instead of the English “r”, as in “rat”, you’re going to have to use what’s called the “guttural r”. (Okay, it’s actually called the voiced uvular fricative, but that’s a little bit harder to say.) Basically, when you say the sound, you want to vibrate your uvula, that little punching-bag-looking thing at the back of your throat. Try doing it in front of a well-lit mirror with your mouth open until you can figure out what it feels like.
  2. Instead of the English “ng”, as in “cling”, you can use a “ny”, as in “nyan cat”. No, seriously. This will be a little difficult, since we only really use that sound at the end of words, but practice a bit and you should be able to pick it up. Or you can just go with a regular “n” sound.

Now the good news! There are also a couple of sounds we have in English that don’t exist in French, and they’re the ones that are slightly harder to say, so you can save yourself some time and trouble by switching them out.

  1. The “th” sound, like at the beginning of “thin” or “the”, is actually really rare in the world’s languages. French speakers tend to replace the one in “thin” with “s” and the one in “the” with “z”.
  2. The sounds at the beginning of “church” and “judge” are also not a thing in French. You can use the sound at the beginning of “sheep” for the sound at the beginning of “church” and the “s” in “vision” for the “j” in “judge”.
So that’s the consonants.
The vowels are significantly different from those in English. You’ve got all sorts of things like nasalization and rounding in places where you, as an English speaker, are just not expecting them. And, frankly, unless you’ve got a really good ear, you’re going to have a hard time picking up on the differences. Long story short: I’m weaseling out of explaining the vowels entirely and pointing you to a YouTube video instead. (I’m also doing it so you can get some native speaker data, which I think you’ll find helpful.)

That does give me space to discuss intonation, however. Intonation is probably the single biggest difference in the way English and French sound. In fact, intonation is one of the very first things that babies pick up, before they even start experimenting with individual sounds. Unfortunately, it’s also one of the most difficult things to learn. Here are a few pointers, though:

  • French intonation isn’t as concerned with individual syllables. Rather, you tend to get whole phrases (rather than individual words) in the same intonation pattern. This is what gives French its sort of smooth, musical quality.
  • Instead of a slow rise and slow fall, like we get in English, pitch in French tends to rise slowly until the very final syllable of a sentence, where it drops suddenly. It looks more like the graph of an absolute value function than a polynomial, in other words.

There’s a ton more to be said about French phonology, and a lot of it has already been said, but this should be enough to get you started on approximating a French accent. Good luck!

Can you really learn a language in ten days?

I’m not the only linguist in my family. My father has worked as a professional linguist his whole life… but with a slightly different definition of “linguist”. His job is to use his specialist knowledge of a language (specifically Mandarin Chinese, Mongolian or one of the handful of other languages he speaks relatively well) to solve a problem. And one problem that he’s worked on a lot is language learning.

There’s no doubt that knowing more than one language is very, very useful. It opens up job opportunities, makes it easier to travel and can even improve brain function. But unless you were lucky enough to be raised bilingual you’re going to have to do it the hard way. And, if you live in America, like I do, you’re not very likely to do that: only about 26% of the American population speaks another language well enough to hold a basic conversation in it, and only 9% are fluent in another language. Compare that to Europe, where around 50% of the population is bilingual.

Japanese language class in Zhenjiang02
“Now that you’ve learned these characters, you only need to learn and retain one a day for the next five years to be considered literate.”
Which makes the lure of easily learning a language on your own all the more compelling. I recently saw an ad that I found particularly enticing: learn a language in just ten days. Why, that’s less time than it takes to hand-knit a pair of socks. The product in this case was the oh-so-famous (at least in linguistic circles) Pimsleur Method (or approach, or any of a number of other flavors of delivery). I’ve heard some very good things about the program, and thought I’d dig a little deeper into the method itself and evaluate its claims from a scientific linguistics perspective.

I should mention that Dr. Pimsleur was an academic working in second language acquisition from an applied linguistics standpoint. That is, his work (published mainly in the 1960s) tended to look at how older people learn a second language in an educational setting. I’m not saying this makes him unimpeachable–if a scientific argument can’t stand up to scrutiny it shouldn’t stand at all–but it does tend to lend a certain patina of credibility to his work. Is it justified? Let’s find out.

First things first: it is not possible to become fluent in a language in just ten days. There are lots of reasons why this is true. The most obvious is that being a fluent speaker is more than just knowing the grammar and vocabulary; you have to understand the cultural background of the language you’re studying. Even if your accent is flawless (unlikely, but I’ll deal with that later), if you unwittingly talk to your mother-in-law and become a social pariah, that flawless accent isn’t going to do you much good. Then there are just lots of little linguistic things that are so very easy to get wrong. Idioms, for example, particularly choosing which preposition to use. Do you get “in the bus” or “on the bus”? And then there are even more subtle things like producing a list of adjectives in the right order. “Big red apple” sounds fine, but “red big apple”? Not so much. A fluent speaker knows all this, and it’s just too much information to acquire in ten days.

That said, if you were plopped down in a new country without any prior knowledge of the language, I’d bet within ten days you’d be carrying on at least basic conversations. And that’s pretty much what the Pimsleur method is promising. I’m not really concerned with whether it works or not… I’m more concerned with how it works (or doesn’t). There are four basic principles that the Pimsleur technique is based on.

  1. Anticipation. Basically, this boils down to posing questions that the learner is expected to answer. These can be recall tasks, asking you to remember something you heard before, or tasks where the learner needs to extrapolate based on the knowledge they currently have of the language.
  2. Graduated-interval recall. Instead of repeating a word or word list three or four times right after each other, they’re repeated at specific, increasing intervals. This is based on the phonological loop part of a model of working memory that was really popular when Pimsleur was doing his academic work.
  3. Core Vocabulary. The learner is only exposed to basic vocabulary, so the total number of words learned is smaller. They’re chosen (as far as I can tell, it seems to vary by course) based on frequency.
  4. “Organic learning”. Basically, you learn by listening and there’s a paucity of reading and writing. (Sorry about that; paucity was my word of the day today 😛 ).
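
The graduated-interval idea in point 2 is easy to sketch in code: each review of an item is scheduled at an exponentially growing delay after the last one. The 5-second starting delay and roughly 5× multiplier are the figures commonly cited for Pimsleur’s schedule, so treat the specific numbers here as illustrative rather than canonical.

```python
# A sketch of graduated-interval recall: each successive review of an
# item happens after an exponentially longer delay than the last.
# The 5-second base and 5x factor are commonly cited for Pimsleur's
# schedule; they're illustrative, not gospel.

def review_schedule(n_reviews: int, base: float = 5.0, factor: float = 5.0):
    """Return the delay (in seconds) before each successive review."""
    return [base * factor**i for i in range(n_reviews)]

delays = review_schedule(6)
# Roughly: 5 s, 25 s, ~2 min, ~10 min, ~1 hr, ~4 hr
print([round(d) for d in delays])  # prints [5, 25, 125, 625, 3125, 15625]
```

The point of the exponential spacing is that each successful recall is taken as evidence the item is settling into long-term memory, so the next prompt can safely wait longer.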

So let’s evaluate these claims.

  1. Anticipation. So the main benefit of knowing that you’ll be tested on something is that you actually pay attention. In fact, if you ask someone to listen to pure tones, their brain consumes more oxygen (which you can tell because circulation to that area increases) if you tell them they’ll be tested. Does this help with language learning? Well. Maybe. I don’t really have as much of a background in psycholinguistics, but I do know that language learning tends to entail the creation of new neural networks and connections, which requires oxygen. On the other hand, a classroom experience uses the same technique. Assessment: Reasonable, but occurs in pretty much every language-learning method. 
  2. Graduated-interval recall: So this is based on the model I mentioned above. You’ve got short term and long term memory, and the Pimsleur technique is designed to pretty much seed your short term memory, then wait for a bit, then grab at the thing you heard and pull it to the forefront again, ideally transferring it to long-term memory. Which is peachy-keen… if the model’s right. And there’s been quite a bit of change and development in our understanding of how memory works since the 1970’s. Within linguistics, there’s been the rise of Exemplar Theory, which posits that it’s the number of times you hear things, and the similarity of the sound tokens, that make them easier to remember. (Kinda. It’s complicated.) So… it could be helpful, assuming the theory’s right. Assessment: Theoretical underpinnings outdated, but still potentially helpful. 
  3. Core Vocabulary. So this one is pretty much just cheating. Yes, it’s true, you only need about 2000 words to get around most days, and, yes, those are probably the words you should be learning first in a language course. But at some point, to achieve full fluency, you’ll have to learn more words, and that just takes time. Nothing you can do about it. Assessment: Legitimate, but cheating. 
  4. “Organic learning”: So this is in quotation marks mainly because it sounds like it’s opposed to “inorganic learning”, and no one learns language from rocks. Basically, there are two claims here. One is that auditory learning is preferable, and the other is that it’s preferable because it’s how children learn. I have fundamental problems with claims that adults and children can learn using the same processes. That said, if your main goal is to learn how to speak and understand a given language, learning writing will absolutely slow you down. I can tell you from experience: once you learn the tones, speaking Mandarin is pretty straightforward. Writing Mandarin remains one of the most frustrating things I’ve ever attempted to do. Assessment: Reasonable, but claims that you can learn “like a baby” should be examined closely. 
  5. Bonus: I do agree that using native speakers of the target language as models is preferable. They can make all the sounds correctly, something that even trained linguists can sometimes have problems with–and if you never hear the sounds produced correctly, you’ll never be able to produce them correctly.

So, it does look pretty legitimate. My biggest concern is actually not with the technique itself, but with the delivery method. Language is inherently about communicating, and speaking to yourself in isolation is a great way to get stuck with some very bad habits. Being able to interact with a native speaker, getting guidance and correction, is something that I’d feel very uncomfortable recommending you do without.