Which are better, earphones or headphones?

As a phonetician, it’s part of my job to listen to sounds very closely. Plus, I like to listen to music while I work, enjoy listening to radio dramas and use a headset to chat with my guildies while I’m gaming. As a result, I spend a lot of time with things on/in my ears. And, because of my background, I’m also fairly well informed about the acoustic properties of earphones and headphones and how they interact with anatomy. All of which helps me answer the question: which is better? Or, more accurately, what are some of the pros and cons of each? There are a number of factors to consider, including frequency response, noise isolation, noise cancellation and comfort/fit. Before I get into specifics, however, I want to make sure we’re on the same page when we talk about “headphones” and “earphones”.

Earphones: For the purposes of this article, I’m going to use the term “earphone” to refer to devices that are meant to be worn inside the pinna (that’s the fancy term for the part of the ear you can actually see). These are also referred to as “earbuds”, “buds”, “in-ears”, “canalphones”, “in-ear monitors”, “IEMs” and “in-ear headphones”. You can see an example of what I’m calling “earphones” below.

[Image: the white Apple earbuds (with remote and mic) that shipped with the iPod Touch 2G]
Ooo, so white and shiny and painful.

Headphones: I’m using this term to refer to devices that are not meant to rest in the pinna, whether they go around or on top of the ear. These are also called “earphones”, (apparently) “earspeakers” or, my favorites, “cans”. You can see somewhat antiquated examples of what I’m calling “headphones” below.

[Image: a club holding a “radio dance” in 1920, everyone wearing early headphones]
I mean, sure, it’s a wonder of modern technology and all, but the fidelity is just so low.

Alright, now that we’ve cleared that up, let’s get down to brass tacks. (Or, you might say… bass tacks.)

  1. Frequency response curve: How much distortion do they introduce? In an ideal world, ‘phones should respond equally well to all frequencies (or pitches), without transmitting one frequency range more loudly than another. This desirable feature is commonly referred to as a “flat” frequency response. That means that the signal you’re getting out is pretty much the same one that was fed in, at all frequency ranges.
    1. Earphones: In general, earphones tend to have a worse frequency response.
    2. Headphones: In general, headphones tend to have better frequency response.
    3. Winner: Headphones are probably the better choice if you’re really worried about distortion. You should read the specifications of the device you’re interested in, however, since there’s a large amount of variability.
  2. Frequency response: What is their pitch range? This term is sometimes used to refer to the frequency response curve I talked about above and sometimes used to refer to pitch range. I know, I know, it’s confusing. Pitch range is usually expressed as the lowest sound the ‘phones can transmit followed by the highest. Most devices on the market today can pretty much play anything between 20 and 20k Hz. (You can see what that sounds like here. Notice how it sounds loudest around 300Hz? That’s an artifact of your hearing, not the video. Humans are really good at hearing sounds around 300Hz which [not coincidentally] is about where the human voice hangs out.)
    1. Earphones: Earphones tend to have a smaller pitch range than headphones. Of course, there are always exceptions.
    2. Headphones: Headphones tend to have a better frequency range than earphones.
    3. Winner: In general, headphones have a better frequency range. That said, it’s not really that big of a deal. You can’t really hear very high or very low sounds that well because of the way your hearing system works, regardless of how well your ‘phones are delivering the signal. Anything that plays sounds between 20 Hz and 20,000 Hz should do you just fine.
  3. Noise isolation: How well do they isolate you from sounds other than the ones you’re trying to listen to? More noise isolation is generally better, unless there’s some reason you need to be able to hear environmental sounds as well as whatever you’re listening to. Better isolation also means you’re less likely to bother other people with your music.
    1. Earphones:  A properly fitted pair of in-ear earphones will give you the best noise isolation. It makes sense; if you’re wearing them properly they should actually form a complete seal with your ear canal. No sound in, no sound out, excellent isolation.
    2. Headphones: Even really good over-ear headphones won’t form a complete seal around your ear. (Well, ok, maybe if you’re completely bald and you make some creative use of adhesives, but you know what I mean.) As a result, you’re going to get some noise leakage.
    3. Winner: You’ll get the best noise isolation from well-fitting earphones that sit in the ear canal.
  4. Noise cancellation: How well can they correct for atmospheric sounds? So noise cancellation is actually completely different from noise isolation. Noise isolation is something that all ‘phones have. Noise-cancelling ‘phones, on the other hand, actually do some additional signal processing before you get the sound. They “listen” for atmospheric sounds, like an air-conditioner or a car engine. Then they take that waveform, reproduce it and invert it. When they play the inverted waveform along with your music, it cancels out the background sound. (There’s a quick sketch of this idea in code just after this list.) Which is awesome and space-agey, but isn’t perfect. They only really work with steady background noises. If someone drops a book, they won’t be able to cancel that sudden, sharp noise. They also tend not to work as well with really high-pitched noises.
    1. Earphones: Noise-cancelling earphones tend not to be as effective as noise-cancelling headphones until you get to the high end of the market (think $200 plus).
    2. Headphones: Headphones tend to be slightly better at noise-cancellation than earphones of a similar quality, in my experience. This is partly due to the fact that there’s just more room for electronics in headphones.
    3. Winner: Headphones usually have a slight edge here. Of course, really expensive noise-cancelling devices, whether headphones or earphones, usually perform better than their bargain cousins.
  5. Comfort/fit: Is they comfy?
    1. Earphones: So this is where earphones tend to suffer. There is quite a bit of variation in the shape of the cavum conchæ, which is the little bowl shape just outside your ear canal. Earphone manufacturers have to have somewhere to put their magnets and drivers and driver support equipment and it usually ends up in the “head” of the earphone, nestled right in your cavum conchae. Which is awesome if it’s a shape that fits your ear. If it’s not, though, it can quickly start to become irritating and eventually downright painful. Personally, this is the main reason I prefer over-ear headphones.
    2. Headphones: A nicely fitted pair of over-ear headphones that covers your whole ear is just incredibly comfortable. Plus, they keep your ears warm! I find on-ear headphones less comfortable in general, but a nice cushy pair can still feel awesome. There are other factors to take into account, though; wearing headphones and glasses with a thick frame can get really uncomfortable really fast.
    3. Winner: While this is clearly a matter of personal preference, I have a strong preference for headphones on this count.
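Since point 4 leans on the “invert the waveform” idea, here’s a minimal sketch of how that works, written in Python with NumPy. It’s a toy simulation with made-up numbers, not a model of any actual headphone’s signal processing: a steady hum cancels almost completely, while a sudden click sails right through.

```python
import numpy as np

# Toy illustration of active noise cancellation: a steady hum is
# cancelled by adding an inverted copy of it, but a sudden click is not.
# All numbers are invented; real devices generate the anti-noise from
# microphone input with a DSP, which adds lag and error.
sr = 16_000                                   # sample rate in Hz
t = np.arange(sr) / sr                        # one second of samples

music = 0.5 * np.sin(2 * np.pi * 440 * t)     # the signal you want to hear
hum = 0.3 * np.sin(2 * np.pi * 120 * t)       # steady, engine-like background noise
click = np.zeros_like(t)
click[8000:8020] = 0.8                        # someone drops a book

at_the_ear = music + hum + click
anti_noise = -hum                             # the "listened-to" hum, inverted
with_cancellation = at_the_ear + anti_noise

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"steady hum remaining: {rms(with_cancellation - music - click):.6f}")
print(f"click still present:  {rms(click):.3f}")
```

In a real pair of noise-cancelling ‘phones the anti-noise is generated on the fly from a microphone signal, so it always lags slightly behind the noise; that lag is part of why sudden, sharp sounds and high frequencies slip through.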

So, for me at least, headphones are the clear winner overall. I find them more comfortable, and they tend to reproduce sound better than earphones. There are instances where I find earphones preferable, though. They’re great for travelling or if I really need an isolated signal. When I’m just sitting at my desk working, though, I reach for headphones 99% of the time.

One final caveat: the sound quality you get out of your ‘phones depends most on what files you’re playing. The best headphones in the world can’t do anything about quantization noise (that’s the noise introduced when you convert analog sound-waves to digital ones) or a background hum in the recording.
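For the curious, here’s a minimal sketch of what quantization noise actually is, again in Python with NumPy and with toy parameters I picked for illustration: round a clean sine wave to 8-bit resolution and measure how much noise the rounding adds. That noise is baked into the file, so no pair of ‘phones can take it back out.

```python
import numpy as np

# Quantize a clean 1 kHz tone to 8 bits and measure the resulting
# signal-to-noise ratio. Parameters are arbitrary; the point is that
# rounding a smooth wave onto a grid of levels adds noise.
sr = 48_000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 1000 * t)

bits = 8
scale = 2 ** (bits - 1) - 1                  # 127 steps on each side of zero
quantized = np.round(signal * scale) / scale

error = quantized - signal                   # this is the quantization noise
snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(error ** 2))
print(f"{bits}-bit SNR: {snr_db:.1f} dB")    # roughly 6 dB per bit of resolution
```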

Is dyslexia genetic?

I’m a graduate student in linguistics. I have a degree in English literature. I love reading, writing and books with the fiery passion of a thousand suns. And I am dyslexic. While I was very successful academically in secondary and higher education, let’s just say that primary school was… rough. I’ve failed enough spelling tests that I could wallpaper a small room with them. At this point I’m a fluent reader, mainly because no one’s asking me to read things without context. (For an interesting experimental look at the effects of world-knowledge and context on reading, I’d recommend Paul Kolers’ 1970 article, “Three stages of reading”.) These days, language processing problems tend to be flashes in the pan rather than a constant barrier I’m pushing against. I tend to confuse “cloaca” and “cochlea”, for example, and I feel like I use “etymology” and “entomology” correctly at chance. But still, it would be nice to know that my years of suffering in primary school were due to genetic causes and not because I was “dumber” or “lazier” than other kids. And recent research does seem to support that: it looks like dyslexia probably is genetic.

[Image: “Dyslexic words”]
They all look good to me.

First off, a couple caveats. Dyslexia is an educational diagnosis. There’s a pretty extensive battery of tests, any of which may be used to diagnose dyslexia. The International Dyslexia Association defines dyslexia thusly:

It is characterized by difficulties with accurate and / or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge.

Which sounds pretty standard, right? But! There are a number of underlying causes that might lead to this. One obvious one is an undiagnosed hearing problem. Someone who only has access to part of the speech signal will probably display all of these symptoms. Or someone in an English-only environment who speaks, say, Kola, as their home language. It’s hard to learn that ‘p’ means /p/ if your language doesn’t have that sound. Of course, educators know that these things affect reading ability. But there are also a number of underlying mental processes that might lead to a diagnosis of dyslexia, which may or may not be related to each other but are all almost certainly genetic. Let’s look at a couple of them.

  • Phonological processing. I’ve talked a little bit about phonology before. Basically, someone with phonological disorder has a hard time with language sounds specifically. For example, they may have difficulty with rhyming tasks, or figuring out how many sounds are in a word. And this does seem to have a neurological component. One study shows that, when children with dyslexia were asked to come up with letters that rhymed, they did not show activity in the temporoparietal junction, unlike their non-dyslexic peers. Among other things, the temporoparietal junction plays a role in interpreting sequences of events.
  • Auditory processing. Auditory processing difficulties aren’t necessarily linguistic in nature. Someone who has difficulty processing sounds may be tone deaf, for example, unable to tell whether two notes are the same or different. For dyslexics, this tends to surface as difficulty with sounds that occur very quickly. And there are pretty much no sounds that humans need to process more quickly than speech sounds. A flap, for example, lasts around 20 milliseconds. To put that in perspective, a blink takes roughly 15 times longer than that. And it looks like there’s a genetic cause for these auditory processing problems: dyslexic brains have a localized asymmetry in their neurons. They also have more, and smaller, neurons.
  • Sequential processing. For me, this is probably the most interesting. Sequential processing isn’t limited to language. It has to do with doing or perceiving things in the correct order. So, for example, if I gave you all the steps for baking a cake in the wrong order, you’d need to use sequential processing to put them in the correct order. And there’s been some really interesting work done, mainly by Beate Peter at the University of Washington (represent!), that suggests that there is a single genetic cause responsible for a number of rapid sequential processing tasks, and one of the effects of an abnormality in this gene is dyslexia. But people with this mutation also tend to be bad at, for example, touching each of their fingers to their thumb in order.
  • Being a dude. Ok, this one is a little shakier, but depending on who you listen to, dyslexia is either equally common in men and women, 4 to 5 times more common in men or 2 to 3 times more common in men. This may be due to structural differences, since it seems that male dyslexics have less gray matter in language processing centers, whereas females have less gray matter in sensing and motor processing areas. Or the difference could be due to the fact that estrogen does very good things for your brain, especially after traumatic injury. I include it here because sex is genetic (duh) and seems to (maybe, kinda, sorta) have an effect on dyslexia.

Long story short, there’s been quite a bit of work done on the genetics of dyslexia and the evidence points to a common cause that is probably genetic. Which in some ways is really exciting! That means that we can better predict who will have learning difficulties and work to provide them with additional tutoring and help. And it also means that some reading difficulties are due to anatomy and genes. If you’re dyslexic, it’s because you’re wired that way, and not because your parents did or didn’t do something (well, other than contribute your genetic material, obvi) or because you didn’t try hard enough. I really wish I could go back in time and tell that to my younger self after I completely failed yet another spelling test, even though I’d copied the words a hundred times each.

But the genetic underpinning of dyslexia might also seem like a bad thing. After all, if dyslexia is genetic, does that mean that children with reading difficulties will just never get over them? Not at all! I don’t have space here to talk about the sort of interventions and treatments that can help dyslexics. (Perhaps I’ll write a future post on the subject.) Suffice it to say, the dyslexic brain can learn to compensate and adapt over time. Like I said above, I’m currently a very fluent reader. And dyslexia can be a good thing. The same skills that can make learning to read hard can make you very, very good at picking out one odd thing in a large group, or at surveying a large quantity of visual information quickly — even if you only see it out of the corner of your eye. For example, I am freakishly good at finding four-leaf clovers. In high school, I collected thousands of them just while doing chores around the farm. And that’s not the only advantage. I’d recommend The Dyslexic Advantage (it’s written for a non-scientific audience) if you’re interested in learning more about the benefits of dyslexia. The authors point out that dyslexics are very good at making connections between things, and suggest that they enjoy an advantage when reasoning spatially, narratively, about related but unconnected things (like metaphors) or with incomplete or dynamic information.

The current research suggests pretty strongly that dyslexia is something you’re born with. And even though it might make some parts of your school career very difficult, it won’t stop you from thriving. It might even end up helping you later on in life.

Is English the best language for business?

The Economist recently published an article discussing the rise of English as a business lingua franca. This is an issue that I’ve come across quite a bit in my own life as someone who’s lived and traveled quite a bit overseas. And not just in professional settings: as an English speaker I’ve received an education in English in countries where it’s not even an official language, and I have quite a few friends, mainly in Brazil and the Nordic countries, who I only ever talk to in English despite the fact that it’s not their native language. And I’m certainly not alone in this. Ethnologue estimates that there are approximately 335 million native English speakers, but over 430 million non-native speakers. English is emerging as the predominant global language, and many people see an English education as an investment.

[Image: a business meeting in Iida, Nagano Prefecture, June 3, 2010]
I don’t care that we all speak the same language natively. We’re holding this meeting in English and that’s final!

But for me, at least, the more interesting question is why? There are a lot of languages in the world, and, in theory at least, any of them could be enjoying the preference currently shown for English. I’m hardly alone in asking these questions. The Economist article I mentioned above suggests a few:

There are some obvious reasons why multinational companies want a lingua franca. Adopting English makes it easier to recruit global stars (including board members), reach global markets, assemble global production teams and integrate foreign acquisitions. Such steps are especially important to companies in Japan, where the population is shrinking. There are less obvious reasons too. Rakuten’s boss, Hiroshi Mikitani, argues that English promotes free thinking because it is free from the status distinctions which characterise Japanese and other Asian languages. Antonella Mei-Pochtler of the Boston Consulting Group notes that German firms get through their business much faster in English than in laborious German. English can provide a neutral language in a merger: when Germany’s Hoechst and France’s Rhône-Poulenc combined in 1999 to create Aventis, they decided it would be run in English, in part to avoid choosing between their respective languages.

Let’s break this down a little bit. There seem to be two main arguments for using English. One is social: using English makes it easier to collaborate with other companies or company offices in other countries and, if no one is a native English speaker, helps avoid conflict by choosing a working language that doesn’t unfairly benefit one group. The other is linguistic: there is some special quality to English that makes it more suited for business. Let’s look at each of them in turn.

The social arguments seem perfectly valid to me. English education is widely available and, in many countries, a required part of primary and secondary education. There is a staggering quantity of resources available to help master English. Lots of people speak English, with varying degrees of fluency. As a result, there’s a pretty high likelihood that, given a randomly-selected group of people, you’ll be able to get your point across in English. While it might be more fair to use a language that no one speaks natively, like Latin or Esperanto, English has high saturation and an instructional infrastructure already in place. Further, the writing system is significantly more accessible and computer-friendly than Mandarin’s, which actually has more speakers than English. (Around 847 million, in case you were wondering.) All practical arguments for using English as an international business language.

Now let’s turn to the linguistic arguments. These are, sadly, much less reasonable. As I’ve mentioned before, people have a distressing tendency to make testable claims about language without actually testing them. And both of the claims above–that honorifics confine thinking and that English is “faster” than German–have already been investigated.

  • Honorifics do appear to have an effect on cognition, but it seems to be limited to a spatial domain, i.e. higher status honorifics are associated with “up” and lower ones with “down”. Beyond subtle priming, I find it incredibly unlikely that a rich honorific system has any effect on individual cognition. A social structure which is reflected in language use, however, might make people less willing to propose new things or offer criticism. But that’s hardly language-dependent. Which sounds more likely: “Your idea is horrible, sir,” or “Your idea is horrible, you ninny”?
    • TL;DR: It’s not the language, it’s the social structure the language operates in. 
  • While it is true that different languages have different informational density, the rate of informational transmission is actually quite steady. It appears that, as informational density increases, speech rate decreases. As a result, it looks like, regardless of the language, humans tend to convey information at a pretty stable rate. (There’s a toy calculation illustrating this trade-off just after this list.) This finding is cross-modal, too. Even though signs take longer to produce than spoken words, they are more dense and so the rate of information flow in signed and spoken languages seems to be about the same. Which makes sense: there’s a limit to how quickly the human brain can process new information, so it makes sense that we’d produce information at about that rate.
    • TL;DR: All languages convey information at pretty much the same rate. If there’s any difference in the amount of time meetings take, it’s more likely because people are using a language they’re less comfortable in (i.e. English). 
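To see how that trade-off washes out, here’s a toy calculation. The numbers are invented purely for illustration (the published cross-linguistic studies report the real figures): a language that packs more information into each syllable but is spoken more slowly ends up at roughly the same information rate as one that packs in less but is spoken faster.

```python
# Invented numbers, for illustration only: information rate is
# information per syllable multiplied by syllables per second.
languages = {
    "dense but slow":  (8.0, 5.0),   # (bits per syllable, syllables per second)
    "sparse but fast": (5.0, 8.0),
}

for name, (bits_per_syllable, syllables_per_second) in languages.items():
    rate = bits_per_syllable * syllables_per_second
    print(f"{name}: {rate:.0f} bits per second")

# Both come out to 40 bits per second: packing less into each syllable
# is offset by a faster speech rate, and vice versa.
```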

In conclusion, it very well may be the case that English is currently the best language to conduct business in. But that’s because of language-external social factors, not anything inherent about the language itself.

Are television and mass media destroying regional accents?

One of the occupational hazards of linguistics is that you are often presented with spurious claims about language that are relatively easy to quantifiably disprove. I think this is probably partly due to the fact that there are multiple definitions of ‘linguist’. As a result, people tend to equate mastery of a language with explicit knowledge of its workings. Which, on the one hand, is reasonable. If you know French, the idea is that you know how to speak French, but also how it works. And, in general, that isn’t the case. Partly because most language instruction is light on discussions of grammatical structures–reasonably so; I personally find inductive grammar instruction significantly more helpful, though the research is mixed–and partly because, frankly, there’s a lot that even linguists don’t know about how grammar works. Language is incredibly complex, and we’ve only begun to explore and map out that complexity. But there are a few things we are reasonably certain we know. And one of those is that your media consumption does not “erase” your regional dialect [pdf]. The premise is flawed enough that it begins to collapse under its own weight almost immediately. Even the most dedicated American fans of Doctor Who or Downton Abbey or Sherlock don’t slowly develop British accents.

[Image: Christopher Eccleston]
Lots of planets have a North with a distinct accent that is not being destroyed by mass media.

So why is this myth so persistent? I think that the most likely answer is that it is easy to mischaracterize what we see on television and to misinterpret what it means. Standard American English (SAE), what newscasters tend to use, is a dialect. It’s not just a certain set of vowels but an entire, internally consistent grammatical system. (Failing to recognize that dialects are more than just adding a couple of really noticeable sounds or grammatical structures is why some actors fail so badly at trying to portray a dialect they don’t use regularly.) And not only is it a dialect, it’s a very prestigious dialect. Not only do newscasters make use of it, but so do political figures, celebrities, and pretty much anyone who has a lot of social status. From a linguistic perspective, SAE is no better or worse than any other dialect. From a social perspective, however, SAE has more social capital than most other dialects. That means that being able to speak it, and speak it well, can give you opportunities that you might not otherwise have had access to. For example, speakers of Southern American English are often characterized as less intelligent and educated. And those speakers are very aware of that fact, as illustrated in this excerpt from the truly excellent PBS series Do You Speak American?:

ROBERT:

Do you think northern people think southerners are stupid because of the way they talk?

JEFF FOXWORTHY:

Yes I think so and I think Southerners really don’t care that Northern people think that eh. You know I mean some of the, the most intelligent people I’ve ever known talk like I do. In fact I used to do a joke about that, about you know the Southern accent, I said nobody wants to hear their brain surgeon say, ‘Al’ight now what we’re gonna do is, saw the top of your head off, root around in there with a stick and see if we can’t find that dad burn clot.’

So we have pressure from both sides: there are intrinsic social rewards for speaking SAE, and also social consequences for speaking other dialects. There are also plenty of linguistic role-models available through the media, from many different backgrounds, all using SAE. If you consider these facts alone it seems pretty easy to draw the conclusion that regional dialects in America are slowly being replaced by a prestigious, homogeneous dialect.

Except that’s not what’s happening at all. Some regional dialects of American English are actually becoming more, rather than less, prominent. On the surface, this seems completely contradictory. So what’s driving this process, since it seems to be contradicting general societal pressure? The answer is that there are two sorts of pressure. One, the pressure from media, is to adopt the formal, standard style. The other, the pressure from family, friends and peers, is to retain and use features that mark you as part of your social network. Giles, Taylor and Bourhis showed that identification with a certain social group–in their case Welsh identity–encourages and exaggerates Welsh features. And being exposed to a standard dialect that is presented as being in opposition to a local dialect will actually increase that effect. Social identity is constructed through opposition to other social groups. To draw an example from American politics, many Democrats define themselves as “not Republicans” and as in opposition to various facets of “Republican-ness”. And vice versa.

Now, the really interesting thing is this: television can have an effect on speakers’ dialectal features. But that effect tends to be away from, rather than towards, the standard. For example, some Glaswegian English speakers have begun to adopt features of Cockney English based on their personal affiliation with the show EastEnders. In light of what I discussed above, this makes sense. Those speakers who had adopted the features are of a similar social and socio-economic status as the characters in EastEnders. Furthermore, their social networks value the characters who are shown using those features, even though they are not standard. (British English places a much higher value on certain sounds and sound systems as standard. In America, even speakers with very different sound systems, e.g. Bill Clinton and George W. Bush, can still be considered standard.) Again, we see retention and re-invigoration of features that are not standard through a construction of opposition. In other words, people choose how they want to sound based on who they want to be seen as. And while, for some people, this means moving towards using more SAE, in others it means moving away from the standard.

One final note: Another factor which I think contributes to the idea that television is destroying accents is the odd idea that we all only have one dialect, and that it’s possible to “lose” it. This is patently untrue. Many people (myself included) have command of more than one dialect and can switch between them when it’s socially appropriate, or blend features from them for a particular rhetorical effect. And that includes people who generally use SAE. Oprah, for example, will often incorporate more features of African American English when speaking to an African American guest. The bottom line is that television and mass media can be a force for linguistic change, but they’re hardly the great homogenizer they’re often claimed to be.

For other things I’ve written about accents and dialects, I’d recommend:

  1. Why do people have accents?
  2. Ask vs. Aks
  3. Coke vs. Soda vs. Pop

Why does loud music hurt your hearing?

There was recently an excellent article by the BBC about the results of a survey put out by the non-profit organization Action on Hearing Loss. The survey showed that most British adults are taking dangerous risks with their hearing: almost a third played music above the recommended volume and a full two-thirds left noisy venues with ringing in their ears. It may seem harmless to enjoy noisy concerts, but it can and does irrecoverably damage your hearing. But how, exactly, does it do that? And how loud can you listen to music without being at risk?

[Image: loudspeakers]
Turn it up! Just… not too loud, ok?

Let’s start with the second question. You’re at risk of hearing loss if you subject yourself to sounds above 85 decibels. For reference, that’s about as loud as a food processor or blender, and most music players will warn you if you try to play music much louder than that. They will, however, sometimes play music up to 110 dB, which is roughly the equivalent of someone starting a chainsaw in your ear and verges on painful. And that is absolutely loud enough to damage your hearing.
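Because decibels are logarithmic, the jump from 85 to 110 dB is much bigger than the numbers suggest. Here’s the back-of-the-envelope arithmetic in Python; the 8-hours-at-85-dB budget and the 3 dB halving rule are commonly cited occupational guidelines that I’m adding for context, not figures from the survey.

```python
# Decibels are 10 * log10 of an intensity ratio, so every extra 10 dB
# means ten times the sound intensity.
safe_level_db = 85        # the level above which prolonged exposure gets risky
player_max_db = 110       # the loudest some portable players will go

intensity_ratio = 10 ** ((player_max_db - safe_level_db) / 10)
print(f"110 dB carries about {intensity_ratio:.0f} times the intensity of 85 dB")

# Rule-of-thumb exposure budget (an assumption for illustration):
# roughly 8 hours at 85 dB, halved for every additional 3 dB.
minutes = 8 * 60 / 2 ** ((player_max_db - safe_level_db) / 3)
print(f"at 110 dB that budget shrinks to roughly {minutes:.1f} minutes")
```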

Hearing damage is permanent and progressive. Inside your inner ear are tiny, hair-shaped cells. These sway back and forth as the fluid of your inner ear is compressed by sound-waves. It’s a bit like seaweed being pulled back and forth by waves, but on a smaller scale and much, much faster. As these hair cells brush back and forth they create and transmit electrical impulses that are sent to the brain and interpreted as sound. The most delicate part of this process is those tiny hair cells. They’re very sensitive. Which is good, because it means that we’re able to detect noise well, but also bad, because they’re very easy to damage. In fish and birds, that damage can heal over time. In mammals, it cannot. Once your hair cells are damaged, they can no longer transmit sound and you lose that part of the signal permanently. There’s a certain amount of unavoidable wear and tear on the system. Even if you do avoid loud noises, you’re still slowly losing parts of your hearing as you and your hair cells age, especially in the upper frequencies. But loud music will accelerate that process drastically. Listen to loud enough music for long enough and it will slowly eat away at your ability to hear it.

But that doesn’t necessarily mean you should avoid loud environments altogether. As with all things, moderation is key. One noisy concert won’t leave you hard of hearing. (In fact, your body has limited defence mechanisms for sustained loud noises, including temporarily shifting the tiny bones of your middle ear so that less acoustic energy is transmitted.) The best things you can do for your ears are to avoid exposure to very loud sounds and, if you have to be in a noisy environment, wear protection. It’s also possible that magnesium supplements might help to reduce the damage to the auditory system, but when it comes to protecting your hearing, the best treatment is prevention.

Is there more than one sign language?

Recently, I’ve begun to explore a new research direction: signed languages. The University of Washington has an excellent American Sign Language program, including a minor, and I’ve been learning to sign and reading about linguistic research in ASL concurrently. I have to say, it’s probably my favourite language that I’ve studied so far. However, I’ve encountered two very prevalent misconceptions about signed languages when I chat with people about my studies, which are these:

  • Sign languages are basically iconic; you can figure out what’s being said without learning the language.
  • All sign languages are pretty much the same.

On the one hand, I can understand where these misconceptions spring from. On the other, they are absolutely false and I think it’s important that they’re cleared up.

[Image: President Obama speaking at the Nelson Mandela memorial, interpreter at his side]
You don’t want to end up like this guy. (No, no, not President Obama, fraudulent “sign language interpreter” Thamsanqa Jantjie.)

First of all, it’s important to distinguish between a visual language and gesture. A language, regardless of its modality (i.e. how it’s transmitted, whether it’s through compressed and rarefied particles or by light bouncing off of things) is arbitrary, abstract and has an internal grammar. Gesture, on the other hand, can be thought of as the visual equivalent of non-linguistic vocalizations. Gestures and pantomime are more like squeals, screams, shrieks and raspberries than they are like telling a joke or giving someone directions. They don’t have a grammatical structure, they’re not arbitrary and you can, in fact, figure out what they mean without a whole lot of background information. That’s kind of the point, after all. And, yes, Deaf individuals will often use gesture, especially when communicating with non-signers. But this is distinct from signed language. Signed languages have all the same complexities and nuance of spoken languages, and you can no more understand them without training than you could wake up one morning suddenly speaking Kapampangan. Try to see how much of this ASL vlog entry you understand! (Subtitles not included.)

But that just shows that signed languages are real languages that you really need to learn. (I’m looking at you, Mr. Jantjie.) What about the mutual intelligibility thing? Well, since we’ve already seen that signed languages are not iconic, this myth seems somewhat silly now. We might expect there to be some sort of pan-Deaf signing if there were constant and routine contact between Deaf communities in different countries. And, in fact, there is! Events such as meetings of the World Federation of the Deaf have fostered the creation of International Sign Language. It is, however, a constructed language that is rarely used outside of international gatherings. Instead, most signers tend to use the signed language that is most popular in their home country, and these are vastly different.

For an example of the differences between sign languages, let’s look at the alphabet in two signed languages: American Sign Language and British Sign Language. These are both relatively mature sign languages that exist as substrate languages in predominately English-speaking communities. (Substrate just means that it’s not the most socially-valued and widely-used language in a given community. In America, any language that’s not English is pretty much a substrate language, despite the fact that we don’t have a government-mandated official language.) So, if sign languages really are universal, we’d expect that these two communities would use basically the same signs. Instead, the two languages are completely unintelligible, as you can see below. (I picked these videos because for the first ten or so signs their pacing is close enough that you can play them simultaneously. You’re welcome. 🙂 )

The alphabet in British Sign Language:


The alphabet in American Sign Language:

As you can see, they’re completely different. Signed languages are a fascinating area of study, and a source of great pride and cultural richness in their respective Deaf communities. I highly recommend that you learn more about visual languages. Here are a couple resources to get you started.

The Science of Speaking in Tongues

So I was recently talking with one of my friends, and she asked me what linguists know about speaking in tongues (or glossolalia, which is the fancy linguistical term for it). It’s not a super well-studied phenomenon, but there has been enough research done that we’ve reached some pretty confident conclusions, which I’ll outline below.

[Image]
More like speaking around tongues, in this guy’s case.

  • People don’t tend to use sounds that aren’t in their native language. (citation) So if you’re an English speaker, you’re not going to bust out some Norwegian vowels. This rather lets the air out of the theory that individuals engaged in glossolalia are actually speaking another language. It is more like playing alphabet soup with the sounds you already know. (Although not always all the sounds you know. My instinct is that glossolalia is made up predominately of the sounds that are the most common in the person’s language.)
  • It lacks the structure of language. (citation) So one of the core ideas of linguistics, which has been supported again and again by hundreds of years of inquiry, is that there are systems and patterns underlying language use: sentences are usually constructed of some sort of verb-like thing and some sort of noun-like thing or things, and it’s usually something on the verb that tells you when and it’s usually something on the noun that tells you things like who possessed what. But these patterns don’t appear in glossolalia. Plus, of course, there’s not really any meaningful content being transmitted. (In fact, the “language” being unintelligible to others present is one of the markers that’s often used to identify glossolalia.) It may sort of smell like a duck, but it doesn’t have any feathers, won’t quack and when we tried to put it in water it just sort of dissolved, so we’ve come to the conclusion that it is not, in fact, a duck.
  • It’s associated with a dissociative psychological state. (citation) Basically, this means that speakers are aware of what they’re doing, but don’t really feel like they’re the ones doing it. In glossolalia, the state seems to come and then pass on, leaving speakers relatively psychologically unaffected. Dissociation can be problematic, though; if it’s particularly extreme and long-term it can be characterized as multiple personality disorder.
  • It’s a learned behaviour. (citation) Basically, you only see glossolalia in cultures where it’s culturally expected and only in situations where it’s culturally appropriate. In fact, during her fieldwork, Dr. Goodman (see the citation) actually observed new initiates into a religious group being explicitly instructed in how to enter a dissociative state and engage in glossolalia.

So glossolalia may seem language-like, but from a linguistic standpoint it doesn’t actually seem to be language. (Which is probably why there hasn’t been that much research done on it.) It’s vocalization that arises as the result of a learned psychological state and that lacks linguistic systematicity.

Feeling Sound

We’re all familiar with the sensation of sound so loud we can actually feel it: the roar of a jet engine, the palpable vibrations of a loud concert, a thunderclap so close it shakes the windows. It may surprise you to learn, however, that that’s not the only way in which we “feel” sounds. In fact, recent research suggests that tactile information might be just as important as sound in some cases!

[Image: “Touch Gently”]
What was that? I couldn’t hear you, you were touching too gently.

I’ve already talked about how we can see sounds, and the role that sound plays in speech perception, before. But just how much overlap is there between our sense of touch and hearing? There is actually pretty strong evidence that what we feel can override what we’re hearing. Yau et al. (2009), for example, found that tactile expressions of frequency could override auditory cues. In other words, you might hear two identical tones as different if you’re holding something that is vibrating faster or slower. If our vision system had a similar interplay, we might think that a person was heavier if we looked at them while holding a bowling ball, and lighter if we looked at them while holding a volleyball.

And your sense of touch can override your ears (not that they were that reliable to begin with…) when it comes to speech as well. Gick and Derrick (2013) have found that tactile information can override auditory input for speech sounds. You can be tricked into thinking that you heard a “peach” rather than “beach”, for example, if you’re played the word “beach” and a puff of air is blown over your skin just as you hear the “b” sound. This is because when an English speaker says “peach”, they aspirate the “p”, or say it with a little puff of air. That isn’t there when they say the “b” in “beach”, so you hear the wrong word.

Which is all very cool, but why might this be useful to us as language-users? Well, it suggests that we use a variety of cues when we’re listening to speech. Cues act as little road-signs that point us towards the right interpretation. By having access to lots of different cues, we ensure that our perception is more robust. Even when we lose some cues–say, a bear is roaring in the distance and masking some of the auditory information–we can use the others to figure out that our friend is telling us that there’s a bear. In other words, even if some of the road-signs are removed, you can still get where you’re going. Language is about communication, after all, and it really shouldn’t be surprising that we use every means at our disposal to make sure that communication happens.

Why are some words untranslatable?

A question that has been posed to me with some frequency recently is why some things are untranslatable. A good example of this is posts such as this one, which have the supposedly untranslatable word alongside—ironically—its translation. But I think that if we ask why certain words can’t be translated, we’re actually asking the wrong question. The right question is: why do we think anything at all can be translated?

[Image: Tsunajima Kamekichi, “Fashionable melange of English words”, 1887]

Why is it that we shy away from trying to translate dépaysement but feel quite strongly that a pomme is the same thing as an apple? While a French speaker and an English speaker would probably use those respective words to ask for the same piece of fruit from a fruit bowl, the phrase “the apple of my eye” is better translated into French as “prunelle de mes yeux”. And if you asked for the prunelle from a fruit bowl, you’d be given something an English speaker would call a plum. So while we think of these two words as the same, on some level, it cannot be denied that they play different roles in their respective languages. No one claims either “apple” or “pomme” are untranslatable, though.

Well, let’s talk a little about what translation is. In linguistics, the standard when discussing languages that the reader is not familiar with (and, since descriptive linguists often work with languages that have a few dozen speakers, this is not uncommon) is to use three lines. The first is in the original language (usually in the International Phonetic Alphabet), the second line is a morpheme-by-morpheme translation and the third is a ‘sense translation’, which is how an English speaker might have expressed the same thought. (Morphemes, you may be aware, are the smallest units of language that carry meaning. So the single word “dogs” has two morphemes: “dog”, which has the meaning of canis familiaris, and “-s”, which tells us that there’s more than one.)
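To make that concrete, here’s what the three-line convention looks like for a simple French sentence. It’s my own made-up example, written in ordinary spelling rather than IPA; the little Python snippet just stores the three tiers and prints them lined up.

```python
# A minimal sketch of three-line interlinear glossing, using a simple
# French sentence as the example (ordinary spelling rather than IPA).
tiers = [
    ["le-s",   "chien-s", "dorm-ent"],   # line 1: the original, split into morphemes
    ["the-PL", "dog-PL",  "sleep-3PL"],  # line 2: morpheme-by-morpheme gloss
]
sense = "'The dogs are sleeping.'"       # line 3: the sense translation

for tier in tiers:
    print(" ".join(morpheme.ljust(10) for morpheme in tier))
print(sense)
```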

We tend to idealize translation as a word-for-word correspondence. But even that’s a bit of a simplification, for you’ll often hear people referring to the “literal” translation of something like an idiom, while the “actual” translation is something that maintains the sense but not the wording. The idea that it’s the sense that translations should capture, and not the exact wording, can sometimes be taken to extremes. Consider FitzGerald’s “translation” of The Rubáiyát of Omar Khayyám, which in places diverges wildly from the source material. Is it true translation, in a morpheme-by-morpheme sense? No. But we still accept it as essentially the same material in two different languages.

And if we accept that on the level of the poem, then I feel like we also have to accept it on the level of the word. It may take more or fewer words to express the same idea in different languages, but if we believe that we’re capable of sharing thoughts between people (Richard Wright once called language “a very inefficient means of telepathy”) then shuttling them between languages, no matter how difficult the transition, should also be possible. If anything is translatable, then everything has to be.

Of course, accounting for and explaining the cultural baggage associated with a certain term or replicating levels of meaning below the morpheme may pose a greater challenge. But that’s a post for another day.

Why do people have accents?

Since I’m teaching Language and Society this quarter, this is a question that I anticipate coming up early and often. Accents–or dialects, though the terms do differ slightly–are one of those things in linguistics that are effortlessly fascinating. We all have experience with people who speak our language differently than we do. You can probably even come up with descriptors for some of these differences. Maybe you feel that New Yorkers speak nasally, or that Southerners have a drawl, or that there’s a certain Western twang. But how did these differences come about, and how are they perpetuated?

[Image: a row of Hyundai Accents]
Clearly people have Accents because they’re looking for a nice little sub-compact commuter car.

First, two myths I’d like to dispel.

  1. Only some people have an accent or speak a dialect. This is completely false with a side of flat-out wrong. Every single person who speaks or signs a language does so with an accent. We sometimes think of newscasters, for example, as “accent-less”. They do have certain systematic variation in their speech, however, that they share with other speakers who share their social grouping… and that’s an accent. The difference is that it’s one that tends to be seen as “proper” or “correct”, which leads nicely into myth number two:
  2. Some accents are better than others. This one is a little more tricky. As someone who has a Southern-influenced accent, I’m well aware that linguistic prejudice exists. Some accents (such as British “Received Pronunciation”) are certainly more prestigious than others (oh, say, the American South). However, this has absolutely no basis in the language variation itself. No dialect is more or less “logical” than any other, and geographical variation of factors such as speech rate has no correlation with intelligence. Bottom line: the differing perception of various accents is due to social, and not linguistic, factors.

Now that that’s done with, let’s turn to how we get accents in the first place. To begin with, we can think of an accent as a collection of linguistic features that a group of people share. By themselves, these features aren’t necessarily immediately noticeable, but when you treat them as a group of factors that co-varies it suddenly becomes clearer that you’re dealing with separate varieties. Which is great and all, but let’s pull out an example to make it a little clearer what I mean.

Imagine that you have two villages. They’re relatively close and share a lot of commerce and have a high degree of intermarriage. This means that they talk to each other a lot. As a new linguistic change begins to surface (which, as languages are constantly in flux, is inevitable) it spreads through both villages. Let’s say that they slowly lose the ‘r’ sound. If you asked a person from the first village whether a person from the second village had an accent, they’d probably say no at that point, since they have all of the same linguistic features.

But what if, just before they lost the ‘r’ sound, an impassable chasm split the two villages? Now, the change that starts in the first village has no way to spread to the second village since they no longer speak to each other. And, since new linguistic forms pretty much come into being randomly (which is why it’s really hard to predict what a language will sound like in three hundred years) it’s very unlikely that the same variant will come into being in the second village. Repeat that with a whole bunch of new linguistic forms and if, after a bridge is finally built across the chasm, you ask a person from the first village whether a person from the second village has an accent, they’ll probably say yes. They might even come up with a list of things they say differently: we say this and they say that. If they were very perceptive, they might even give you a list with two columns: one column the way something’s said in their village and the other the way it’s said in the second village.
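Here’s a toy simulation of that thought experiment in Python, just to show how quickly independent, random innovations pile up once two communities stop talking to each other. Every number in it is made up; it’s an illustration of the logic above, not a model of any real sound change.

```python
import random

# Two villages innovate new linguistic features at random. While they're
# in contact, each innovation spreads to the neighbouring village; after
# the chasm opens, it stays put and the two varieties drift apart.
random.seed(0)
villages = {"first": set(), "second": set()}

for generation in range(40):
    in_contact = generation < 20                  # the chasm opens halfway through
    for name in villages:
        if random.random() < 0.5:                 # a new form arises by chance
            innovation = (name, generation)
            villages[name].add(innovation)
            if in_contact:                        # spreads through trade and marriage
                other = "second" if name == "first" else "first"
                villages[other].add(innovation)

shared = villages["first"] & villages["second"]
separate = villages["first"] ^ villages["second"]
print(f"features both villages share: {len(shared)}")
print(f"features only one village has: {len(separate)}")
```

Every shared feature dates from before the split; everything innovated afterwards belongs to one village only, and that growing pile of differences is what the villagers would eventually describe as the other village’s “accent”.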

But now that they’ve been reunited, why won’t the accents just disappear as they talk to each other again? Well, it depends, but probably not. Since they were separated, the villages would have started to develop their own independent identities. Maybe the first village begins to breed exceptionally good pigs while squash farming is all the rage in the second village. And language becomes tied to that identity. “Oh, I wouldn’t say it that way,” people from the first village might say, “people will think I raise squash.” And since the differences in language are tied to social identity, they’ll probably persist.

Obviously this is a pretty simplified example, but the same processes are constantly at work around us, at both a large and small scale. If you keep an eye out for them, you might even notice them in action.