How to Take Care of your Voice

Inflammation, polyps and nodules, oh my! Learn about some common problems that can affect your voice and how to avoid them, all in a shiny new audio format. For more tips about caring for your vocal folds and more information about rarer problems like tumours or paralysis, check out this page or this page.


Why can’t dogs choke?

And, a related question, why can they bark but not speak? The answer has to do with one of the things that makes us both human and also significantly more vulnerable, right up there with brains so big they make birth potentially fatal. (Hardly a triumph of effective design.) You see, the human larynx, though in many ways very similar to that of other mammals, has a few key differences. It’s much further down in the neck and it’s pulled the tongue down with it. As a result, we have the unique and rather stupid ability to choke to death on our own food. Of course, an anatomical handicap of that magnitude must have been compensated for by something else, otherwise we wouldn’t be here. And the pay-off in this case was pretty awesome: speech.

[Image: Alfred Dedreux, “Pug Dog in an Armchair”]

All the comforts of agriculture, air conditioning and medicine and he can still breathe and swallow at the same time, the smug pup.

But what does all this have to do with dogs? Well, dogs do have a larynx that looks very much like a human’s. In fact, they make sound in very similar ways: by forcing air through adducted vocal folds. But dogs have very short vocal folds, and they’re scrunched up right above the root of the tongue. This has two main effects:

  1. There’s a very limited number of possible tongue positions available to dogs during phonation. This means that dogs aren’t able to modulate air with the same degree of fine control that we humans can. (Which is why Scooby-Doo sounds like he really needs some elocution lessons.) That fine control is what gives us the capacity to make so many different speech sounds.
  2. Dogs have their soft palate touching their epiglottis when they’re at rest. The soft palate is the spongy bit of tissue at the back of your mouth that separates your nasal and oral cavities. The epiglottis is a little piece of tongue-shaped or leaf-shaped cartilage in your throat that flips down to neatly cover the entrance to your airway when you swallow, steering food into the esophagus. If they touch, then you’ve got a complete separation between your food-tube and your air-tube and choking becomes a non-issue.

Some humans have that same whole palate-epiglottis-kissing thing going on: very young babies. You can see what I’m talking about here. That, along with how proportionally huge the tongue is, is probably why babies can acquire sign quite a bit earlier than they can start speaking; their vocal instruments just aren’t fully developed yet. The upside of this is that babies also don’t have to worry about choking to death.

But once the larynx drops, breathing and swallowing require a bit more coordination. For one thing, while you’re swallowing, breathing is suppressed in the brainstem, so that even if you’re unconscious you don’t try to breathe in your own saliva. We also have a very specific pattern of breathing while we’re eating. Try paying attention next time you sit down to a meal: while you’re eating or drinking you tend to stick to a pattern of exhale — swallow — exhale. That way you avoid incoming air trying to carry little bits of food or water into your lungs. (Aspiration pneumonia ain’t no joke.)

So your dog doesn’t choke for the same reason it can’t strike up a conversation with you: its larynx is too high. Who knows? Maybe in a few hundred years, and with a bit of clever genetic engineering, dogs will be talking, and choking, along with us.

That doesn’t mean that something large or oddly-shaped can’t get stuck in the esophagus, though.

Why do people talk in their sleep?

Sleep-talking (or “somniloquy”, as us fancy-pants scientist people call it) is a phenomenon in which a sleeping person starts talking. Take, for example, the internet sensation The Sleep Talkin Man. Sleep talking can range from grunts or moans to relatively clear speech. While most people know what sleep talking is (there was even a hit song about it that’s older than I am), fewer people know what causes it.

[Image: A. Cortina, “El sueño”]

Sure, she looks all peaceful, but you should hear her go on.

To explain what happens when someone’s talking in their sleep, we first need to talk about 1) what happens during sleep and 2) what happens when we talk normally.

  • Sleeping normally: One of the weirder things about sleep talking is that it happens at all. When you’re sleeping normally, your muscles undergo atonia during the stage of sleep called Rapid Eye Movement, or REM, sleep. Basically, your muscles release and go into a state of relaxation or paralysis. If you’ve ever woken suddenly and been unable to move, it’s because your body is still in that state. This serves an important purpose: when we dream we can rehearse movements without actually moving around and hurting ourselves. Of course, the system isn’t perfect. When your muscles fail to “turn off” while you dream, you can end up acting out your dream or sleepwalking. This is particularly problematic for people with narcolepsy.
  • Speaking while awake: So speech is an incredibly complex process. Between a tenth and a third of a second before you begin to speak, brain activation starts in the insula. This is where you plan the movements you’ll need to successfully speak. These come in three main stages, which I like to call breathing, vibrating and tonguing. All speech comes from breath, so you need to inhale in preparation for speaking. Normal exhalation won’t work for speaking, though–it’s too fast–so you switch on your intercostal muscles, in the walls of your ribcage, to help your lungs empty more slowly. Next, you need to tighten your vocal folds as you force air through them. This makes them vibrate (like so) and gives you the actual sound of your voice. By putting different amounts of pressure on your vocal folds you can change your pitch or the quality of your voice. Finally, your mouth needs to shape the buzzing sound your vocal folds make into the specific speech sounds you need. You might flick your tongue, bring your teeth to your lips, or open your soft palate so that air goes through your nose instead of your mouth. And voila! You’re speaking.

Ok, so, it seems like sleep talking shouldn’t really happen, then. When you’re asleep your muscles are all turned off and they certainly don’t seem up to the multi-stage process that is speech production. Besides, there’s no need for us to be making speech movements anyway, right? Wrong. You actually use your speech planning processes even when you’re not planning to speak aloud. I’ve already talked about the motor theory of speech perception, which suggests that we use our speech planning mechanisms to understand speech. And it’s not just speech perception. When reading silently, we still plan out the speech movements we’d make if we were to read out loud (though the effect is smaller with more fluent readers). So you sometimes do all the planning work even if you’re not going to say anything… and one of the times you do that is when you’re asleep. Usually, your muscles are all turned off while you sleep. But sometimes, especially in young children or people with PTSD, that system doesn’t work as well as it should. And if it happens to fail while you’re dreaming that you’re talking, and therefore planning out your speech movements? You start sleep talking.

Of course, all of this means that some of the things we’ve all heard about sleep talking are actually myths. Admissions of guilt while asleep, for example, aren’t reliable and aren’t admissible in court. (Unless, of course, you really did put that purple beaver in the banana pudding.) Sleep talking is also very common; about 50% of children talk in their sleep. Unless it’s causing problems–like waking the people you share a bed with–sleep talking isn’t generally anything to worry about. But you can help reduce its severity by getting enough sleep (which is probably a good goal anyway) and avoiding alcohol and drugs.

Which are better, earphones or headphones?

As a phonetician, it’s part of my job to listen to sounds very closely. Plus, I like to listen to music while I work, enjoy listening to radio dramas and use a headset to chat with my guildies while I’m gaming.  As a result, I spend a lot of time with things on/in my ears. And, because of my background, I’m also fairly well informed about the acoustic properties of  earphones and headphones and how they interact with anatomy. All of which helps me answer the question: which is better? Or, more accurately, what are some of the pros and cons of each? There are a number of factors to consider, including frequency response, noise isolation, noise cancellation and comfort/fit. Before I get into specifics, however, I want to make sure we’re on the same page when we talk about “headphones” and “earphones”.

Earphones: For the purposes of this article, I’m going to use the term “earphone” to refer to devices that are meant to be worn inside the pinna (that’s the fancy term for the part of the ear you can actually see). These are also referred to as “earbuds”, “buds”, “in-ears”, “canalphones”, “in-ear monitors”, “IEMs” and “in-ear headphones”. You can see an example of what I’m calling “earphones” below.

[Image: iPod Touch 2G earbuds with remote and mic]

Ooo, so white and shiny and painful.

Headphones: I’m using this term to refer to devices that are not meant to rest in the pinna, whether they go around or on top of the ear. These are also called “earphones”, (apparently) “earspeakers” or, my favorites, “cans”. You can see somewhat antiquated examples of what I’m calling “headphones” below.

[Image: Club holds radio dance wearing earphones, 1920]

I mean, sure, it’s a wonder of modern technology and all, but the fidelity is just so low.

Alright, now that we’ve cleared that up, let’s get down to brass tacks. (Or, you might say… bass tacks.)

  1. Frequency response curve: How much distortion do they introduce? In an ideal world, ‘phones should respond equally well to all frequencies (or pitches), without transmitting one frequency range more loudly than another. This desirable feature is commonly referred to as a “flat” frequency response. That means that the signal you’re getting out is pretty much the same one that was fed in, at all frequency ranges.
    1. Earphones: In general, earphones tend to have a less flat frequency response.
    2. Headphones: In general, headphones tend to have a flatter frequency response.
    3. Winner: Headphones are probably the better choice if you’re really worried about distortion. You should read the specifications of the device you’re interested in, however, since there’s a large amount of variability.
  2. Frequency response: What is their pitch range? This term is sometimes used to refer to the frequency response curve I talked about above and sometimes used to refer to pitch range. I know, I know, it’s confusing. Pitch range is usually expressed as the lowest sound the ‘phones can transmit followed by the highest. Most devices on the market today can pretty much play anything between 20 Hz and 20 kHz. (You can see what that sounds like here. Notice how it sounds loudest around 300 Hz? That’s an artifact of your hearing, not the video. Humans are really good at hearing sounds around 300 Hz, which [not coincidentally] is about where the human voice hangs out.)
    1. Earphones: Earphones tend to have a smaller pitch range than headphones. Of course, there are always exceptions.
    2. Headphones: Headphones tend to have a wider frequency range than earphones.
    3. Winner: In general, headphones have a wider frequency range. That said, it’s not really that big of a deal. Because of the way your hearing system works, you can’t really hear very high or very low sounds that well regardless of how well your ‘phones are delivering the signal. Anything that plays sounds between 20 Hz and 20,000 Hz should do you just fine.
  3. Noise isolation: How well do they isolate you from sounds other than the ones you’re trying to listen to? More noise isolation is generally better, unless there’s some reason you need to be able to hear environmental sounds as well as whatever you’re listening to. Better isolation also means you’re less likely to bother other people with your music.
    1. Earphones:  A properly fitted pair of in-ear earphones will give you the best noise isolation. It makes sense; if you’re wearing them properly they should actually form a complete seal with your ear canal. No sound in, no sound out, excellent isolation.
    2. Headphones: Even really good over-ear headphones won’t form a complete seal around your ear. (Well, ok, maybe if you’re completely bald and you make some creative use of adhesives, but you know what I mean.) As a result, you’re going to get some noise leakage.
    3. Winner: You’ll get the best noise isolation from well-fitting earphones that sit in the ear canal.
  4. Noise cancellation: How well can they correct for atmospheric sounds? So noise cancellation is actually completely different from noise isolation. Noise isolation is something that all ‘phones have. Noise-cancelling ‘phones, on the other hand, actually do some additional signal processing before you get the sound. They “listen” for atmospheric sounds, like an air-conditioner or a car engine. Then they take that waveform, reproduce it and invert it. When they play the inverted waveform along with your music, it cancels out the offending sound. Which is awesome and space-agey, but isn’t perfect. They only really work with steady background noises. If someone drops a book, they won’t be able to cancel that sudden, sharp noise. They also tend not to work as well with really high-pitched noises. (There’s a quick sketch of the inversion trick just after this list.)
    1. Earphones: Noise-cancelling earphones tend not to be as effective as noise-cancelling headphones until you get to the high end of the market (think $200 plus).
    2. Headphones: Headphones tend to be slightly better at noise-cancellation than earphones of a similar quality, in my experience. This is partly due to the fact that there’s just more room for electronics in headphones.
    3. Winner: Headphones usually have a slight edge here. Of course, really expensive noise-cancelling devices, whether headphones or earphones, usually perform better than their bargain cousins.
  5. Comfort/fit: Are they comfy?
    1. Earphones: So this is where earphones tend to suffer. There is quite a bit of variation in the shape of the cavum conchae, which is the little bowl shape just outside your ear canal. Earphone manufacturers have to have somewhere to put their magnets and drivers and driver support equipment, and it usually ends up in the “head” of the earphone, nestled right in your cavum conchae. Which is awesome if it’s a shape that fits your ear. If it’s not, though, it can quickly start to become irritating and eventually downright painful. Personally, this is the main reason I prefer over-ear headphones.
    2. Headphones: A nicely fitted pair of over-ear headphones that covers your whole ear is just incredibly comfortable. Plus, they keep your ears warm! I find on-ear headphones less comfortable in general, but a nice cushy pair can still feel awesome. There are other factors to take into account, though; wearing headphones and glasses with a thick frame can get really uncomfortable really fast.
    3. Winner: While this is clearly a matter of personal preference, I have a strong preference for headphones on this count.
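
Curious what that waveform-inversion trick actually looks like? Here’s a minimal sketch in Python, with made-up example signals (not any manufacturer’s actual algorithm): a steady hum summed with its inverted copy cancels almost perfectly, while a sudden click sails right through, because the anti-noise can’t anticipate it.

```python
import numpy as np

# A minimal sketch of the idea behind noise cancellation, not a real product's
# algorithm: record the steady background noise, invert it, and add it back in.
sample_rate = 44100
t = np.arange(0, 1.0, 1 / sample_rate)        # one second of audio

hum = 0.5 * np.sin(2 * np.pi * 120 * t)       # steady 120 Hz background hum
anti_hum = -hum                               # the inverted "anti-noise" waveform

click = np.zeros_like(t)
click[len(t) // 2] = 1.0                      # a sudden, unpredictable click

environment = hum + click                     # what your ear would otherwise hear
residual = environment + anti_hum             # what's left after adding the anti-noise

print(f"hum before cancellation: {np.max(np.abs(hum)):.2f}")           # 0.50
print(f"residual after cancellation: {np.max(np.abs(residual)):.2f}")  # ~1.00, just the click
```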

So, for me at least, headphones are the clear winner overall. I find them more comfortable, and they tend to reproduce sound better than earphones. There are instances where I find earphones preferable, though. They’re great for travelling or if I really need an isolated signal. When I’m just sitting at my desk working, though, I reach for headphones 99% of the time.

One final caveat: the sound quality you get out of your ‘phones depends most on what files you’re playing. The best headphones in the world can’t do anything about quantization noise (that’s the noise introduced when you convert analog sound-waves to digital ones) or a background hum in the recording.
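
To make “quantization noise” a little more concrete, here’s a rough sketch in Python (the bit depth and tone are made up for illustration): rounding a smooth wave to a handful of discrete amplitude steps leaves behind a small error signal, and that error is baked into the file before your ‘phones ever see it.

```python
import numpy as np

# A rough illustration of quantization noise: take a clean tone and round it
# to a deliberately coarse set of amplitude levels, then look at what's left over.
sample_rate = 8000
t = np.arange(0, 0.01, 1 / sample_rate)
clean = np.sin(2 * np.pi * 440 * t)            # a clean 440 Hz tone

bits = 4                                        # unrealistically coarse, to exaggerate the effect
steps = 2 ** (bits - 1)
quantized = np.round(clean * steps) / steps     # what ends up in the digital file

noise = quantized - clean                       # this residue is the quantization noise
print(f"max quantization error: {np.max(np.abs(noise)):.4f}")   # ~0.06 at 4 bits
# No pair of headphones can subtract this back out; it's part of the recording.
```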

Is dyslexia genetic?

I’m a graduate student in linguistics. I have a degree in English literature. I love reading, writing and books with the fiery passion of a thousand suns. And I am dyslexic. While I was very successful academically in secondary and higher education, let’s just say that primary school was… rough. I’ve failed enough spelling tests that I could wallpaper a small room with them. At this point I’m a fluent reader, mainly because no one’s asking me to read things without context. (For an interesting experimental look at the effects of world-knowledge and context on reading, I’d recommend Paul Kolers’ 1970 article, “Three stages of reading”.) These days, language processing problems tend to be flashes in the pan rather than a constant barrier I’m pushing against. I tend to confuse “cloaca” and “cochlea”, for example, and I feel like I use “etymology” and “entomology” correctly at chance. But still, it would be nice to know that my years of suffering in primary school were due to genetic causes and not because I was “dumber” or “lazier” than other kids. And recent research does seem to support that: it looks like dyslexia probably is genetic.

[Image: Dyslexic words]

They all look good to me.


First off, a couple caveats. Dyslexia is an educational diagnosis. There’s a pretty extensive battery of tests, any of which may be used to diagnose dyslexia. The International Dyslexia Association defines dyslexia thusly:

It is characterized by difficulties with accurate and / or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge.

Which sounds pretty standard, right? But! There are a number of underlying causes that might lead to this. One obvious one is an undiagnosed hearing problem. Someone who only has access to part of the speech signal will probably display all of these symptoms. Or someone in an English-only environment who speaks, say, Kola, as their home language. It’s hard to learn that ‘p’ means /p/ if your language doesn’t have that sound. Of course, educators know that these things affect reading ability. But there are also a number of underlying mental processes that might lead to a diagnosis of dyslexia, which may or may not be related to each other but are all almost certainly genetic. Let’s look at a couple of them.

  • Phonological processing. I’ve talked a little bit about phonology before. Basically, someone with a phonological disorder has a hard time with language sounds specifically. For example, they may have difficulty with rhyming tasks, or with figuring out how many sounds are in a word. And this does seem to have a neurological component. One study shows that, when children with dyslexia were asked to come up with letters that rhymed, they did not show activity in the temporoparietal junction, unlike their non-dyslexic peers. Among other things, the temporoparietal junction plays a role in interpreting sequences of events.
  • Auditory processing. Auditory processing difficulties aren’t necessarily linguistic in nature. Someone who has difficulty processing sounds may be tone deaf, for example, unable to tell whether two notes are the same or different. For dyslexics, this tends to surface as difficulty with sounds that occur very quickly. And there are pretty much no sounds that humans need to process more quickly than speech sounds. A flap, for example, lasts around 20 milliseconds; to put that in perspective, even a fast blink takes about 15 times longer. And it looks like there’s a genetic cause for these auditory processing problems: dyslexic brains have a localized asymmetry in their neurons. They also have more, and smaller, neurons.
  • Sequential processing. For me, this is probably the most interesting. Sequential processing isn’t limited to language. It has to do with doing or perceiving things in the correct order. So, for example, if I gave you all the steps for baking a cake in the wrong order, you’d need to use sequential processing to put them in the correct order. And there’s been some really interesting work done, mainly by Beate Peter at the University of Washington (represent!), that suggests that there is a single genetic cause responsible for a number of rapid sequential processing tasks, and that one of the effects of an abnormality in this gene is dyslexia. But people with this mutation also tend to be bad at, for example, touching each of their fingers to their thumb in order.
  • Being a dude. Ok, this one is a little shakier, but depending on who you listen to, dyslexia is either equally common in men and women, 4 to 5 times more common in men, or 2 to 3 times more common in men. This may be due to structural differences, since it seems that male dyslexics have less gray matter in language processing centers, whereas females have less gray matter in sensing and motor processing areas. Or the difference could be due to the fact that estrogen does very good things for your brain, especially after traumatic injury. I include it here because sex is genetic (duh) and seems to (maybe, kinda, sorta) have an effect on dyslexia.

Long story short, there’s been quite a bit of work done on the genetics of dyslexia and the evidence points to a probably genetic common cause. Which in some ways is really exciting! It means that we can better predict who will have learning difficulties and work to provide them with additional tutoring and help. And it also means that some reading difficulties are due to anatomy and genes. If you’re dyslexic, it’s because you’re wired that way, and not because your parents did or didn’t do something (well, other than contribute your genetic material, obvi) or because you didn’t try hard enough. I really wish I could go back in time and tell that to my younger self after I completely failed yet another spelling test, even though I’d copied the words a hundred times each.

But the genetic underpinning of dyslexia might also seem like a bad thing. After all, if dyslexia is genetic, does that mean that children with reading difficulties will just never get over them? Not at all! I don’t have space here to talk about the sort of interventions and treatments that can help dyslexics. (Perhaps I’ll write a future post on the subject.) Suffice it to say, the dyslexic brain can learn to compensate and adapt over time. Like I said above, I’m currently a very fluent reader. And dyslexia can be a good thing. The same skills that can make learning to read hard can make you very, very good at picking out one odd thing in a large group, or at surveying a large quantity of visual information quickly – even if you only see it out of the corner of your eye. For example, I am freakishly good at finding four-leaf clovers. In high school, I collected thousands of them just while doing chores around the farm. And that’s not the only advantage. I’d recommend The Dyslexic Advantage (it’s written for a non-scientific audience) if you’re interested in learning more about the benefits of dyslexia. The authors point out that dyslexics are very good at making connections between things, and suggest that they enjoy an advantage when reasoning spatially, narratively, about related but unconnected things (like metaphors) or with incomplete or dynamic information.

The current research suggests pretty strongly that dyslexia is something you’re born with. And even though it might make some parts of your school career very difficult, it won’t stop you from thriving. It might even end up helping you later on in life.

Is English the best language for business?

The Economist recently published an article discussing the rise of English as a business lingua franca. This is an issue that I’ve come across quite a bit in my own life as someone who’s lived and traveled quite a bit overseas. And not just in professional settings: as an English speaker I’ve received an education in English in countries where it’s not even an official language, and I have quite a few friends, mainly in Brazil and the Nordic countries, who I only ever talk to in English despite the fact that it’s not their native language. And I’m certainly not alone in this. Ethnologue estimates that there are approximately 335 million native English speakers, but over 430 million non-native speakers. English is emerging as the predominant global language, and many people see an English education as an investment.

[Image: A business meeting in Iida, Nagano Prefecture, June 3, 2010]

I don’t care that we all speak the same language natively. We’re holding this meeting in English and that’s final!

But for me, at least, the more interesting question is why? There are a lot of languages in the world, and, in theory at least, any of them could be enjoying the preference currently shown for English. I’m hardly alone in asking these questions. The Economist article I mentioned above suggests a few:

There are some obvious reasons why multinational companies want a lingua franca. Adopting English makes it easier to recruit global stars (including board members), reach global markets, assemble global production teams and integrate foreign acquisitions. Such steps are especially important to companies in Japan, where the population is shrinking. There are less obvious reasons too. Rakuten’s boss, Hiroshi Mikitani, argues that English promotes free thinking because it is free from the status distinctions which characterise Japanese and other Asian languages. Antonella Mei-Pochtler of the Boston Consulting Group notes that German firms get through their business much faster in English than in laborious German. English can provide a neutral language in a merger: when Germany’s Hoechst and France’s Rhône-Poulenc combined in 1999 to create Aventis, they decided it would be run in English, in part to avoid choosing between their respective languages.

Let’s break this down a little bit. There seem to be two main arguments for using English. One is social: using English makes it easier to collaborate with other companies or company offices in other countries and, if no one is a native English speaker, helps avoid conflict by choosing a working language that doesn’t unfairly benefit one group. The other is linguistic: there is some special quality to English that makes it more suited for business. Let’s look at each of them in turn.

The social arguments seem perfectly valid to me. English education is widely available and, in many countries, a required part of primary and secondary education. There is a staggering quantity of resources available to help master English. Lots of people speak English, with varying degrees of fluency. As a result, there’s a pretty high likelihood that, given a randomly-selected group of people, you’ll be able to get your point across in English. While it might be more fair to use a language that no one speaks natively, like Latin or Esperanto, English has high saturation and an instructional infrastructure already in place. Further, the writing system is significantly more accessible and computer-friendly than Mandarin’s, which actually has more speakers than English. (Around 847 million, in case you were wondering.) These are all practical arguments for using English as an international business language.

Now let’s turn to the linguistic arguments. These are, sadly, much less reasonable. As I’ve mentioned before, people have a distressing tendency to make testable claims about language without actually testing them. And both of the claims above–that honorifics confine thinking and that English is “faster” than German– have already been investigated.

  • Honorifics do appear to have an effect on cognition, but it seems to be limited to a spatial domain, i.e. higher status honorifics are associated with “up” and lower ones with “down”. Beyond subtle priming, I find it incredibly unlikely that a rich honorific system has any effect on individual cognition. A social structure which is reflected in language use, however, might make people less willing to propose new things or offer criticism. But that’s hardly language-dependent. Which sounds more likely: “Your idea is horrible, sir,” or “Your idea is horrible, you ninny”?
    • TL;DR: It’s not the language, it’s the social structure the language operates in. 
  • While it is true that different languages have different informational density, the rate of informational transmission is actually quite steady. It appears that, as informational density increases, speech rate decreases. As a result, it looks like, regardless of the language, humans tend to convey information at a pretty stable rate. This finding is cross-modal, too. Even though signs take longer to produce than spoken words, they are more dense, and so the rate of information flow in signed and spoken languages seems to be about the same. Which makes sense: there’s a limit to how quickly the human brain can process new information, so it makes sense that we’d produce information at about that rate. (There’s a toy example of this arithmetic just after the list.)
    • TL;DR: All languages convey information at pretty much the same rate. If there’s any difference in the amount of time meetings take, it’s more likely because people are using a language they’re less comfortable in (i.e. English). 
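
If it helps to see the arithmetic behind that trade-off, here’s a toy sketch in Python. The numbers are invented purely for illustration (they are not the measured values from the studies mentioned above); the point is just that density and rate can trade off while the product stays roughly constant.

```python
# Toy arithmetic only: these densities and rates are invented for illustration,
# not the measured values from the cross-linguistic speech-rate research.
languages = {
    # name: (information per syllable, syllables per second)
    "dense but slow":  (0.90, 6.0),
    "sparse but fast": (0.60, 9.0),
}

for name, (density, rate) in languages.items():
    # information rate = information per syllable * syllables per second
    print(f"{name}: {density * rate:.2f} units of information per second")
# Both come out at ~5.40 units/second: different routes, similar overall rate.
```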

In conclusion, it very well may be the case that English is currently the best language to conduct business in. But that’s because of language-external social factors, not anything inherent about the language itself.

Are television and mass media destroying regional accents?

One of the occupational hazards of linguistics is that you are often presented with spurious claims about language that are relatively easy to quantifiably disprove. I think this is probably partly due to the fact that there are multiple definitions of ‘linguist’. As a result, people tend to equate mastery of a language with explicit knowledge of its workings. Which, on the one hand, is reasonable. If you know French, the idea is that you know how to speak French, but also how it works. And, in general, that isn’t the case. Partly because most language instruction is light on discussions of grammatical structures–reasonably so; I personally find inductive grammar instruction significantly more helpful, though the research is mixed–and partly because, frankly, there’s a lot that even linguists don’t know about how grammar works. Language is incredibly complex, and we’ve only begun to explore and map out that complexity. But there are a few things we are reasonably certain we know. And one of those is that your media consumption does not “erase” your regional dialect [pdf]. The premise is flawed enough that it begins to collapse under its own weight almost immediately. Even the most dedicated American fans of Doctor Who or Downton Abbey or Sherlock don’t slowly develop British accents.

[Image: Christopher Eccleston, Thor 2 (cropped)]

Lots of planets have a North with a distinct accent that is not being destroyed by mass media.

So why is this myth so persistent? I think that the most likely answer is that it is easy to mischaracterize what we see on television and to misinterpret what it means. Standard American English (SAE), what newscasters tend to use, is a dialect. It’s not just a certain set of vowels but an entire, internally consistent grammatical system. (Failing to recognize that dialects are more than just adding a couple of really noticeable sounds or grammatical structures is why some actors fail so badly at trying to portray a dialect they don’t use regularly.) And not only is it a dialect, it’s a very prestigious dialect. Not only do newscasters make use of it, but so do political figures, celebrities, and pretty much anyone who has a lot of social status. From a linguistic perspective, SAE is no better or worse than any other dialect. From a social perspective, however, SAE has more social capital than most other dialects. That means that being able to speak it, and speak it well, can give you opportunities that you might not otherwise have had access to. For example, speakers of Southern American English are often characterized as less intelligent and educated. And those speakers are very aware of that fact, as illustrated in this excerpt from the truly excellent PBS series Do You Speak American:

ROBERT:

Do you think northern people think southerners are stupid because of the way they talk?

JEFF FOXWORTHY:

Yes I think so and I think Southerners really don’t care that Northern people think that eh. You know I mean some of the, the most intelligent people I’ve ever known talk like I do. In fact I used to do a joke about that, about you know the Southern accent, I said nobody wants to hear their brain surgeon say, ‘Al’ight now what we’re gonna do is, saw the top of your head off, root around in there with a stick and see if we can’t find that dad burn clot.’

So we have pressure from both sides: there are intrinsic social rewards for speaking SAE, and also social consequences for speaking other dialects. There are also plenty of linguistic role-models available through the media, from many different backgrounds, all using SAE. If you consider these facts alone it seems pretty easy to draw the conclusion that regional dialects in America are slowly being replaced by a prestigious, homogeneous dialect.

Except that’s not what’s happening at all. Some regional dialects of American English are actually becoming more, rather than less, prominent. On the surface, this seems completely contradictory. So what’s driving this process, since it seems to be contradicting general societal pressure? The answer is that there are two sorts of pressure. One, the pressure from media, is to adopt the formal, standard style. The other, the pressure from family, friends and peers, is to retain and use features that mark you as part of your social network. Giles, Taylor and Bourhis showed that identification with a certain social group–in their case Welsh identity–encourages and exaggerates Welsh features. And being exposed to a standard dialect that is presented as being in opposition to a local dialect will actually increase that effect. Social identity is constructed through opposition to other social groups. To draw an example from American politics, many Democrats define themselves as “not Republicans” and as in opposition to various facets of “Republican-ness”. And vice versa.

Now, the really interesting thing is this: television can have an effect on speakers’ dialectal features. But that effect tends to be away from, rather than towards, the standard. For example, some Glaswegian English speakers have begun to adopt features of Cockney English based on their personal affiliation with the show EastEnders. In light of what I discussed above, this makes sense. Those speakers who had adopted the features are of a similar social and socio-economic status as the characters in EastEnders. Furthermore, their social networks value the characters who are shown using those features, even though they are not standard. (British English places a much higher value on certain sounds and sound systems as standard. In America, even speakers with very different sound systems, e.g. Bill Clinton and George W. Bush, can still be considered standard.) Again, we see retention and re-invigoration of features that are not standard through a construction of opposition. In other words, people choose how they want to sound based on who they want to be seen as. And while, for some people, this means moving towards using more SAE, for others it means moving away from the standard.

One final note: another factor which I think contributes to the idea that television is destroying accents is the odd idea that we all only have one dialect, and that it’s possible to “lose” it. This is patently untrue. Many people (myself included) have command of more than one dialect and can switch between them when it’s socially appropriate, or blend features from them for a particular rhetorical effect. And that includes people who generally use SAE. Oprah, for example, will often incorporate more features of African American English when speaking to an African American guest. The bottom line is that television and mass media can be a force for linguistic change, but they’re hardly the great homogenizer they’re often claimed to be.

For other things I’ve written about accents and dialects, I’d recommend:

  1. Why do people  have accents? 
  2. Ask vs. Aks
  3. Coke vs. Soda vs. Pop

Why does loud music hurt your hearing?

There was recently an excellent article by the BBC about the results of a survey put out by the non-profit organization Action on Hearing Loss. The survey showed that most British adults are taking dangerous risks with their hearing: almost a third played music above the recommended volume and a full two-thirds left noisy venues with ringing in their ears. It may seem harmless to enjoy noisy concerts, but it can and does irrecoverably damage your hearing. But how, exactly, does it do that? And how loud can you listen to music without being at risk?

[Image: Speakers]

Turn it up! Just… not too loud, ok?

Let’s start with the second question. You’re at risk of hearing loss if you subject yourself to sounds above 85 decibels. For reference, that’s about as loud as a food processor or blender, and most music players will warn you if you try to play music much louder than that. They will, however, sometimes play music up to 110 dB, which is roughly the equivalent of someone starting a chainsaw in your ear and verges on painful. And that is absolutely loud enough to damage your hearing.
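
Since decibels are logarithmic, the jump from 85 dB to 110 dB is much bigger than it looks on paper. Here’s the standard back-of-the-envelope arithmetic as a quick Python sketch, using the usual rule that every 10 dB is a tenfold increase in sound intensity:

```python
def intensity_ratio(db_a: float, db_b: float) -> float:
    """How many times more intense a sound at db_a is than one at db_b."""
    # Decibels are logarithmic: +10 dB means 10x the sound intensity.
    return 10 ** ((db_a - db_b) / 10)

print(round(intensity_ratio(110, 85)))    # ~316: 110 dB carries ~316 times the energy of 85 dB
print(round(intensity_ratio(88, 85), 1))  # ~2.0: even a 3 dB bump roughly doubles the intensity
```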

Hearing damage is permanent and progressive. Inside your inner ear are tiny, hair-shaped cells. These sway back and forth as the fluid of your inner ear is compressed by sound-waves. It’s a bit like seaweed being pulled back and forth by waves, but on a smaller scale and much, much faster. As these hair cells brush back and forth they create and transmit electrical impulses that are sent to the brain and interpreted as sound. The most delicate part of this process is those tiny hair cells. They’re very sensitive. Which is good, because it means that we’re able to detect noise well, but also bad, because they’re very easy to damage. In fish and birds, that damage can heal over time. In mammals, it cannot. Once your hair cells are damaged, they can no longer transmit sound and you lose that part of the signal permanently. There’s a certain amount of unavoidable wear and tear on the system. Even if you do avoid loud noises, you’re still slowly losing parts of your hearing as you and your hair cells age, especially in the upper frequencies. But loud music will accelerate that process drastically. Listen to loud enough music for long enough and it will slowly chip away at your ability to hear it.

But that doesn’t necessarily mean you should avoid loud environments altogether. As with all things, moderation is key. One noisy concert won’t leave you hard of hearing. (In fact, your body has limited defence mechanisms against sustained loud noises, including temporarily shifting the tiny bones of your middle ear so that less acoustic energy is transmitted.) The best things you can do for your ears are to avoid exposure to very loud sounds and, if you have to be in a noisy environment, to wear protection. It’s also possible that magnesium supplements might help to reduce damage to the auditory system, but when it comes to protecting your hearing, the best treatment is prevention.

Is there more than one sign language?

Recently, I’ve begun to explore a new research direction: signed languages. The University of Washington has an excellent American Sign Language program, including a minor, and I’ve been learning to sign and reading about linguistic research in ASL concurrently. I have to say, it’s probably my favourite language that I’ve studied so far. However, I’ve encountered two very prevalent misconceptions about signed languages when I chat with people about my studies, which are these:

  • Sign languages are basically iconic; you can figure out what’s being said without learning the language.
  • All sign languages are pretty much the same.

On the one hand, I can understand where these misconceptions spring from. On the other, they are absolutely false and I think it’s important that they’re cleared up.

[Image: President Obama at the Madiba memorial]

You don’t want to end up like this guy. (No, no, not President Obama, fraudulent “sign language interpreter” Thamsanqa Jantjie.)

First of all, it’s important to distinguish between a visual language and gesture. A language, regardless of its modality (i.e. how it’s transmitted, whether it’s through compressed and rarefied particles or by light bouncing off of things) is arbitrary, abstract and has an internal grammar. Gesture, on the other hand, can be thought of as the visual equivalent of non-linguistic vocalizations. Gestures and pantomime are more like squeals, screams, shrieks and raspberries than they are like telling a joke or giving someone directions. They don’t have a grammatical structure, they’re not arbitrary and you can, in fact, figure out what they mean without a whole lot of background information. That’s kind of the point, after all. And, yes, Deaf individuals will often use gesture, especially when communicating with non-signers. But this is distinct from signed language. Signed languages have all the same complexity and nuance as spoken languages, and you can no more understand them without training than you could wake up one morning suddenly speaking Kapampangan. Try to see how much of this ASL vlog entry you understand! (Subtitles not included.)

But that just shows that signed languages are real languages that you really need to learn. (I’m looking at you, Mr. Jantjie.) What about the mutual intelligibility thing? Well, since we’ve already seen that signed languages are not iconic, this myth seems somewhat silly now. We might expect there to be some sort of pan-Deaf signing if there were constant and routine contact between Deaf communities in different countries. And, in fact, there is! Events such as meetings of the World Federation of the Deaf have fostered the creation of International Sign Language. It is, however, a constructed language that is rarely used outside of international gatherings. Instead, most signers tend to use the signed language that is most popular in their home country, and these are vastly different.

For an example of the differences between sign languages, let’s look at the alphabet in two signed languages: American Sign Language and British Sign Language. These are both relatively mature sign languages which exist as substrate languages in predominantly English-speaking communities. (Substrate just means that it’s not the most socially-valued and widely-used language in a given community. In America, any language that’s not English is pretty much a substrate language, despite the fact that we don’t have a government-mandated official language.) So, if sign languages really were universal, we’d expect that these two communities would use basically the same signs. Instead, the two languages are completely unintelligible, as you can see below. (I picked these videos because for the first ten or so signs their pacing is close enough that you can play them simultaneously. You’re welcome. :) )

The alphabet in British Sign Language:

The alphabet in American Sign Language:

As you can see, they’re completely different. Signed languages are a fascinating area of study, and a source of great pride and cultural richness in their respective Deaf communities. I highly recommend that you learn more about visual languages. Here are a couple resources to get you started.