Why do people talk in their sleep?

Sleep talking (or “somniloquy”, as we fancy-pants scientist people call it) is a phenomenon where a sleeping person starts talking. Think of the internet sensation The Sleep Talkin Man. Sleep talking can range from grunts or moans to relatively clear speech. While most people know what sleep talking is (there was even a hit song about it that’s older than I am), fewer people know what causes it.

A. Cortina, El sueño
Sure, she looks all peaceful, but you should hear her go on.
To explain what happens when someone’s talking in their sleep, we first need to talk about 1) what happens during sleep and 2) what happens when we talk normally.

  • Sleeping normally: One of the weirder things about sleep talking is that it happens at all. During normal sleep, your muscles undergo atonia during the stage of sleep called rapid eye movement, or REM, sleep. Basically, your muscles release and go into a state of relaxation or paralysis. If you’ve ever woken suddenly and been unable to move, it’s because your body is still in that state. This serves an important purpose: when we dream, we can rehearse movements without actually moving around and hurting ourselves. Of course, the system isn’t perfect. When your muscles fail to “turn off” while you dream, you end up acting out your dream and sleepwalking. This is particularly problematic for people with narcolepsy.
  • Speaking while awake: Speech is an incredibly complex process. Between a tenth and a third of a second before you begin to speak, activation starts in the insula, where you plan the movements you’ll need to successfully speak. Those movements come in three main stages, which I like to call breathing, vibrating and tonguing. All speech comes from breath, so you need to inhale in preparation for speaking. Normal exhalation won’t work for speaking, though–it’s too fast–so you switch on your intercostal muscles, in the walls of your ribcage, to help your lungs empty more slowly. Next, you need to tighten your vocal folds as you force air through them. This makes them vibrate and gives you the actual sound of your voice. By putting different amounts of pressure on your vocal folds you can change your pitch or the quality of your voice. Finally, your mouth needs to shape the buzzing sound your vocal folds make into the specific speech sounds you need. You might flick your tongue, bring your teeth to your lips, or lower your soft palate so that air goes through your nose instead of your mouth. And voilà! You’re speaking.

Ok, so, it seems like sleep talking shouldn’t really happen, then. When you’re asleep, your muscles are all turned off, and they certainly don’t seem up to the multi-stage process that is speech production. Besides, there’s no need for us to be making speech movements anyway, right? Wrong. You actually use your speech planning processes even when you’re not planning to speak aloud. I’ve already talked about the motor theory of speech perception, which suggests that we use our speech planning mechanisms to understand speech. And it’s not just speech perception. When reading silently, we still plan out the speech movements we’d make if we were to read out loud (though the effect is smaller in more fluent readers). So you sometimes do all the planning work even if you’re not going to say anything… and one of the times you do that is when you’re asleep. Usually, your muscles are all turned off while you sleep. But sometimes, especially in young children or people with PTSD, the system doesn’t work as well. And if it happens to fail while you’re dreaming that you’re talking, and therefore planning out your speech movements? You start sleep talking.

Of course, all of this means that some of the things we’ve all heard about sleep talking are actually myths. Admissions of guilt while asleep, for example, aren’t reliable and aren’t admissible in court. (Unless, of course, you really did put that purple beaver in the banana pudding.) Sleep talking is also very common; about 50% of children talk in their sleep. Unless it’s causing problems–like waking the people you’re sleeping with–sleep talking isn’t generally anything to worry about. But you can help reduce its severity by getting enough sleep (which is probably a good goal anyway) and avoiding alcohol and drugs.

Feeling Sound

We’re all familiar with the sensation of sound so loud we can actually feel it: the roar of a jet engine, the palpable vibrations of a loud concert, a thunderclap so close it shakes the windows. It may surprise you to learn, however, that that’s not the only way in which we “feel” sounds. In fact, recent research suggests that tactile information might be just as important as sound in some cases!

Touch Gently
What was that? I couldn’t hear you, you were touching too gently.
I’ve already talked about how we can see sounds and about the role sound plays in speech perception. But just how much overlap is there between our senses of touch and hearing? There is actually pretty strong evidence that what we feel can override what we’re hearing. Yau et al. (2009), for example, found that tactile cues to frequency could override auditory ones. In other words, you might hear two identical tones as different if you’re holding something that is vibrating faster or slower. If our visual system had a similar interplay, we might think that a person was heavier if we looked at them while holding a bowling ball, and lighter if we looked at them while holding a volleyball.

And your sense of touch can override your ears (not that they were that reliable to begin with…) when it comes to speech as well. Gick and Derrick (2013) found that tactile information can override auditory input for speech sounds. You can be tricked into thinking you heard “peach” rather than “beach”, for example, if you’re played the word “beach” and a puff of air is blown over your skin just as you hear the “b” sound. That’s because when an English speaker says “peach”, they aspirate the “p”, or say it with a little puff of air. There’s no puff for the “b” in “beach”, so feeling one makes you hear the wrong word.

Which is all very cool, but why might this be useful to us as language-users? Well, it suggests that we use a variety of cues when we’re listening to speech. Cues act as little road-signs that point us towards the right interpretation. By having access to lots of different cues, we ensure that our perception is more robust. Even when you lose some cues–say, a bear is roaring in the distance and masking some of the auditory information–you can use the others to figure out that your friend is telling you that there’s a bear. In other words, even if some of the road-signs are removed, you can still get where you’re going. Language is about communication, after all, and it really shouldn’t be surprising that we use every means at our disposal to make sure that communication happens.
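To make that road-sign picture a bit more concrete, here’s a toy sketch (my own invention for illustration, not a model from the perception literature): each available cue casts a weighted vote for an interpretation, and as long as some cues survive, the combined evidence still points the right way. The cue names and numbers here are all made up.

```python
# A toy illustration of redundant cues in speech perception.
# The cue names, weights, and scores below are invented for this example;
# this is not a model from the perception literature.

def interpret(cues):
    """Combine whatever cues are available into a single best guess."""
    votes = {}
    for guesses in cues.values():
        for word, strength in guesses.items():
            votes[word] = votes.get(word, 0.0) + strength
    return max(votes, key=votes.get)

# Each available cue scores the two candidate words ("bear" vs. "pear").
all_cues = {
    "auditory":     {"bear": 0.9, "pear": 0.1},  # what the ears pick up
    "visual":       {"bear": 0.7, "pear": 0.3},  # lip-reading the speaker
    "aero-tactile": {"bear": 0.6, "pear": 0.4},  # no puff of air felt
}

print(interpret(all_cues))   # -> bear

# A roaring bear masks the auditory cue entirely...
masked = {name: cue for name, cue in all_cues.items() if name != "auditory"}
print(interpret(masked))     # -> still bear
```

Knock out the auditory cue entirely and the remaining cues still carry you to the right word, which is exactly the kind of redundancy the road-sign metaphor is getting at.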

Why do people have accents?

Since I’m teaching Language and Society this quarter, this is a question that I anticipate coming up early and often. Accents–or dialects, though the terms do differ slightly–are one of those things in linguistics that are effortlessly fascinating. We all have experience with people who speak our language differently than we do. You can probably even come up with descriptors for some of these differences. Maybe you feel that New Yorkers speak nasally, or that Southerners have a drawl, or that there’s a certain Western twang. But how did these differences come about, and how are they perpetuated?

Hyundai Accents
Clearly people have Accents because they’re looking for a nice little sub-compact commuter car.

First, two myths I’d like to dispel.

  1. Only some people have an accent or speak a dialect. This is completely false with a side of flat-out wrong. Every single person who speaks or signs a language does so with an accent. We sometimes think of newscasters, for example, as “accent-less”. They do, however, have systematic variation in their speech that they share with other speakers in their social grouping… and that’s an accent. The difference is that theirs tends to be seen as “proper” or “correct”, which leads nicely into myth number two:
  2. Some accents are better than others. This one is a little trickier. As someone who has a Southern-influenced accent, I’m well aware that linguistic prejudice exists. Some accents (such as the British “received pronunciation”) are certainly more prestigious than others (oh, say, the American South). However, this has absolutely no basis in the language variation itself. No dialect is more or less “logical” than any other, and geographical variation in factors such as speech rate has no correlation with intelligence. Bottom line: the differing perception of various accents is due to social, not linguistic, factors.

Now that that’s done with, let’s turn to how we get accents in the first place. To begin with, we can think of an accent as a collection of linguistic features that a group of people share. By themselves, these features aren’t necessarily immediately noticeable, but when you treat them as a group of factors that co-vary, it suddenly becomes clearer that you’re dealing with separate varieties. Which is great and all, but let’s pull out an example to make it a little clearer what I mean.

Imagine that you have two villages. They’re relatively close, share a lot of commerce, and have a high degree of intermarriage. This means that their residents talk to each other a lot. As a new linguistic change begins to surface (which, as languages are constantly in flux, is inevitable), it spreads through both villages. Let’s say that they slowly lose the ‘r’ sound. If you asked a person from the first village whether a person from the second village had an accent, they’d probably say no at that point, since the two villages share all of the same linguistic features.

But what if, just before they lost the ‘r’ sound, an impassable chasm split the two villages? Now the change that starts in the first village has no way to spread to the second village, since they no longer speak to each other. And, since new linguistic forms come into being pretty much at random (which is why it’s really hard to predict what a language will sound like in three hundred years), it’s very unlikely that the same variant will arise in the second village. Repeat that with a whole bunch of new linguistic forms and, if you ask a person from the first village after a bridge is finally built across the chasm whether a person from the second village has an accent, they’ll probably say yes. They might even come up with a list of things they say differently: we say this and they say that. If they were very perceptive, they might even give you a list with two columns: one for the way something’s said in their village and the other for the way it’s said in the second village.
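If you like to think in simulations, here’s a rough sketch of that story (a toy I’ve made up, not a real model of sound change): innovations pop up at random, spread to both villages while they’re in contact, and stay where they started once the chasm opens, so the two feature sets drift apart.

```python
import random

# Toy simulation of the two-village story. The notion of a "feature set"
# and all the numbers are invented for illustration, not a real model
# of how sound change works.

def simulate(generations, chasm_opens_at, seed=0):
    rng = random.Random(seed)
    village_a, village_b = set(), set()
    for generation in range(generations):
        innovation = f"feature_{generation}"
        # Each new form starts in one village at random...
        start = rng.choice([village_a, village_b])
        start.add(innovation)
        # ...and spreads to the other only while the villages are in contact.
        if generation < chasm_opens_at:
            village_a.add(innovation)
            village_b.add(innovation)
    return village_a, village_b

a, b = simulate(generations=20, chasm_opens_at=10)
print("Shared features:", len(a & b))   # everything from before the chasm
print("Only in village A:", len(a - b)) # innovations after the split
print("Only in village B:", len(b - a))
```

Run it and the shared features are exactly the ones from before the split, while everything after the split piles up on one side or the other: the two-column list our perceptive villager would draw up.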

But now that they’ve been reunited, won’t the accents just disappear as they talk to each other again? Well, it depends, but probably not. While they were separated, the villages would have started to develop their own independent identities. Maybe the first village begins to breed exceptionally good pigs while squash farming is all the rage in the second village. And language becomes tied to that identity. “Oh, I wouldn’t say it that way,” people from the first village might say, “people will think I raise squash.” And since the differences in language are tied to social identity, they’ll probably persist.

Obviously this is a pretty simplified example, but the same processes are constantly at work around us, on both large and small scales. If you keep an eye out for them, you might even notice them in action.