Great ideas in linguistics: Language acquisition

Courtesy of your friendly neighbourhood rng, this week’s great idea in linguistics is… language acquisition! Or, in other words, the process of learning a language. (In this case, learning your first language when you’re a little baby, also known as L1 acquisition; second-language learning, or L2 acquisition, is a whole nother bag of rocks.) Which raises the question: why don’t we just call it language learning and call it a day? Well, unlike learning to play baseball, turn out a perfect soufflé or kick out killer DPS, learning a language seems to operate under a different set of rules. Babies don’t benefit from direct language instruction, and it may actually hurt them.

In other words:

Language acquisition is a process unique to humans that allows us to learn our first language without being directly taught it.

Which doesn’t sound so ground-breaking… until you realize it means that language use is utterly unique among human behaviours. Oh sure, we learn other things without being directly taught them, even relatively complex behaviours like swallowing and balancing. But unlike speaking, these aren’t usually under conscious control, and when they are it’s usually because something’s gone wrong. Plus, as I’ve discussed before, we have the ability to be infinitely creative with language. You can learn to make a soufflé without knowing what happens when you combine the ingredients in every possible combination, but knowing a language means that you know rules that allow you to produce all possible utterances in that language.

So how does it work? Obviously, we don’t have all the answers yet, and there’s a lot of research going on into how children actually learn language. But we do know what it generally tends to look like, barring things like language impairment or isolation.

  1. Vocal play. The kid’s figured out that they have a mouth capable of making noise (or hands capable of making shapes and movements) and are practising it. Back in the day, people used to say that infants would make all the sounds of all the world’s languages during this stage. Subsequent research, however, suggests that even this early children are beginning to reflect the speech patterns of people around them.
  2. Babbling. Kids will start out with very small segments of language, then repeat the same little chunk over and over again (canonical babbling), and then they’ll start to combine them in new ways (variegated babbling). In hearing babies, this tends to be syllables, hence the stereotypical “mamamama”. In Deaf babies it tends to be repeated hand motions.
  3. One word stage. By about 13 months, most children will have begun to produce isolated words. The intended content is often more than just the word itself, however. A child shouting “Dog!” at this point could mean “Give me my stuffed dog” or “I want to go see the neighbour’s terrier” or “I want a lion-shaped animal cracker” (since at this point kids are still figuring out just how many four-legged animals actually are dogs). These types of sentences-in-a-word are known as holophrases.
  4. Two word stage. By two years, most kids will have moved on to two-word phrases, combining words in a way that shows that they’re already starting to get the hang of their language’s syntax. Morphology is still pretty shaky, however: you’re not going to see a lot of tense markers or verbal agreement.
  5. Sentences. At this point, usually around age four, people outside the family can generally understand the child. They’re producing complex sentences and have gotten down most, if not all, of the sounds in their language.

These general stages of acquisition are very robust. Regardless of the language, modality or even age of acquisition, we still see these general stages. (Although older learners may never completely acquire a language due to, among other things, reduced neuroplasticity.) And the fact that they do seem to be universal is yet more evidence that language acquisition is a unique process that deserves its own field of study.

New series: 50 Great Ideas in Linguistics

As I’ve been teaching this summer (And failing to blog on a semi-regular basis like a loser. Mea culpa.) I’ll occasionally find that my students aren’t familiar with something I’d assumed they’d covered at some point already. I’ve also found that there are relatively few resources for looking up linguistic ideas that don’t require a good deal of specialized knowledge going in. SIL’s glossary of linguistic terms is good but pretty jargon-y, and the various handbooks tend not to have on-line versions. And even with a concerted effort by linguists to make Wikipedia a good resource, I’m still not 100% comfortable with recommending that my students use it.

Therefore! I’ve decided to make my own list of Things That Linguistic-Type People Should Know and then slowly work on expounding on them. I have something to point my students to and it’s a nice bite-sized way to talk about things; perfect for a blog.

Here, in no particular order, are 50ish Great Ideas of Linguistics sorted by sub-discipline. (You may notice a slightly sub-disciplinary bias.) I might change my mind on some of these–and feel free to jump in with suggestions–but it’s a start. Look out for more posts on them.

  • Sociolinguistics
    • Sociolinguistic variables
    • Social class and language
    • Social networks
    • Accommodation
    • Style
    • Language change
    • Linguistic security
    • Linguistic awareness
    • Covert and overt prestige
  • Phonetics
    • Places of articulation
    • Manners of articulation
    • Voicing
    • Vowels and consonants
    • Categorical perception
    • “Ease”
    • Modality
  • Phonology
    • Rules
    • Assimilation and dissimilation
    • Splits and mergers
    • Phonological change
  • Morphology
  • Syntax
  • Semantics
    • Pragmatics
    • Truth values
    • Scope
    • Lexical semantics
    • Compositional semantics
  • Computational linguistics
    • Classifiers
    • Natural Language Processing
    • Speech recognition
    • Speech synthesis
    • Automata
  • Documentation/Revitalization
    • Language death
    • Self-determination
  • Psycholinguistics

Feeling Sound

We’re all familiar with the sensation of sound so loud we can actually feel it: the roar of a jet engine, the palpable vibrations of a loud concert, a thunderclap so close it shakes the windows. It may surprise you to learn, however, that that’s not the only way in which we “feel” sounds. In fact, recent research suggests that tactile information might be just as important as sound in some cases!

[Image: Touch Gently (3022697095)]
What was that? I couldn’t hear you, you were touching too gently.
I’ve already talked about how we can see sounds, and the role that sound plays in speech perception, before. But just how much overlap is there between our sense of touch and hearing? There’s actually pretty strong evidence that what we feel can override what we’re hearing. Yau et al. (2009), for example, found that tactile expressions of frequency could override auditory cues. In other words, you might hear two identical tones as different if you’re holding something that is vibrating faster or slower. If our vision system had a similar interplay, we might think that a person was heavier if we looked at them while holding a bowling ball, and lighter if we looked at them while holding a volleyball.

And your sense of touch can override your ears (not that they were that reliable to begin with…) when it comes to speech as well. Gick and Derrick (2013) have found that tactile information can override auditory input for speech sounds. You can be tricked into thinking that you heard a “peach” rather than “beach”, for example, if you’re played the word “beach” and a puff of air is blown over your skin just as you hear the “b” sound. This is because when an English speaker says “peach”, they aspirate the “p”, or say it with a little puff of air. That isn’t there when they say the “b” in “beach”, so you hear the wrong word.

Which is all very cool, but why might this be useful to us as language-users? Well, it suggests that we use a variety of cues when we’re listening to speech. Cues act as little road-signs that point us towards the right interpretation. By having access to lots of different cues, we ensure that our perception is more robust. Even when we lose some cues–say, a bear is roaring in the distance and masking some of the auditory information–you can use the others to figure out that your friend is telling you that there’s a bear. In other words, even if some of the road-signs are removed, you can still get where you’re going. Language is about communication, after all, and it really shouldn’t be surprising that we use every means at our disposal to make sure that communication happens.

Why is it hard to model speech perception?

So this is a kick-off post for a series of posts about various speech perception models. Speech perception models, you ask? Like, attractive people who are good at listening?

[Image: Romantic fashion model]
Not only can she discriminate velar, uvular and pharyngeal fricatives with 100% accuracy, but she can also do it in heels.
No, not really. (I wish that was a job…) I’m talking about a scientific model of how humans perceive speech sounds. If you’ve ever taken an introductory science class, you already have some experience with scientific models. All of Newton’s equations are just ways of generalizing principles across many observed cases. A good model has both explanatory and predictive power. So if I say, for example, that force equals mass times acceleration, then that should fit with any data I’ve already observed as well as accurately describe new observations. Yeah, yeah, you’re saying to yourself, I learned all this in elementary school. Why are you still going on about it? Because I really want you to appreciate how complex this problem is.

Let’s take an example from an easier field, say, classical mechanics. (No offense physicists, but y’all know it’s true.) Imagine we want to model something relatively simple. Perhaps we want to know whether a squirrel who’s jumping from one tree to another is going to make it. What do we need to know? And none of that “assume the squirrel is a sphere and there’s no air resistance” stuff, let’s get down to the nitty-gritty. We need to know the force and direction of the jump, the locations of the trees, how close the squirrel needs to get to be able to hold on, what the wind’s doing, air resistance and how that will interplay with the shape of the squirrel, the effects of gravity… am I missing anything? I feel like I might be, but that’s most of it.

So, do you notice something that all of these things we need to know the values of have in common? Yeah, that’s right, they’re easy to measure directly. Need to know what the wind’s doing? Grab your anemometer. Gravity? To the accelerometer closet! How far apart the trees are? It’s yardstick time. We need a value, we measure a value, we develop a model with good predictive and explanatory power (You’ll need to wait for your simulations to run on your department’s cluster. But here’s one I made earlier so you can see what it looks like. Mmmm, delicious!) and you clean up playing the numbers on the professional squirrel-jumping circuit.
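For the curious, even the simplified, spherical-squirrel-with-no-air-resistance version that I waved away above takes a little bookkeeping. Here’s a minimal sketch (the function name, parameters and numbers are mine, purely for illustration, not from any real biomechanics model):

```python
import math

def squirrel_makes_it(speed, angle_deg, gap, drop_tolerance, g=9.81):
    """Does a jump launched at `speed` m/s and `angle_deg` degrees
    cross a horizontal `gap` (m) to the target tree while ending up
    no more than `drop_tolerance` (m) below the launch point?
    Spherical squirrel, no air resistance."""
    theta = math.radians(angle_deg)
    vx = speed * math.cos(theta)   # horizontal velocity component
    vy = speed * math.sin(theta)   # vertical velocity component
    t = gap / vx                   # time needed to cross the gap
    height = vy * t - 0.5 * g * t ** 2  # height relative to launch point
    return height >= -drop_tolerance

# A 5 m/s jump at 45 degrees across a 2 m gap clears it...
print(squirrel_makes_it(5, 45, 2, 0.5))  # True
# ...but a 2 m/s jump across 3 m falls way short.
print(squirrel_makes_it(2, 45, 3, 0.5))  # False
```

Every quantity in that toy model is directly measurable, which is exactly the point of the contrast with speech perception below.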

Let’s take a similarly simple problem from the field of linguistics. You take a person, sit them down in a nice anechoic chamber*, plop some high quality earphones on them and play a word that could be “bite” and could be “bike” and ask them to tell you what they heard. What do you need to know to decide which way they’ll go? Well, assuming that your stimulus is actually 100% ambiguous (which is a little unlikely) there are a ton of factors you’ll need to take into account. Like, how recently and often has the subject heard each of the words before? (Priming and frequency effects.) Are there any social factors which might affect their choice? (Maybe one of the participant’s friends has a severe overbite, so they just avoid the word “bite” altogether.) Are they hungry? (If so, they’ll probably go for “bite” over “bike”.) And all of that assumes that they’re a native English speaker with no hearing loss or speech pathologies and that the person’s voice is the same as theirs in terms of dialect, because all of that’ll bias the listener as well.

The best part? All of this is incredibly hard to measure. In a lot of ways, human language processing is a black box. We can’t mess with the system too much, and taking it apart to see how it works, in addition to being deeply unethical, breaks the system. The best we can do is tap a hammer lightly against the side and use the sounds of the echoes to guess what’s inside. And, no, brain imaging is not a magic bullet for this. It’s certainly a valuable tool that has led to a lot of insights, but in addition to being incredibly expensive (MRI is easily more than a grand per participant, and no one has ever accused linguistics of being a field that rolls around in money like a dog in fresh-cut grass) we really need to resist the urge to rely too heavily on brain imaging studies, as a certain dead salmon taught us.

But! Even though it is deeply difficult to model, there has been a lot of really good work done towards a theory of speech perception. I’m going to introduce you to some of the main players, including:

  • Motor theory
  • Acoustic/auditory theory
  • Double-weak theory
  • Episodic theories (including Exemplar theory!)

Don’t worry if those all look like menu options in an Ethiopian restaurant (and you with your Amharic phrasebook at home, drat it all); we’ll work through them together.  Get ready for some mind-bending, cutting-edge stuff in the coming weeks. It’s going to be [fʌn] and [fʌnetɪk]. 😀

*Anechoic chambers are the real chambers of secrets.

Why is studying linguistics useful? *Is* studying linguistics useful?

So I recently gave a talk at the University of Washington Scholar’s Studio. In it, I covered a couple things that I’ve already talked about here on my blog: the fact that, acoustically speaking, there’s no such thing as a “word” and that our ears can trick us. My general point was that our intuitions about speech, a lot of the things we think seem completely obvious, actually aren’t true at all from an acoustic perspective.

What really got to me, though, was that after I’d finished my talk (and it was super fast, too, only five minutes) someone asked why it mattered. Why should we care that our intuitions don’t match reality? We can still communicate perfectly well. How is linguistics useful, they asked. Why should they care?

I’m sorry, what was it you plan to spend your life studying again? I know you told me last week, but for some reason all I remember you saying is “Blah, blah, giant waste of time.”

It was a good question, and I’m really bummed I didn’t have time to answer it. I sometimes forget, as I’m wading through the hip-deep piles of readings that I need to get to, that it’s not immediately obvious to other people why what I do is important. And it is! If I didn’t believe that, I wouldn’t be in grad school. (It’s certainly not the glamorous easy living and fat salary that keep me here.) It’s important in two main ways. Way one is the way in which it enhances our knowledge and way two is the way that it helps people.

 Increasing our knowledge. Ok, so, a lot of our intuitions are wrong. So what? So a lot of things! If we’re perceiving things that aren’t really there, or not perceiving things that are really there, something weird and interesting is going on. We’re really used to thinking of ourselves as pretty unbiased in our observations. Sure, we can’t hear all the sounds that are made, but we’ve built sensors for that, right? But it’s even more pervasive than that. We only perceive the things that our bodies and sensory organs and brains can perceive, and we really don’t know how all these biological filters work. Well, okay, we do know some things (lots and lots of things about ears, in particular) but there’s a whole lot that we still have left to learn. The list of unanswered questions in linguistics is a little daunting, even just in the sub-sub-field of perceptual phonetics.

Every single one of us uses language every single day. And we know embarrassingly little about how it works. And what we do know is often hard to share with people who have little background in linguistics. Even here, in my blog, without time constraints and with an audience that’s already pretty interested (You guys are awesome!) I often have to gloss over interesting things. Not because I don’t think you’ll understand them, but because I’d metaphorically have to grow a tree, chop it down and spend hours carving it just to make a little step stool so you can get the high-level concept off the shelf and, seriously, who has time for that? Sometimes I really envy scientists in the major disciplines because everyone already knows the basics of what they study. Imagine that you’re a geneticist, but before you can tell people you look at DNA, you have to convince them that sexual reproduction exists. I dream of the day when every graduating high school senior will know IPA. (That’s the International Phonetic Alphabet, not the beer.)

Okay, off the soapbox.

Helping people. Linguistics has lots and lots and lots of applications. (I’m just going to talk about my little sub-field here, so know that there’s a lot of stuff being left unsaid.) The biggest problem is that so few people know that linguistics is a thing. We can and want to help!

  • Foreign language teaching. (AKA applied linguistics) This one is a particular pet peeve of mine. How many of you have taken a foreign language class and had the instructor tell you something about a sound in the language, like: “It’s between a “k” and a “g” but more like the “k” except different.” That crap is not helpful. Particularly if the instructor is a native speaker of the language, they’ll often just keep telling you that you’re doing it wrong without offering a concrete way to make it correctly. Fun fact: There is an entire field dedicated to accurately describing the sounds of the world’s languages. One good class on phonetics and suddenly you have a concrete description of what you’re supposed to be doing with your mouth and the tools to tell when you’re doing it wrong. On the plus side, a lot of language teachers are starting to incorporate linguistics into their curricula, with good results.
  • Speech recognition and speech synthesis. So this is an area that’s a little more difficult. Most people working on these sorts of projects right now are computational people and not linguists. There is a growing community of people who do both (UW offers a masters degree in computational linguistics that feeds lots of smart people into Seattle companies like Microsoft and Amazon, for example) but there’s definite room for improvement. The main tension is the fact that using linguistic models instead of statistical ones (though some linguistic models are statistical) hugely increases the need for processing power. The benefit is that accuracy  tends to increase. I hope that, as processing power continues to be easier and cheaper to access, more linguistics research will be incorporated into these applications. Fun fact: In computer speech recognition, an 80% comprehension accuracy rate in conversational speech is considered acceptable. In humans, that’s grounds to test for hearing or brain damage.
  • Speech pathology. This is a great field and has made and continues to make extensive use of linguistic research. Speech pathologists help people with speech disorders overcome them, and the majority of speech pathologists have an undergraduate degree in linguistics and a masters in speech pathology. Plus, it’s a fast-growing career field with a good outlook.  Seriously, speech pathology is awesome. Fun fact: Almost half of all speech pathologists work in school environments, helping kids with speech disorders. That’s like the antithesis of a mad scientist, right there.

And that’s why you should care. Linguistics helps us learn about ourselves and help people, and what else could you ask for in a scientific discipline? (Okay, maybe explosions and mutant sharks, but do those things really help humanity?)

Language Games*

A lot of the games that we play as kids help us learn important life skills. “I spy”? Color recognition. “Peekaboo”? Object permanence. But what about language games? In English, you’ve got games like pig Latin, which has several versions. Most involve moving syllables or consonants from the front of a word to the end, and then adding “-ay”. It’s such a prevalent phenomenon that there’s even a Google search in pig Latin.
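The rule itself is simple enough to sketch in a few lines of Python. This is a toy version of one common variant (the function name and the vowel-initial behaviour are my assumptions; real players vary, some adding “-way” or “-yay” instead):

```python
def pig_latin(word):
    """Move the word-initial consonant cluster to the end and add
    "-ay"; vowel-initial words just get "-ay" tacked on."""
    vowels = "aeiou"
    word = word.lower()
    for i, ch in enumerate(word):
        if ch in vowels:
            # everything before the first vowel moves to the end
            return word[i:] + word[:i] + "ay"
    return word + "ay"  # no vowels at all ("hmm", "pfft")

print(pig_latin("speak"))  # eakspay
print(pig_latin("eat"))    # eatay
```

Notice that the rule operates on the syllable onset as a unit (“sp” moves together), which is part of why linguists find these games so interesting, as discussed below.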

And English isn’t alone in having language games like this. In fact, every language I’ve studied, including Nepali and Esperanto, has had some form of similar language game.

[Image: Codex Manesse 262v Herr Goeli]
“Ekchay Atemay!”
“Roland, please stop being so infantile. This is backgammon and I know perfectly well you’re fluent in Liturgical Latin.”
The weird thing, though, is that it kinda looks like the only people language games are really useful for are linguists.

Let’s look at syllables. If you’re a normal person, you only think about them when you’re forced to write a haiku for some reason. (Pro tip: In Japanese, it’s not the syllables that you count but the moras.) If you’re a linguist, though, you think about them all the time, and spend time arguing about whether or not they actually exist. One of the best arguments for syllables existing is that, when language games require it, people can move them around relatively intuitively without even having a university degree in linguistics. (I know, shocking, isn’t it?)

And you can use the existence of language games to argue that there’s a viable speaker community of any given language, a sort of measure of language health, like mayflies in streams; that’s a valuable indicator, since language death is a serious problem. Or you can even use them to argue that a language is alive in the first place.

The main use of language games for language users, however, seems to be the creation of smaller speech communities within larger communities.  But then, as a linguist, you probably already knew that. Keep an ear out for them in everyday life, however, and you might be surprised how often they tend to crop up–like the use of -izz  in early hip hop parlance.

*If you thought I was going to bring up Wittgenstein in a blog post meant for people with little to no background in linguistics you are a very silly person. Oh, alright, here. I hope you’re proud of yourself.

Can you really learn a language in ten days?

I’m not the only linguist in my family. My father has worked as a professional linguist his whole life… but with a slightly different definition of “linguist”. His job is to use his specialist knowledge of a language (specifically Mandarin Chinese, Mongolian or one of the handful of other languages he speaks relatively well) to solve a problem. And one problem that he’s worked on a lot is language learning.

There’s no doubt that knowing more than one language is very, very useful. It opens up job opportunities, makes it easier to travel and can even improve brain function. But unless you were lucky enough to be raised bilingual you’re going to have to do it the hard way. And, if you live in America, like I do, you’re not very likely to do that: Only about 26% of the American population speaks another language well enough to hold a basic conversation in it, and only 9% are fluent in another language. Compare that to Europe, where around 50% of the population is bilingual.

[Image: Japanese language class in Zhenjiang02]
“Now that you’ve learned these characters, you only need to learn and retain one a day for the next five years to be considered literate.”
Which makes the lure of easily learning a language on your own all the more compelling. I recently saw an ad that I found particularly enticing: learn a language in just ten days. Why, that’s less time than it takes to hand knit a pair of socks. The product in this case was the oh-so-famous (at least in linguistic circles) Pimsleur Method (or approach, or any of a number of other flavors of delivery). I’ve heard some very good things about the program, and thought I’d dig a little deeper into the method itself and evaluate its claims from a scientific linguistics perspective.

I should mention that Dr. Pimsleur was an academic working in second language acquisition from an applied linguistics standpoint. That is, his work (published mainly in the 1960s) tended to look at how older people learn a second language in an educational setting. I’m not saying this makes him unimpeachable–if a scientific argument can’t stand up to scrutiny it shouldn’t stand at all–but it does tend to lend a certain patina of credibility to his work. Is it justified? Let’s find out.

First things first: it is not possible to become fluent in a language in just ten days. There are lots of reasons why this is true. The most obvious is that being a fluent speaker is more than just knowing the grammar and vocabulary; you have to understand the cultural background of the language you’re studying. Even if your accent is flawless (unlikely, but I’ll deal with that later), if you unwittingly talk to your mother-in-law in a culture with an avoidance taboo and become a social pariah, that’s just not going to do you much good. Then there are lots of little linguistic things that are so very easy to get wrong. Idioms, for example, particularly choosing which preposition to use. Do you get “in the bus” or “on the bus”? And then there are even more subtle things, like producing a list of adjectives in the right order. “Big red apple” sounds fine, but “red big apple”? Not so much. A fluent speaker knows all this, and it’s just too much information to acquire in ten days.

That said, if you were plopped down in a new country without any prior knowledge of the language, I’d bet within ten days you’d be carrying on at least basic conversations. And that’s pretty much what the Pimsleur method is promising. I’m not really concerned with whether it works or not… I’m more concerned with how it works (or doesn’t). There are four basic principles that the Pimsleur technique is based on.

  1. Anticipation. Basically, this boils down to posing questions that the learner is expected to answer. These can be recall tasks, asking you to remember something you heard before, or tasks where the learner needs to extrapolate based on the knowledge they currently have of the language.
  2. Graduated-interval recall. Instead of repeating a word or word list three or four times in a row, they’re repeated at specific intervals. This is based on the phonological loop part of a model of working memory that was really popular when Pimsleur was doing his academic work.
  3. Core Vocabulary. The learner is only exposed to basic vocabulary, so the total number of words to learn is smaller. They’re chosen (as far as I can tell, it seems to vary based on method) based on frequency.
  4. “Organic learning”. Basically, you learn by listening and there’s a paucity of reading and writing. (Sorry about that; paucity was my word of the day today 😛 ).
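To make the graduated-interval idea in point 2 concrete: Pimsleur’s published memory schedule had each review coming roughly five times later than the last, starting a few seconds after first exposure. Here’s a toy sketch of that schedule (the function name and default parameters are mine; treat the exact numbers as illustrative):

```python
def pimsleur_intervals(n, first=5.0, factor=5.0):
    """Graduated-interval recall: review a new item `first` seconds
    after exposure, then at intervals each `factor` times longer
    than the last (5 s, 25 s, ~2 min, ~10 min, ...). Returns the
    first n intervals, in seconds."""
    intervals, t = [], first
    for _ in range(n):
        intervals.append(t)
        t *= factor
    return intervals

print(pimsleur_intervals(4))  # [5.0, 25.0, 125.0, 625.0]
```

The point of the exponential spacing is that each successful recall is assumed to push the item a little further toward long-term memory, so reviews can safely get further and further apart.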

So let’s evaluate these claims.

  1. Anticipation. So the main benefit of knowing that you’ll be tested on something is that you actually pay attention. In fact, if you ask someone to listen to pure tones, their brain consumes more oxygen (which you can tell because circulation to that area increases) if you tell them they’ll be tested.  Does this help with language learning? Well. Maybe. I don’t really have as much of a background in psycholinguistics, but I do know that language learning tends to entail the creation of new neural networks and connections, which requires oxygen. On the other hand, a classroom experience uses the same technique. Assessment: Reasonable, but occurs in pretty much every language-learning method. 
  2. Graduated-interval recall: So this is based on the model I mentioned above. You’ve got short term and long term memory, and the Pimsleur technique is designed to pretty much seed your short term memory, then wait for a bit, then grab at the thing you heard and pull it to the forefront again, ideally transferring it to long-term memory. Which is peachy-keen… if the model’s right. And there’s been quite a bit of change and development in our understanding of how memory works since the 1970’s. Within linguistics, there’s been the rise of Exemplar Theory, which posits that it’s the number of times you hear things, and the similarity of the sound tokens, that make them easier to remember. (Kinda. It’s complicated.) So… it could be helpful, assuming the theory’s right. Assessment: Theoretical underpinnings outdated, but still potentially helpful. 
  3. Core Vocabulary. So this one is pretty much just cheating. Yes, it’s true, you only need about 2000 words to get around most days, and, yes, those are probably the words you should be learning first in a language course. But at some point, to achieve full fluency, you’ll have to learn more words, and that just takes time. Nothing you can do about it. Assessment: Legitimate, but cheating. 
  4. “Organic learning”: So this is in quotation marks mainly because it sounds like it’s opposed to “inorganic learning”, and no one learns language from rocks. Basically, there are two claims here. One is that auditory learning is preferable, and the other is that it’s preferable because it’s how children learn. I have fundamental problems with claims that adults and children can learn using the same processes. That said, if your main goal is to learn how to speak and hear a given language, learning writing will absolutely slow you down. I can tell you from experience: once you learn the tones, speaking Mandarin is pretty straightforward. Writing Mandarin remains one of the most frustrating things I’ve ever attempted to do. Assessment: Reasonable, but claims that you can learn “like a baby” should be examined closely. 
  5. Bonus: I do agree that using native speakers of the target language as models is preferable. They can make all the sounds correctly, something that even trained linguists can sometimes have problems with–and if you never hear the sounds produced correctly, you’ll never be able to produce them correctly.
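A quick illustration of the core-vocabulary idea from point 3: you can approximate such a list by counting word frequencies in a corpus and keeping the top couple thousand. This is a deliberately naive sketch (real course designers also weigh usefulness and coverage; the function name is my own):

```python
import re
from collections import Counter

def core_vocabulary(corpus, n=2000):
    """Return the n most frequent word forms in a corpus -- a crude
    stand-in for a curated core vocabulary list."""
    words = re.findall(r"[a-z']+", corpus.lower())
    return [word for count_word in [Counter(words).most_common(n)]
            for word, count in count_word]

text = "the cat sat on the mat and the cat napped"
print(core_vocabulary(text, 2))  # ['the', 'cat']
```

On a real corpus, the head of this list is dominated by function words (“the”, “of”, “and”), which is exactly why frequency-based vocabulary selection feels like cheating: the common words do a lot of work, but they won’t get you to fluency on their own.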

So, it does look pretty legitimate. My biggest concern is actually not with the technique itself, but with the delivery method. Language is inherently about communicating, and speaking to yourself in isolation is a great way to get stuck with some very bad habits. Being able to interact with a native speaker, getting guidance and correction, is something that I’d feel very uncomfortable recommending you do without.

Can Animals Talk?

There’s a long-running (and I really mean long-running: Plato chimed in on this one) debate about what language is. Now, as a linguist, you’d think I’d have the inside scoop on the subject. I mean, I pretty much sit around and think about language all day, so I should have this one down cold, right?

Well, as much as I hate to admit it, not really. And it’s not just me. Ask three different linguists what language is and you’ll probably get six to twelve answers. There is one point that I’m firm on, though: language is a human phenomenon. Outside of the internet and fantasy worlds (which tend to overlap a lot, now that I think about it), animals don’t talk.

[Image: You talking to me]
“…and then Mildred told Randolf that she thought his new haircut made him look like a basilisk. Well, you can imagine how he took *that*.”
I’m not claiming that animals don’t make communicative noises. Far from it! As someone who has bottle-fed more than one lamb, I can tell you that there’s a definite difference between cries that indicate genuine hunger and cries that are transparent ploys to get cookies.  But there are a lot of differences between communicative behavior and language.

  1. Lying is part of language. By lying here, I mean a wide variety of linguistic expressions that express information that is counter to the truth, including joking. Language is separate from the things it describes (there’s nothing inherently tree-ish about the word tree, for example, ditto arbor, boom and träd, though there are inherent respect points in correctly identifying all three languages) and because of this can communicate abstract thought. Abstract thought, as evinced through lying, is an inherent part of language. There’s some evidence that Koko the gorilla is capable of lying, but one isolated incident really isn’t a sound basis for scientific argument.
  2. Language is generative. I’ve written an entire post about generativity, but it’s worth repeating. Language has to have underlying structures that can be used to produce new and novel utterances. Otherwise, you’re just saying random words.
  3. Language is communicative. This is part of the reason why music isn’t language, though it’s completely abstract and (at least in the Western tradition) generative. Abstraction is required, but so is a connection to thoughts and ideas. Tied to this is the fact that you have to have a community to speak in, even if it’s a community of two.
  4. Language can communicate events at a temporal distance. This is a biggie, and one of the main reasons I don’t think that Koko and other “talking” animals are really using language. (Quick aside: Did you know that the Nazis attempted to train talking dogs as part of the war effort? True story.) It’s pretty easy to teach a dog to bark for a treat, but try teaching it to bark because you gave it a treat two days ago. You may think that a specific bark means “treat”, but without temporal distance and repeatability, it’s pretty much just pigeons in boxes.

Now, other linguists will take other positions (or, you know, the same position ;), but this is how I see it. So what do you think: can animals talk?

Why are tongue twisters tricky? (Part 2)

Oh, so in my last post I talked about the tongue part of tongue twisters. But, as I mentioned, the really interesting part of tongue twisters comes from the brain, not the tongue. It all has to do with lexical access.

Lexical access: The process through which a speaker or listener accesses their mental lexicon (i.e. the not-so-tiny brain dictionaries we all have and are all constantly changing).

If you were a computer, your mental entries on various words would be like files you needed to access. Like, if I write “kumquat”, you probably have some sort of mental entry for it. Even if you’ve never eaten a kumquat, you’ve probably seen one, so you have that mental image associated with the word–like a .txt file with a picture in it. So, once you hear or read “kumquat”, you need to rummage around until you find that file, then open it and access the information inside. And you do! In fact, you do it very, very quickly. You do it for every single word you ever read or hear, and you do it in reverse for every word you ever write or say.
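Since we’re already pretending you’re a computer, the file-lookup analogy can be sketched as a toy dictionary lookup. To be clear, the entries and the `lexical_access` function here are invented purely to illustrate the metaphor, not a model anyone actually uses:

```python
# A toy "mental lexicon": each word maps to a little bundle of stored
# information, like the .txt-file-with-a-picture analogy above.
mental_lexicon = {
    "kumquat": {
        "category": "fruit",
        "image": "small orange citrus fruit",
        "tasted_before": False,
    },
    "tree": {
        "category": "plant",
        "image": "trunk with branches and leaves",
        "tasted_before": True,
    },
}

def lexical_access(word):
    """Comprehension direction: go from a word form to its stored entry."""
    # .get() returns None for a word you've never learned
    return mental_lexicon.get(word)

entry = lexical_access("kumquat")
print(entry["category"])  # fruit
print(lexical_access("xyzzy"))  # None: no entry, nothing to open
```

The real process is vastly messier than a single dictionary lookup, of course, but the “find the file, open it, read it” shape of the analogy comes through.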


This is your brain on language. Well, some of the bits that deal with language, at any rate.

So lexical access is a very important process. You need it for every single aspect of language use. The good news is, there’s a lot we know about lexical access! Remember when I talked about priming? That’s an aspect of lexical access. The bad news is, there’s a whole bunch we don’t know about lexical access.

(Is lexical access starting to sound like a fake word yet? That process, by the way, is known as semantic saturation.)

This is where tongue twisters come into play. (No, I hadn’t forgotten them.) Why? Well, it turns out that tongue twisters are a really good way to get at what happens during the lexical access process. Like many things that happen in the human brain, lexical access can be difficult to study. Unlike physicists, linguists can’t break their object of study apart to see what’s going on inside. First, it would be deeply unethical. Second, when you break a brain open, it stops working. Sometimes, however, the brain does something weird. Like with tongue twisters. If you read my previous post, you know that the tongue itself can cause problems… but not enough to explain the most common errors, like saying “How can a clam cram in a clean cream can?” as “How clan a cam cram in a clean cleam can?”

In a 1999 study, Carolyn E. Wilshire found that there were two main contributors to making tongue twisters tricky. The first was that similar sounds are easier to confuse. There are lots of technical reasons for this, but basically sounds can be grouped together, and some sounds are more like each other than others. “t” and “d” are really similar, for example, whereas “k” and “m” are not really that alike. Unsurprisingly, sounds that are alike are easier to confuse. Basically, you reach for something that sounds similar, then realize that you’ve made a mistake and try to correct your error.

The second factor was that it was easier to confuse sounds that were repeated. This is because you’re more likely to “reach for” something you’ve already gotten out once, even if it’s the wrong thing. Together, these factors make for some really awesome tongue twisters. Awesome for two reasons: the first is that they’re really, really hard to say. (Try moss knife noose muff!). The second is that we can use tongue twisters like these to help increase our understanding of the human mind. And that’s what it’s really all about.
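Just to make the “similar sounds” factor concrete, here’s a toy sketch that scores how confusable two sounds are by counting shared phonetic features. The feature sets and the scoring formula are my own invented illustration, not the measures Wilshire actually used:

```python
# Toy phonetic feature sets for a few consonants. Real feature systems
# are richer; these are simplified for illustration.
FEATURES = {
    "t": {"stop", "alveolar", "voiceless"},
    "d": {"stop", "alveolar", "voiced"},
    "k": {"stop", "velar", "voiceless"},
    "m": {"nasal", "bilabial", "voiced"},
}

def similarity(a, b):
    """Fraction of features two sounds share: higher = easier to confuse."""
    shared = FEATURES[a] & FEATURES[b]
    total = FEATURES[a] | FEATURES[b]
    return len(shared) / len(total)

print(similarity("t", "d"))  # 0.5: differ only in voicing
print(similarity("k", "m"))  # 0.0: no features in common here
```

By this (very rough) measure, “t”/“d” come out much more confusable than “k”/“m”, which matches the intuition above: swap-prone tongue twisters stack up pairs like the former, not the latter.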

Do subliminal messages (join the Illuminati) work?

Since I recently joined the Illuminati, I’ve been thinking a lot about subliminal messages. A “subliminal stimulus” is literally something that’s below (“sub”) your perceptual limit (“liminal”). So, linguistically speaking, it would be a word that was presented to you to be read but removed before you could actually read it. Or a sound that’s too soft for you to hear.

Holy crap, look at that subliminal owl! Oh wait, you see it? Well shoot. Guess it’s just a regular old liminal Illuminati owl.
So if you can’t really perceive subliminal messages, why are they even a thing?

Well, if there’s one thing I’ve learned by studying linguistics, it’s that language is complex and that there are huge gaps between what we know and what we think we know about language–at least on an individual level. (I’ve also learned that I can’t count, because that’s definitely two things.) We all do things that we have no idea we’re doing, and so quickly and easily that they just slip below our notice. Linguistics is all about figuring out what those things are.

One of those things is priming. Basically, when you hear or read a word it gets “warm”, like how you leave a heat signature on your sheets after you get out of bed. And if, later, you’re looking for a similar word, you’re more likely to go back to your warm bed than to another word you haven’t used lately. Of course, the effect fades over time, though it fades very slowly. And priming effects are where you really see an effect of subliminal messages. (And, just to be clear, not really anywhere else… at least linguistically speaking. 🙂
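The “warm word” metaphor can be sketched as a toy activation score that decays over time. The half-life and all the numbers here are invented for illustration; real priming decay is messier than any formula this tidy:

```python
# Toy priming model: hearing a word sets its "warmth" to 1.0, and that
# warmth then decays exponentially as time passes.
activation = {}

def hear(word, now):
    """Record an exposure to a word at time `now` (in seconds)."""
    activation[word] = (1.0, now)

def warmth(word, now, half_life=60.0):
    """How warm a word still is at time `now`; 0.0 if never heard."""
    if word not in activation:
        return 0.0
    level, then = activation[word]
    return level * 0.5 ** ((now - then) / half_life)

hear("kumquat", now=0.0)
print(warmth("kumquat", now=0.0))   # 1.0: just primed
print(warmth("kumquat", now=60.0))  # 0.5: faded after one half-life
print(warmth("soufflé", now=60.0))  # 0.0: never primed, stone cold
```

A warmer word wins the race when you’re rummaging for something similar–which is the whole trick subliminal priming exploits.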

For example, this study by Abrams, Klinger and Greenwald found that, if participants had been exposed to a word earlier in the study, they were able to recognize it later when it was presented subliminally–but it only worked really well when participants had not only read the word before, but had had to think about it some by assigning it to a category. So the effect isn’t really strong enough to, say, help you lose weight or stop smoking.

The fact that it exists at all, though, does tell us something interesting about the human brain and how it uses language. For example, is our ability to interpret stimuli that are degraded tied to the pressure to understand conversational speech, even in noisy environments? What does it mean that the effect is also present with visual stimuli? Like all sciences, linguistics is all about asking the right questions, and research on subliminal stimuli opens up a whole barrel-full of questions.