Five tips for your first linguistics class*

So it’s getting to be the time of year when colleges start back up again (at least here in the US). And with that comes a whole batch of eager students taking their first linguistics class. If that’s you: congratulations! You’re going to embark on an academic journey that will forever change the way you look at language. But I’m sure you also have a lot of questions about what to expect and how to do well in a linguistics class. Well, you’re in luck, because today I’m going to share my top five tips for doing well in an introductory linguistics class. These are drawn from my experiences both as a teacher and a student, and I hope they help you as you begin to become a linguist.

  1. Expect rigour! Just to clarify here, by “rigour” I don’t mean difficulty. Rather, I mean rigour in the mathematical sense. Linguistics is a very exact discipline and part of learning how to be a linguist is learning how to carefully, precisely solve problems. There will be right and wrong answers. You may be expected to explain how you solved a problem. If you come from a background with a lot of mathematics or formal logic, linguistics problems will probably feel very familiar to you. (I have a friend, now a math PhD candidate, who really enjoyed phonology because, in his words, “It’s applied set theory!” There’s a toy sketch of what he meant just after this list.) Students who come to linguistics from literary or foreign-language studies, however, are often surprised by this aspect of linguistics courses.
  2. Be prepared for a little bit of memorization. Every introductory linguistics course I’m familiar with covers the International Phonetic Alphabet pretty early on in the class and students are expected to memorize at least part of it. I’m a fan of this, since knowing IPA is a pretty handy life skill and it allows you to solve phonology problems much more quickly. But it can be a nasty surprise if you’re not ready for it and don’t set aside enough time for studying.
  3. Get ready to unlearn. You speak at least one language. You’re in college. You know a fair amount about how language works… right? Well, yes, but not in the way you think. You’re going to have to unlearn a lot of things you’ve been taught about language, especially about what you should do/write/say and a lot of the “grammar rules” you’ve been taught. Again, this can be frustrating for a lot of students. You’ve spent a long time laboriously learning about language, you’ve obviously developed enough of an interest in language to take a linguistics course, and in the first week of class we basically tell you you’ve been lied to! This can actually be a blessing in disguise, though. It lets the whole class start out at a similar place and you’ll be learning the basics of morphology and syntax right along with your classmates. Study group, anyone?
  4. Be patient with yourself. Introductory linguistics classes are always a bit of a whirlwind. You’re swept from subdiscipline to subdiscipline and just as soon as you’re feeling comfortable with morphology suddenly it’s on to syntax with no chance to catch your breath. It’s just the nature of an introductory survey course, though; it’s a tasting menu, not à la carte. Remember what catches your interest and pursue it in more coursework or readings later; don’t try to do it all just as you’re encountering ideas and methods for the first time.
  5. Ask for help. Don’t be afraid of asking for extra help! Go to office hours if you don’t understand something. Form a study group. (It’s even better if you can get people from different academic backgrounds.) There are also lots of great resources online. This blog post has a lot of great resources and this post gives a lot of great, really concrete advice about doing assignments in intro linguistics courses.
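
As promised in tip 1, here’s a toy sketch in Python (my own illustration, not from any actual course) of what “applied set theory” can look like in phonology: a rule like German final devoicing written as a mapping over a set of sounds.

```python
# Toy illustration: a phonological rule as a set-theoretic mapping.
# German-style final devoicing: a voiced obstruent at the end of a
# word becomes its voiceless counterpart (e.g. "hund" -> "hunt").

VOICED_OBSTRUENTS = {"b", "d", "g", "v", "z"}
DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def final_devoicing(word: str) -> str:
    """Devoice a word-final voiced obstruent, if there is one."""
    if word and word[-1] in VOICED_OBSTRUENTS:
        return word[:-1] + DEVOICE[word[-1]]
    return word

print(final_devoicing("hund"))  # hunt
print(final_devoicing("rat"))   # rat (no change)
```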

But it’s also really important just to relax and have fun. You’ll cover a lot of material, granted, but that also means you’ll learn a lot! And introductory courses tend to be a great place to learn lots of fun facts and find the answers to language mysteries that have been niggling at you. Welcome to linguistics; I think you’re going to like it.

*Don’t worry, we’ll be getting back to the Great Ideas in Linguistics series after these short messages.


Great ideas in linguistics: Language acquisition

Courtesy of your friendly neighbourhood rng, this week’s great idea in linguistics is… language acquisition! Or, in other words, the process of learning a language. (In this case, learning your first language when you’re a little baby, also known as L1 acquisition; second-language learning, or L2 acquisition, is a whole ’nother bag of rocks.) Which raises the question: why don’t we just call it language learning and call it a day? Well, unlike learning to play baseball, turn out a perfect soufflé or kick out killer DPS, learning a language seems to operate under a different set of rules. Babies don’t benefit from direct language instruction, and it may actually hurt them.

In other words:

Language acquisition is a process unique to humans that allows us to learn our first language without being directly taught it.

Which doesn’t sound so ground-breaking… until you realize that that means that language use is utterly unique among human behaviours. Oh sure, we learn other things without being directly taught them, even relatively complex behaviours like swallowing and balancing. But unlike speaking, these aren’t usually under conscious control and when they are it’s usually because something’s gone wrong. Plus, as I’ve discussed before, we have the ability to be infinitely creative with language. You can learn to make a soufflé without knowing what happens when you combine the ingredients in every possible combination, but knowing a language means that you know rules that allow you to produce all possible utterances in that language.

So how does it work? Obviously, we don’t have all the answers yet, and there’s a lot of ongoing research into how children actually learn language. But we do know what it generally tends to look like, barring things like language impairment or isolation.

  1. Vocal play. The kid’s figured out that they have a mouth capable of making noise (or hands capable of making shapes and movements) and is practising with it. Back in the day, people used to say that infants would make all the sounds of all the world’s languages during this stage. Subsequent research, however, suggests that even this early, children are beginning to reflect the speech patterns of people around them.
  2. Babbling. Kids will start out with very small segments of language, then repeat the same little chunk over and over again (canonical babbling), and then they’ll start to combine them in new ways (variegated babbling). In hearing babies, this tends to be syllables, hence the stereotypical “mamamama”. In Deaf babies it tends to be repeated hand motions.
  3. One word stage. By about 13 months, most children will have begun to produce isolated words. The intended content is often more than just the word itself, however. A child shouting “Dog!” at this point could mean “Give me my stuffed dog” or “I want to go see the neighbour’s terrier” or “I want a lion-shaped animal cracker” (since at this point kids are still figuring out just how many four-legged animals actually are dogs). These types of sentences-in-a-word are known as holophrases.
  4. Two word stage. By two years, most kids will have moved on to two-word phrases, combining words in a way that shows that they’re already starting to get the hang of their language’s syntax. Morphology is still pretty shaky, however: you’re not going to see a lot of tense markers or verbal agreement.
  5. Sentences. At this point, usually around age four, people outside the family can generally understand the child. They’re producing complex sentences and have gotten down most, if not all, of the sounds in their language.

These general stages of acquisition are very robust. Regardless of the language, modality or even age of acquisition, we still see these general stages. (Although older learners may never completely acquire a language due to, among other things, reduced neuroplasticity.) And the fact that they do seem to be universal is yet more evidence that language acquisition is a unique process that deserves its own field of study.

Great ideas in linguistics: Sociolinguistics

I’ll be the first to admit: for a long time, even after I’d begun my linguistics training, I didn’t really understand what sociolinguistics was. I had the idea that it mainly had to do with discourse analysis, which is certainly a fascinating area of study, but I wasn’t sure it would be weighty enough to serve as the basis for a major discipline of linguistics. Fortunately, I’ve learned a great deal about sociolinguistics since that time.

Sociolinguistics is the sub-field of linguistics that studies language in its social context and derives explanatory principles from it. By knowing about the language, we can learn something about a social reality and vice versa.

Now, at first glance this may seem so intuitive that it’s odd someone would go to the trouble of stating it directly. As social beings, we know that the behaviour of people around us is informed by their identities and affiliations. At the extreme end, that can mean a cultural rule that literally forbids speaking to your mother-in-law, or one that requires replacing the letters “ck” with “cc” in all written communication. But there are more subtle rules in place as well, rules which are just as categorical and predictable and important. And if you don’t look at what’s happening with the social situation surrounding those linguistic rules, you’re going to miss out on a lot.

Case in point: Occasionally you’ll hear phonologists talk about sound changes being in free variation, or rules that are randomly applied. But if you look at the social facts of the community, you’ll often find that there is no randomness at all. Instead, there are underlying social factors that control which option a person chooses as they’re speaking. For example, if you were looking at whether people in Montreal were making r-sounds with the front or back of the tongue and you just sampled a bunch of them, you might find that some people made it one way most of the time and others made it the other way most of the time. Which is interesting, sure, but doesn’t have a lot of explanatory power.

However, if you also looked at the social factors associated with it, and the characteristics of the individuals who used each r-sound, you might notice something interesting, as Clermont and Cedergren did (see the illustration). They found that younger speakers preferred the back-of-the-mouth r-sound, while older people tended to use the tip of the tongue instead. And that has a lot more explanatory power. Now we can start asking questions to get at the forces underlying that pattern: Is this the way younger people have always talked, i.e. some sort of established youthful style, or is there a language change going on and the newer form is going to slowly take over? What causes younger speakers to use the form they do? Is there also an effect of gender, or of who you hang out with?

Figure 1 from Sankoff and Blondeau (2007). (Click the picture to see the whole study.) As you can see, younger speakers use [R] more than older speakers, and the younger a speaker is, the more likely they are to use [R].
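
If you’re curious what that kind of analysis looks like in practice, here’s a minimal sketch in Python with invented counts (not Sankoff and Blondeau’s actual data): cross-tabulating which r-variant speakers use by age group.

```python
# Minimal sketch of a sociolinguistic cross-tabulation, with invented
# data: rate of the back [R] variant broken down by speaker age group.
from collections import defaultdict

# Each token: (age_group, variant), "R" = uvular, "r" = apical.
tokens = [
    ("15-34", "R"), ("15-34", "R"), ("15-34", "R"), ("15-34", "r"),
    ("35-54", "R"), ("35-54", "R"), ("35-54", "r"), ("35-54", "r"),
    ("55+",   "R"), ("55+",   "r"), ("55+",   "r"), ("55+",   "r"),
]

counts = defaultdict(lambda: {"R": 0, "r": 0})
for age, variant in tokens:
    counts[age][variant] += 1

for age, c in sorted(counts.items()):
    rate = c["R"] / (c["R"] + c["r"])
    print(f"{age}: {rate:.0%} [R]")  # younger groups show more [R]
```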

And that’s why sociolinguistics is all kinds of awesome. It lets us peel back the layers and reveal some of the complexity surrounding language. By adding sociological data to our studies, we can reduce statistical noise and reveal new and interesting things about how language works, what it means to be a language-user, and why we do what we do.


New series: 50 Great Ideas in Linguistics

As I’ve been teaching this summer (And failing to blog on a semi-regular basis like a loser. Mea culpa.) I’ll occasionally find that my students aren’t familiar with something I’d assumed they’d covered at some point already. I’ve also found that there are relatively few resources for looking up linguistic ideas that don’t require a good deal of specialized knowledge going in. SIL’s glossary of linguistic terms is good but pretty jargon-y, and the various handbooks tend not to have on-line versions. And even with a concerted effort by linguists to make Wikipedia a good resource, I’m still not 100% comfortable with recommending that my students use it. 

Therefore! I’ve decided to make my own list of Things That Linguistic-Type People Should Know and then slowly work on expounding on them. I have something to point my students to and it’s a nice bite-sized way to talk about things; perfect for a blog.  

Here, in no particular order, are 50ish Great Ideas of Linguistics sorted by sub-discipline. (You may notice a slightly sub-disciplinary bias.) I might change my mind on some of these–and feel free to jump in with suggestions–but it’s a start. Look out for more posts on them. 

  • Sociolinguistics
    • Sociolinguistic variables 
    • Social class and language
    • Social networks
    • Accommodation
    • Style 
    • Language change 
    • Linguistic security
    • Linguistic awareness 
    • Covert and overt prestige 
  • Phonetics
    • Places of articulation 
    • Manners of articulation 
    • Voicing
    • Vowels and consonants 
    • Categorical perception  
    • “Ease”
    • Modality 
  • Phonology
    • Rules
    • Assimilation and dissimilation 
    • Splits and mergers 
    • Phonological change
  • Morphology
    • Paradigm levelling 
    • Case
    • Tense and aspect
    • Affixes  
  • Syntax
    • Hierarchical structure 
    • Competence vs. Performance 
    • Movement 
    • Grammaticality judgements 
  • Semantics 
    • Pragmatics
    • Truth values 
    • Scope 
    • Lexical semantics
    • Compositional semantics 
  • Computational linguistics 
    • Classifiers
    • Natural Language Processing 
    • Speech recognition 
    • Speech synthesis 
    • Automata 
  • Documentation/Revitalization 
    • Language death 
    • Self-determination 
  • Psycholinguistics 
    • Language acquisition 
    • Reading 

How to Take Care of your Voice

Inflammation, polyps and nodules, oh my! Learn about some common problems that can affect your voice and how to avoid them, all in a shiny new audio format. For more tips about caring for your vocal folds and more information about rarer problems like tumours or paralysis, check out this page or this page.

Why can’t dogs choke?

And, a related question, why can they bark but not speak? The answer has to do with one of the things that makes us both human and also significantly more vulnerable, right up there with brains so big they make birth potentially fatal. (Hardly a triumph of effective design.) You see, the human larynx, though in many ways very similar to that of other mammals, has a few key differences. It’s much further down in the neck and it’s pulled the tongue down with it. As a result, we have the unique and rather stupid ability to choke to death on our own food. Of course, an anatomical handicap of that magnitude must have been compensated for by something else, otherwise we wouldn’t be here. And the pay-off in this case was pretty awesome: speech.

All the comforts of agriculture, air conditioning and medicine and he can still breathe and swallow at the same time, the smug pup.

But what does all this have to do with dogs? Well, dogs do have a larynx that looks very like humans’. In fact, they make sound in very similar ways: by forcing air through abducted vocal folds. But dogs have very short vocal folds, and they’re scrunched up right above the root of the tongue. This has two main effects:

  1. There are only a limited number of tongue positions available to dogs during phonation. This means that dogs aren’t able to modulate air with the same degree of fine control that we humans are. (Which is why Scooby-Doo sounds like he really needs some elocution lessons.) We do have this control, and that’s what gives us the capacity to make so many different speech sounds.
  2. Dogs have their soft palate touching their epiglottis when they’re at rest. The soft palate is the spongy bit of tissue at the back of your mouth that separates your nasal and oral cavities. The epiglottis is a little piece of tongue-shaped or leaf-shaped cartilage in your throat that flips down to neatly cover your esophagus when you swallow. If they touch, then you’ve got a complete separation between your food-tube and your air-tube and choking becomes a non-issue.

Some humans have that same whole palate-epiglottis-kissing thing going on: very young babies. You can see what I’m talking about here. That, plus the proportionally huge tongue, is probably why babies can acquire sign quite a bit before they can start speaking; their vocal instruments just aren’t fully developed yet. The upside of this is that babies also don’t have to worry about choking to death.

But once the larynx drops, breathing and swallowing require a bit more coordination. For one thing, while you’re swallowing, breathing is suppressed in the brainstem, so that even if you’re unconscious you don’t try to breathe in your own saliva. We also have a very specific pattern of breathing while we’re eating. Try paying attention next time you sit down to a meal: while you’re eating or drinking you tend to stick to a pattern of exhale — swallow — exhale. That way you avoid incoming air trying to carry little bits of food or water into your lungs. (Aspiration pneumonia ain’t no joke.)

So your dog doesn’t choke for the same reason it can’t strike up a conversation with you: its larynx is too high. Who knows? Maybe in a few hundred years, and with a bit of clever genetic engineering, dogs will be talking, and choking, along with us.

That doesn’t mean that something large or oddly-shaped can’t get stuck in the esophagus, though.

Why do people talk in their sleep?

Sleep-talking (or “somniloquy”, as us fancy-pants scientist people call it) is a phenomenon where a sleeping person starts talking. For example, the internet sensation The Sleep Talkin Man. Sleep talking can range from grunts or moans to relatively clear speech. While most people know what sleep talking is (there was even a hit song about it that’s older than I am) fewer people know what causes it.

Sure, she looks all peaceful, but you should hear her go on.

To explain what happens when someone’s talking in their sleep, we first need to talk about 1) what happens during sleep and 2) what happens when we talk normally.

  • Sleeping normally: One of the weirder things about sleep talking is that it happens at all. When you’re asleep normally, your muscles undergo atonia during the stage of sleep called Rapid Eye Movement, or REM, sleep. Basically, your muscles release and go into a state of relaxation or paralysis. If you’ve ever woken suddenly and been unable to move, it’s because your body is still in that state. This serves an important purpose: when we dream we can rehearse movements without actually moving around and hurting ourselves. Of course, the system isn’t perfect. When your muscles fail to “turn off” while you dream, you’ll end up acting out your dream and sleepwalking. This is particularly problematic for people with narcolepsy.
  • Speaking while awake: So speech is an incredibly complex process. Between a tenth and a third of a second before you begin to speak, brain activation starts in the insula. This is where you plan the movements you’ll need to successfully speak. These come in three main stages, which I like to call breathing, vibrating and tonguing. All speech comes from breath, so you need to inhale in preparation for speaking. Normal exhalation won’t work for speaking, though–it’s too fast–so you switch on your intercostal muscles, in the walls of your ribcage, to help your lungs empty more slowly. Next, you need to tighten your vocal folds as you force air through them. This makes them vibrate (like so) and gives you the actual sound of your voice. By putting different amounts of pressure on your vocal folds you can change your pitch or the quality of your voice. Finally, your mouth needs to manipulate the buzzing sound your vocal folds make to produce the specific speech sounds you need. You might flick your tongue, bring your teeth to your lips, or open your soft palate so that air goes through your nose instead of your mouth. And voilà! You’re speaking.

Ok, so, it seems like sleep talking shouldn’t really happen, then. When you’re asleep your muscles are all turned off, and they certainly don’t seem up to the multi-stage process that is speech production. Besides, there’s no need for us to be making speech movements anyway, right? Wrong. You actually use your speech planning processes even if you’re not planning to speak aloud. I’ve already talked about the motor theory of speech perception, which suggests that we use our speech planning mechanisms to understand speech. And it’s not just speech perception. When reading silently, we still plan out the speech movements we’d make if we were to read out loud (though the effect is smaller with more fluent readers). So you sometimes do all the planning work even if you’re not going to say anything… and one of the times you do that is when you’re asleep. Usually, your muscles are all turned off while you sleep. But sometimes, especially in young children or people with PTSD, the system stops working as well. And if it happens to stop working when you’re dreaming that you’re talking, and therefore planning out your speech movements? You start sleep talking.

Of course, all of this means that some of the things we’ve all heard about sleep talking are actually myths. Admissions of guilt while asleep, for example, aren’t reliable and aren’t admissible in court. (Unless, of course, you really did put that purple beaver in the banana pudding.) It’s also very common; about 50% of children talk in their sleep. Unless it’s causing problems–like waking people you’re sleeping with–sleep talking isn’t generally problematic. But you can help reduce the severity by getting enough sleep (which is probably a good goal anyway), and avoiding alcohol and drugs.

Which are better, earphones or headphones?

As a phonetician, it’s part of my job to listen to sounds very closely. Plus, I like to listen to music while I work, enjoy listening to radio dramas and use a headset to chat with my guildies while I’m gaming. As a result, I spend a lot of time with things on/in my ears. And, because of my background, I’m also fairly well informed about the acoustic properties of earphones and headphones and how they interact with anatomy. All of which helps me answer the question: which is better? Or, more accurately, what are some of the pros and cons of each? There are a number of factors to consider, including frequency response, noise isolation, noise cancellation and comfort/fit. Before I get into specifics, however, I want to make sure we’re on the same page when we talk about “headphones” and “earphones”.

Earphones: For the purposes of this article, I’m going to use the term “earphone” to refer to devices that are meant to be worn inside the pinna (that’s the fancy term for the part of the ear you can actually see). These are also referred to as “earbuds”, “buds”, “in-ears”, “canalphones”, “in-ear monitors”, “IEMs” and “in-ear headphones”. You can see an example of what I’m calling “earphones” below.

Ooo, so white and shiny and painful.

Headphones: I’m using this term to refer to devices that are not meant to rest in the pinna, whether they go around or on top of the ear. These are also called “earphones”, (apparently) “earspeakers” or, my favorites, “cans”. You can see somewhat antiquated examples of what I’m calling “headphones” below.

I mean, sure, it’s a wonder of modern technology and all, but the fidelity is just so low.

Alright, now that we’ve cleared that up, let’s get down to brass tacks. (Or, you might say…. bass tacks.)

  1. Frequency response curve: How much distortion do they introduce? In an ideal world, ‘phones should respond equally well to all frequencies (or pitches), without transmitting one frequency range more loudly than another. This desirable feature is commonly referred to as a “flat” frequency response. That means that the signal you’re getting out is pretty much the same one that was fed in, at all frequency ranges.
    1. Earphones: In general, earphones tend to have a less flat frequency response.
    2. Headphones: In general, headphones tend to have a flatter frequency response.
    3. Winner: Headphones are probably the better choice if you’re really worried about distortion. You should read the specifications of the device you’re interested in, however, since there’s a large amount of variability.
  2. Frequency response: What is their pitch range? This term is sometimes used to refer to the frequency response curve I talked about above and sometimes used to refer to pitch range. I know, I know, it’s confusing. Pitch range is usually expressed as the lowest sound the ‘phones can transmit followed by the highest. Most devices on the market today can pretty much play anything between 20 Hz and 20 kHz. (You can see what that sounds like here. Notice how it sounds loudest around 300 Hz? That’s an artifact of your hearing, not the video. Humans are really good at hearing sounds around 300 Hz which [not coincidentally] is about where the human voice hangs out.)
    1. Earphones: Earphones tend to have a narrower frequency range than headphones. Of course, there are always exceptions.
    2. Headphones: Headphones tend to have a wider frequency range than earphones.
    3. Winner: In general, headphones have a wider frequency range. That said, it’s not really that big of a deal. You can’t really hear very high or very low sounds that well because of the way your hearing system works, regardless of how well your ‘phones are delivering the signal. Anything that plays sounds between 20 Hz and 20,000 Hz should do you just fine.
  3. Noise isolation: How well do they isolate you from sounds other than the ones you’re trying to listen to? More noise isolation is generally better, unless there’s some reason you need to be able to hear environmental sounds as well whatever you’re listening to. Better isolation also means you’re less likely to bother other people with your music.
    1. Earphones:  A properly fitted pair of in-ear earphones will give you the best noise isolation. It makes sense; if you’re wearing them properly they should actually form a complete seal with your ear canal. No sound in, no sound out, excellent isolation.
    2. Headphones: Even really good over-ear headphones won’t form a complete seal around your ear. (Well, ok, maybe if you’re completely bald and you make some creative use of adhesives, but you know what I mean.) As a result, you’re going to get some noise leakage.
    3. Winner: You’ll get the best noise isolation from well-fitting earphones that sit in the ear canal.
  4. Noise cancellation: How well can they correct for atmospheric sounds? So noise cancellation is actually completely different from noise isolation. Noise isolation is something that all ‘phones have. Noise-cancelling ‘phones, on the other hand, actually do some additional signal processing before you get the sound. They “listen” for atmospheric sounds, like an air-conditioner or a car engine. Then they take that waveform, reproduce it and invert it. When they play the inverted waveform along with your music, it exactly cancels out the noise (there’s a little code sketch of the idea after this list). Which is awesome and space-agey, but isn’t perfect. They only really work with steady background noises. If someone drops a book, they won’t be able to cancel that sudden, sharp noise. They also tend not to work as well with really high-pitched noises.
    1. Earphones: Noise-cancelling earphones tend not to be as effective as noise-cancelling headphones until you get to the high end of the market (think $200 plus).
    2. Headphones: Headphones tend to be slightly better at noise-cancellation than earphones of a similar quality, in my experience. This is partly due to the fact that there’s just more room for electronics in headphones.
    3. Winner: Headphones usually have a slight edge here. Of course, really expensive noise-cancelling devices, whether headphones or earphones, usually perform better than their bargain cousins.
  5. Comfort/fit: Are they comfy?
    1. Earphones: So this is where earphones tend to suffer. There is quite a bit of variation in the shape of the cavum conchae, which is the little bowl shape just outside your ear canal. Earphone manufacturers have to have somewhere to put their magnets and drivers and driver support equipment, and it usually ends up in the “head” of the earphone, nestled right in your cavum conchae. Which is awesome if it’s a shape that fits your ear. If it’s not, though, it can quickly start to become irritating and eventually downright painful. Personally, this is the main reason I prefer over-ear headphones.
    2. Headphones: A nicely fitted pair of over-ear headphones that covers your whole ear is just incredibly comfortable. Plus, they keep your ears warm! I find on-ear headphones less comfortable in general, but a nice cushy pair can still feel awesome. There are other factors to take into account, though; wearing headphones and glasses with a thick frame can get really uncomfortable really fast.
    3. Winner: While this is clearly a matter of personal preference, I have a strong preference for headphones on this count.
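
As promised above, here’s a minimal sketch in Python of the core trick behind noise cancellation: invert a steady background waveform and add it back in, so the two cancel out. (An idealized illustration only; real noise-cancelling ’phones have to estimate the noise with microphones and apply the inversion adaptively, in real time.)

```python
# Idealized noise cancellation: adding an inverted copy of a steady
# background hum cancels it, leaving only the signal you want.
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1, sample_rate, endpoint=False)

music = np.sin(2 * np.pi * 440 * t)      # the signal you want to hear
hum = 0.5 * np.sin(2 * np.pi * 60 * t)   # steady 60 Hz background hum
heard = music + hum                      # what reaches your ear

anti_noise = -hum                        # the inverted waveform
cancelled = heard + anti_noise           # hum + anti-hum = silence

print(np.allclose(cancelled, music))     # True: only the music is left
```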

So, for me at least, headphones are the clear winner overall. I find them more comfortable, and they tend to reproduce sound better than earphones. There are instances where I find earphones preferable, though. They’re great for travelling or if I really need an isolated signal. When I’m just sitting at my desk working, though, I reach for headphones 99% of the time.

One final caveat: the sound quality you get out of your ‘phones depends most on what files you’re playing. The best headphones in the world can’t do anything about quantization noise (that’s the noise introduced when you convert analog sound-waves to digital ones) or a background hum in the recording.
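
If you want to see quantization noise for yourself, here’s a quick Python sketch: snap a smooth sine wave to a coarse grid of levels (roughly 4-bit) and measure the error that introduces. The usual rule of thumb is about 6 dB of signal-to-noise per bit.

```python
# Quantization noise: rounding a smooth waveform to a small number of
# discrete levels introduces an error signal you can actually measure.
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
analog = np.sin(2 * np.pi * 440 * t)        # smooth wave in [-1, 1]

steps = 7                                   # 15 levels, roughly 4-bit
digital = np.round(analog * steps) / steps  # snap each sample to a level

noise = analog - digital                    # the quantization error
snr_db = 10 * np.log10(np.mean(analog**2) / np.mean(noise**2))
print(f"SNR: {snr_db:.1f} dB")              # about 25 dB for ~4 bits
```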

Is dyslexia genetic?

I’m a graduate student in linguistics. I have a degree in English literature. I love reading, writing and books with the fiery passion of a thousand suns. And I am dyslexic. While I was very successful academically in secondary and higher education, let’s just say that primary school was… rough. I’ve failed enough spelling tests that I could wallpaper a small room with them. At this point I’m a fluent reader, mainly because no one’s asking me to read things without context. (For an interesting experimental look at the effects of world-knowledge and context on reading, I’d recommend Paul Kolers’ 1970 article, “Three stages of reading”.) These days, language processing problems tend to be flashes in the pan rather than a constant barrier I’m pushing against. I tend to confuse “cloaca” and “cochlea”, for example, and I feel like I use “etymology” and “entomology” correctly at chance. But still, it would be nice to know that my years of suffering in primary school were due to genetic causes and not because I was “dumber” or “lazier” than other kids. And recent research does seem to support that: it looks like dyslexia probably is genetic.

They all look good to me.


First off, a couple of caveats. Dyslexia is an educational diagnosis. There’s a pretty extensive battery of tests, any of which may be used to diagnose dyslexia. The International Dyslexia Association defines dyslexia thusly:

It is characterized by difficulties with accurate and / or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge.

Which sounds pretty standard, right? But! There are a number of underlying causes that might lead to this. One obvious one is an undiagnosed hearing problem. Someone who only has access to part of the speech signal will probably display all of these symptoms. Or someone in an English-only environment who speaks, say, Kola, as their home language. It’s hard to learn that ‘p’ means /p/ if your language doesn’t have that sound. Of course, educators know that these things affect reading ability. But there are also a number of underlying mental processes that might lead to a diagnosis of dyslexia, which may or may not be related to each other but are all almost certainly genetic. Let’s look at a couple of them.

  • Phonological processing. I’ve talked a little bit about phonology before. Basically, someone with a phonological disorder has a hard time with language sounds specifically. For example, they may have difficulty with rhyming tasks, or figuring out how many sounds are in a word. And this does seem to have a neurological component. One study shows that, when children with dyslexia were asked to come up with letters that rhymed, they did not show activity in the temporoparietal junction, unlike their non-dyslexic peers. Among other things, the temporoparietal junction plays a role in interpreting sequences of events.
  • Auditory processing. Auditory processing difficulties aren’t necessarily linguistic in nature. Someone who has difficulty processing sounds may be tone deaf, for example, unable to tell whether two notes are the same or different. For dyslexics, this tends to surface as difficulty with sounds that occur very quickly. And there are pretty much no sounds that humans need to process more quickly than speech sounds. A flap, for example, lasts around 20 milliseconds. To put that in perspective, that’s about 15 times shorter than a fast blink. And it looks like there’s a genetic cause for these auditory processing problems: dyslexic brains have a localized asymmetry in their neurons. They also have more, and smaller, neurons.
  • Sequential processing. For me, this is probably the most interesting. Sequential processing isn’t limited to language. It has to do with doing or perceiving things in the correct order. So, for example, if I gave you all the steps for baking a cake in the wrong order, you’d need to use sequential processing to put them in the correct order. And there’s been some really interesting work done, mainly by Beate Peter at the University of Washington (represent!), that suggests that there is a single genetic cause responsible for a number of rapid sequential processing tasks, and that one of the effects of an abnormality in this gene is dyslexia. But people with this mutation also tend to be bad at, for example, touching each of their fingers to their thumb in order.
  • Being a dude. Ok, this one is a little shakier, but depending on who you listen to, dyslexia is either equally common in men and women, 4 to 5 times more common in men, or 2 to 3 times more common in men. This may be due to structural differences, since it seems that male dyslexics have less gray matter in language processing centers, whereas females have less gray matter in sensing and motor processing areas. Or the difference could be due to the fact that estrogen does very good things for your brain, especially after traumatic injury. I include it here because sex is genetic (duh) and seems to (maybe, kinda, sorta) have an effect on dyslexia.

Long story short, there’s been quite a bit of work done on the genetics of dyslexia and the evidence points to a probable common genetic cause. Which in some ways is really exciting! That means that we can better predict who will have learning difficulties and work to provide them with additional tutoring and help. And it also means that some reading difficulties are due to anatomy and genes. If you’re dyslexic, it’s because you’re wired that way, and not because your parents did or didn’t do something (well, other than contribute your genetic material, obvi) or because you didn’t try hard enough. I really wish I could go back in time and tell that to my younger self after I completely failed yet another spelling test, even though I’d copied the words a hundred times each.

But the genetic underpinning of dyslexia might also seem like a bad thing. After all, if dyslexia is genetic, does that mean that children with reading difficulties will just never get over them? Not at all! I don’t have space here to talk about the sort of interventions and treatments that can help dyslexics. (Perhaps I’ll write a future post on the subject.) Suffice to say, the dyslexic brain can learn to compensate and adapt over time. Like I said above, I’m currently a very fluent reader. And dyslexia can be a good thing. The same skills that can make learning to read hard can make you very, very good at picking out one odd thing in a large group, or at surveying a large quantity of visual information quickly — even if you only see it out of the corner of your eye. For example, I am freakishly good at finding four-leaf clovers. In high school, I collected thousands of them just while doing chores around the farm. And that’s not the only advantage. I’d recommend The Dyslexic Advantage (it’s written for a non-scientific audience) if you’re interested in learning more about the benefits of dyslexia. The authors point out that dyslexics are very good at making connections between things, and suggest that they enjoy an advantage when reasoning spatially, narratively, about related but unconnected things (like metaphors) or with incomplete or dynamic information.

The current research suggests pretty strongly that dyslexia is something you’re born with. And even though it might make some parts of your school career very difficult, it won’t stop you from thriving. It might even end up helping you later on in life.