This post is pretty special: it’s the 100th post I’ve made since starting my blog! It’s hard to believe I’ve been doing this so long. I started blogging in 2012, in my final year of undergrad, and now I’m heading into my last year of my PhD. Crazy how fast time flies.
OK, back on topic. As I was looking back over everything I’ve written, it struck me that 99 posts’ worth of text in a very specific subject domain (linguistics) and a very specific register (informal) should be enough text to train a simple text generator. So how did I go about building a blog bot? It was pretty easy! All I needed was:
- 67,000 words of text (all blog posts before this one)
- 1 R script to tidy up the text
- 1 Python script to train a Markov Chain text generator
OK, now for the actual training of the model. If you want to play around with this yourself, all my code and text is up on GitHub here.
- First, grab your text. I downloaded all my blog posts as an XML file and used a quick little R script to snag just the text I was interested in.
- Next, train your model. I used PyMarkovChain, but you can always write your own if you’re feeling it.
- Finally, use the model to generate text.
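If you’d rather roll your own than use PyMarkovChain, the core idea is tiny: record which words follow which, then walk that table at random. Here’s a minimal sketch (the `train_markov`/`generate` names and the toy corpus are my own illustration, not code from my repo):

```python
import random
from collections import defaultdict

def train_markov(text):
    """Build a first-order Markov model: map each word to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, max_words=20, seed=None):
    """Walk the chain from a start word, picking each next word at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy example; imagine feeding in all 67,000 words instead.
corpus = "the cat sat on the mat and the cat ran"
model = train_markov(corpus)
print(generate(model, "the", seed=0))
```

Because repeated words can be followed by several different words, each walk through the chain can produce a different (and frequently nonsensical) sentence, which is exactly the behavior you’ll see below.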
Alright, now I’ve got an (admittedly very, very dumb) little program that kinda knows what I sound like when I’m blogging. How well does it do at pretending to be me?
I’m going to break eye contact, look down at your own personalized ASR system
Well that’s… vaguely disturbing. But also not really very human-sounding, and certainly not something I’d write in a blog post. Let’s give it a couple more tries and see if it does any better.
- But, if frosting has to have a career where you learned it from Clarice
- We only perceive the things that our bodies and sensory organs and brains can perceive, and we really don’t care about to tell the sounds in the second part, I was expecting to see whether something is grammatical is to put up a bunch of historical stuff that we’ve all heard about the structure in the arts and sciences were depressed
- (Which is really interesting–I’d love to learn your language doesn’t make you very, very good a recognizing speech
- Believe me, I know what you can uncontract them and what’s the take-away
- People with short face syndrome–yes, that’s a classical description of what a language does, along with parallels drawn to another, related, languages
- Short answer: they’re all correct
- And those speakers are aware of
- The Job Market for Linguistics PhDsWhat do you much
Hmm, not so great. The only output that sounded good to me was “Short answer: they’re all correct”. And that was just the exact same phrasing I used in this blog post. No wonder it sounds just like me; it is me!
So it looks like I won’t be handing the reins over to the Making Noise and Hearing Things bot any time soon. True, I could have used a fancier tool, like a Recurrent Neural Network. But to be perfectly honest, I have yet to see any text generation system that can produce anything even close to approximating a human-written blog post. Will we get there? Maybe. But natural language generation, especially outside of really formulaic things like weather or sports reporting, is a super hard problem. Heck, we still haven’t gotten to the point where computers can reliably solve third-grade math word problems.
The very complexities that make language so useful (and interesting to study) also make it so hard to model. Which is good news for me! It means there’s still plenty of work to do in language modelling and blogging.