Tuesday, March 10, 2009

Family Guy Linguistics

(screen shot from Family Guy via Hulu)

The most recent Family Guy episode ("Family Gay": Season 7, Episode 8) relied on some interesting linguistics in two separate jokes.

First, when Peter enters his brain-damaged horse, 'Till Death, in a race, it runs amok in the crowd and the announcer on the loudspeaker says the following (about the 5:50 mark on Hulu):

What's this? It looks like 'Till Death has taken a right turn and is heading into the stands. Dear god! I could describe the horror I am witnessing, but it is so unfathomably ugly and heart rendering that I cannot bring myself to do so, although I do possess the necessary descriptive powers. Haw, well at least the horse raced past the class of visiting deaf second graders... oh no! Dear god, he's going back! Oh, I know you can't hear any screams, but I assure you they are signing frantically, just as fast as their little fingers can shape the complicated phonemes necessary to convey dread and terror.

Having never studied the linguistics of sign language, my first reaction was to ask: are there truly phonemes in sign language? In spoken language, a phoneme is a conceptual clustering of phonetic segments into a single group. For example, in English the segment /p/ can occur with a little extra burst of air called aspiration (typically at the beginning of words), or without it, as at the end of words (try saying the words "pat" and "tap" with your hand in front of your lips and, if you're a native speaker of English, you should be able to feel the little burst of air that accompanies the /p/ in "pat" but not in "tap"). So, there is a difference in how we articulate /p/ depending on where it occurs in a word. Nonetheless, we still consider both versions of /p/ to be "the same sound." We say there is a single phoneme /p/ with two phonetic realizations, aspirated [pʰ] and unaspirated [p].

But this use of "phoneme" is based on spoken language. How does this relate to signed languages like ASL? After a quick bit of Googling, I've discovered that the term "phoneme" is in fact used by various sign language scholars to refer to segments of signed language, though more as a conceptual borrowing than as a term referring to sound. The most relevant discussion I found was in an abstract for the paper "Sign language phoneme transcription with PCA-based representation" by Kong, W.W. and Ranganath, S. (from the National University of Singapore). They "first apply a semi-automatic segmentation algorithm which detects minimal velocity and maximal change of directional angle to segment the hand motion trajectory of signed sentences. We then extract feature descriptors based on principal component analysis (PCA) to represent the segments efficiently. These high level features are used with k-means to cluster the segments to form phonemes." In this approach, the sign language analogue of a phonetic segment is a feature set built from points of "minimal velocity and maximal change of directional angle," and phonemes are approximated as k-means clusters of those feature sets. Cool stuff, for sure. But it's not clear to me whether this computational approach is consistent with how humans naturally perceive and analyze sign language segments. I'm still looking for more on that topic.
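To make the abstract's pipeline a bit more concrete, here's a toy sketch of the general idea: flatten each motion "segment" into a feature vector, project onto principal components via SVD, then cluster with plain Lloyd's k-means to form candidate "phonemes." Everything here (the fake trajectory data, the dimensions, the cluster count) is invented for illustration and is not the paper's actual feature extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for segmented hand-motion trajectories: each row is one
# "segment" flattened into a 10-dimensional feature vector. Two distinct
# motion types are simulated as two well-separated Gaussian blobs.
segments = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(20, 10)),   # one motion type
    rng.normal(loc=3.0, scale=0.3, size=(20, 10)),   # another motion type
])

# PCA via SVD: project the segments onto the top-2 principal components.
centered = segments - segments.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:2].T          # shape (40, 2)

# Plain Lloyd's k-means: the resulting clusters play the role of "phonemes".
def kmeans(x, k, iters=50):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(features, k=2)
print(labels)
```

With data this cleanly separated, the two simulated motion types fall into two clusters; real sign trajectories are, of course, far messier.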

Nonetheless, there remains the issue of the Family Guy writers getting the nature of phonemes fundamentally wrong. Phonemes convey no meaning (setting aside for the moment the weak possibility of sound-symbolic associations). The writers, who were clearly willing to do a little research (even if a very little), could easily have substituted "morphemes" for "phonemes" in the script and would have had the same joke without the error. And just how complicated are the signs for dread and terror, anyway?

(pssst, I've clearly spent too much time reading linguistics because when I first read "the horse raced past the class of visiting deaf second graders" I assumed it was a garden path sentence similar to Bever's famed example "The horse raced past the barn fell." It took me several reads to realize that, nope, "raced" is not a reduced relative clause, but rather a run-of-the-mill past tense main verb. A nice example of construction priming, eh? I'm primed to read any "X raced past Y" clause as being a reduced relative).


Second, the writers went out of their way to construct a joke not only based on conversational pragmatics, but based on EXPLAINING Gricean maxims (about the 9:50 mark on Hulu.com).

Lois: Peter, what exactly did they inject you with?

Peter: Oh all sorts of things. Hepatitis vaccine, a couple of steroids, the gay gene, calcium, a vitamin B extract...

Lois: What did you just say?

Peter: The gay gene. I assume that's the one you meant even though it wasn't literally the last thing I said when you said "what did you just say," it's just that clearly (it) was most unusual... (note: the pronoun "it" was reduced to near imperceptibility)


In this exchange, Peter explains that, under normal circumstances, after listing a set of items and someone asks "what did you just say" he would interpret "what" as referring to the most recent item in the list (presumably because of the semantics of "just"). But in this case, one earlier item was more "unusual" than the others.

Let's re-explain this using conversational pragmatics and Gricean maxims, okay?

Peter lists five items. He believes that one of the five items is controversial while the other four are not. He believes Lois believes this too. The controversial item is in the middle of the list. Peter believes he articulated each item clearly such that Lois could properly hear all items. He believes Lois believes this too. So, when Lois asks "what did you just say," Peter believes 1) that she heard the most recent item clearly and 2) that this item has little informational value. He believes Lois believes this too. Peter believes Lois is not flouting conversational norms. He believes Lois believes this too. Therefore, Peter believes Lois is trying to make her contribution (her question) informative (maxim of quantity). He believes Lois believes this too. Peter believes that repeating a well heard, uncontroversial item has no information value. He believes Lois believes this too. Thus, he infers that "what" must refer to some item other than the last one. He believes Lois believes this too. Peter believes there is only one item on the list that meets the information value requirement. He believes Lois believes this too.
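Just for fun, Peter's inference can be caricatured in a few lines of code. This is a toy model, not real pragmatics: the function, the "unusualness" scores, and the threshold are all invented for illustration. The idea is that "what did you just say?" defaults to the most recent item unless some earlier item is markedly more informative.

```python
def resolve_what(items, unusualness, threshold=0.5):
    """Return the item that 'what did you just say?' most plausibly refers to.

    items       -- things just said, in order
    unusualness -- parallel list of how surprising each item is (0..1)
    threshold   -- how much more unusual an earlier item must be than the
                   last one to override the default 'most recent' reading
    """
    best = max(range(len(items)), key=lambda i: unusualness[i])
    if unusualness[best] - unusualness[-1] > threshold:
        # Maxim of quantity: Lois must want the informative item repeated.
        return items[best]
    # Literal reading: the last thing said.
    return items[-1]

injections = ["hepatitis vaccine", "a couple of steroids",
              "the gay gene", "calcium", "a vitamin B extract"]
scores = [0.1, 0.2, 0.9, 0.05, 0.1]

print(resolve_what(injections, scores))   # → the gay gene
```

Reducing mutual belief to a single threshold comparison is exactly the kind of flattening the paragraph above is making fun of, which is rather the point.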

This is a long-winded way of saying the same thing Peter did, but we linguists have to make things complicated and technical. It's our job.

Sunday, March 8, 2009

Taco Bell Grammar

The grammar of my recent Taco Bell receipt is remarkably interesting. Here's my faithful transcription of the actual receipt pictured above:

THANK'S FOR CHOOSING TACO BELL 3009
HAVE YOU WON YOUR $ 1000 YET ?
IF YOU DON'T PLEASE ASK A CASHIER
HOW YOU CAN . . .
WE APPRECIATE YOUR BUSINESS
PLEASE LET US KNOW HOW WE DID IT
CALL US AT ( 510 ) 844-0764
OR CALL THE MANAGER.

Linguistically, there are some obviously interesting and not so obviously interesting features of this receipt.

1. They used an apostrophe for "thanks"
2. An unnecessary space between "$" and "1000"
3. An unnecessary space between "yet" and the question mark
4. Incorrect verb choice in the conditional clause ("do" instead of "have")
5. Extraneous pronoun "it" at the end of a clause
6. Unnecessary spaces after and before "(" and ")"

(1) is a common typo/error/misunderstanding. (2), (3), and (6) seem to be some spacing convention of the receipt format, but the convention is unpredictable because the "$", "?", "...", "(" and ")" all follow it, but the " ' ", "-", and "." do not (a tokenizer could be built to account for this fairly easily because the only thing that hinges on this is correctly identifying 1000 as a dollar amount and the "510" as an area code). (4) and (5) seem to be legitimate grammar errors.
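As a sketch of that tokenizer idea (the condensed receipt string and the regexes below are my own illustration, not anything from an actual POS system), a few substitutions rejoin the spaced-out tokens so the dollar amount and area code can be picked out:

```python
import re

receipt = "HAVE YOU WON YOUR $ 1000 YET ?  CALL US AT ( 510 ) 844-0764"

# Undo the receipt's spacing convention before extracting:
# "$ 1000" -> "$1000", "( 510 )" -> "(510)", "YET ?" -> "YET?".
normalized = re.sub(r"\$\s+(\d+)", r"$\1", receipt)
normalized = re.sub(r"\(\s*(\d+)\s*\)", r"(\1)", normalized)
normalized = re.sub(r"\s+([?.!])", r"\1", normalized)

# Now the dollar amount and the area code are trivially identifiable.
dollar = re.search(r"\$(\d+)", normalized).group(1)
area_code = re.search(r"\((\d{3})\)\s*\d{3}-\d{4}", normalized).group(1)
print(dollar, area_code)   # → 1000 510
```

Note that the apostrophe, the hyphen in the phone number, and the sentence-final period need no special handling, matching the observation that those characters don't follow the spacing convention.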

My guess is that each Taco Bell can personalize the message and the local manager either made the mistake or failed to identify and correct the mistake.

Finally, and perhaps most compelling of all, at the bottom, they got the Bagging Summary wrong. There were three items, not two.
