Tagged: texting

  • John Stokes 12:13 pm on March 12, 2012
    Tags: grammaticality judgments, literature, newspapers, text speak, texting, txt

    Those narrow-minded, prescriptivist . . . texters? 

    Texting encourages us to be creative and unconstrained with our language, right? Traditional print media, fettered as they are by the bounds of Standard English, promote more rigid acceptability and grammaticality judgments, don’t they? Aren’t those prescriptivist editors and stodgy old style columnists just concerned with dictating how we speak and write?

    Not so, says some new research from University of Calgary linguist Joan Lee.

    Lee’s Master’s thesis tested students with varying levels of exposure to text/instant messaging versus traditional media (newspapers, magazines, literature, nonfiction). She hypothesized that those with comparatively more exposure to the free-form nature of ‘text speak’ would be comparatively more lenient in their acceptance of novel and deviant forms of words, both morphological and orthographic. What she found was the opposite. Students who had spent more time reading books and newspapers were more likely to judge novel words or deviant forms of words as acceptable. Those who had more exposure to text speak tended to be considerably more rigid and constrained in their acceptability judgments.

    This result is moderately mind-boggling. Think of all you know about texting and compare that to your expectations of the effects of traditional media on language use. From texters, we see such classics as ‘wot r u doin 2day’ and ‘ur stoopid dood’ and ‘kewl’ and, perhaps best of all, ‘kthxbai’. There’s a bunch of research discussing why texters say and spell things like this. In the preview of her thesis linked above, Lee cites one such study that says people are trying to be playful, spontaneous, socially interactive, and even creative. Compare this to what you read in the New York Times, where there’s actually someone who writes articles nitpicking the grammar of other traditional-print-media articles, including sometimes the NYTimes itself!

    I haven’t read the whole 150-page thesis yet, but it seems there are several plausible explanations for what’s going on. One idea is that for a novel form to be acceptable to texters, it must be a novel form commonly seen in text speak. So while there may not be the same types of grammatical constraints, there are conventions that are respected nonetheless. Or perhaps text speak is still free-form and not subject to constraints in the ways that traditional media might be, but free-formedness doesn’t actually equate with creativity. So while you’re tossing standardized grammar out the window, you’re not necessarily looking for the most precise, novel, creative word to express a given idea. You may be using words with odd morphology and spelling, but you’re probably not reaching deep into the dictionary to find those words. This means that readers of text speak don’t often see words they’re unfamiliar with and thus don’t often have to figure out what those unfamiliar words might mean.

    If, on the other hand, you’re a big-time reader of traditional print media, you probably encounter unusual words all the time. To figure out what they mean, you either infer their meaning from context or use your knowledge of productive morphology like -ity or -ness. This could explain why texters are ok with ur and kthxbai and wot, but not some of the novel forms that Lee proposed in her study, like canality and groundness. If you read and don’t text, the textisms may be ridiculous to you, but you might be able to come up with a meaning for canality and groundness that makes sense, and thus conclude they’re acceptable words.

    I’m sure much more will be said on this front in the near future. For now, I think it’s enough to note this interesting bit of research and sign off — TTYL, folks.

    • Chad Nilep 7:39 pm on March 13, 2012

      I haven’t read the thesis at all, but it seems there are several plausible arguments that this phenomenon is not going on. According to Lee’s abstract, she administered a questionnaire to 33 university students. Small sample size, homogeneity of the subject pool, or confounded variables might cause such research to turn up artefacts that don’t hold in the larger population of texters and newspaper-readers to which they are extrapolated.

      It sounds like an interesting study, and I’ll add it to my queue of things to read in my spare time. But I’m not confident that Lee’s findings would necessarily hold up in a larger study. I’m more confident, however, that if much more is said on this front in the near future, some newspapers will report the results as though they had been proven true among a much broader general public.

    • Josh 11:33 pm on April 4, 2012

      I can’t see past page 17 of the article, but the abstract says the sample size is 33, which seems awfully low. I’d also be curious if she controlled for gender/income/some proxy for intelligence since it’s easy to imagine some variable like this being highly correlated with text/instant message frequency. But I imagine these issues were probably addressed in pages 18-150!

      • Josh 11:34 pm on April 4, 2012

        Note: I didn’t see Chad’s comment when I first posted.

  • The Diacritics 9:00 am on November 17, 2011
    Tags: email, grammar b, texting, written language

    The effects of txt 

    (Posted by Sandeep)

    If you’ve ever transcribed a free-form conversation, you have probably been struck by how little of a spoken exchange is made up of true grammatical sentences. Listen to your conversations—we hardly ever talk “properly.” We interrupt each other, we lose our train of thought or we misconjugate verbs and get flustered.

    We’re not all careful speakers at all times: redundancies, mistakes and misinterpretations are as central to human language as descriptiveness and precision are.

    Despite this, our educational system—in fact, all of literate society in every language—demands that we write in grammatical sentences. We can’t write our academic essays in phrases and incomplete thoughts. Our literate culture requires completeness and grammaticality. Deviations from this sentence model are dismissed, at best, as art projects or, at worst, serious misunderstandings of grammar.

    Not everyone believes writing should be this way. Thirty years ago, a composition theorist named Winston Weathers proposed “Grammar B,” an alternate style providing, in his words, “options that do not yet exist but which would be beneficial if they did.” His Grammar B sought to convey information from author to reader in the same way it travels from speaker to listener. He promoted a written representation of human thought that mimicked the mechanisms of spoken language—with interruptions, redundancies and visual elements (in lieu of cues like intonation).

    Winston Weathers.

    It was a radical idea with several merits. In fact, for a writing project three years ago, I rewrote a sociology essay into Grammar B. The result was easier to read and understand than the “Grammar A” version. It was also more engaging and conversational.

    But it’s not a coincidence that Weathers’ book is out of print. Writing, especially academic writing, is driven by a cycle that rewards Grammar A and produces it too. I would never have actually submitted my Grammar B essay to my sociology professor and expected a positive response.

    So if we write in Grammar A and speak and think in Grammar B, are we being cognitively torn apart? Are we being required to think in two different ways? To use language incongruously and inconsistently?

    Consider, at least, that spoken language dwarfs writing in our species’ timeline. We started speaking at least 200,000 years ago, around when Homo sapiens emerged. Written language, on the other hand, appeared no earlier than 10,000 years ago, and it wasn’t until about 200 years ago that mass literacy became common.

    Significant swaths of today’s world remain illiterate. All societies in the world are still based fundamentally on spoken language. In fact, all literate societies are both oral and written—and the conventional wisdom until recently was that a society can be completely oral, but it cannot be completely written.

    World rates of literacy.

    If our spoken language is different from our written language, what does it mean that the literate establishment requires such rigidity in writing? It’s obvious that I’m writing this post in Grammar A. I write all of my papers in Grammar A, and you probably do too. That’s considered normal. But when I speak in Grammar A, you think I am working hard to be a careful speaker: I am being formal, or I am delivering a speech.

    So we recognize the merits of Grammars A and B in different situations. But I’m not foolish enough to think that academic writing will ever consist of Grammar B works. It’s a fun idea, but it’s not sensible for any mainstream academic or student to discard the established rules of grammar, even if Grammar B is clearer.


    I once wondered if the dichotomy between written and oral traditions would continue to grow until they had little to no relationship to one another: whether Grammar A’s rate of change would be so much slower than Grammar B’s that they eventually split.

    In my family’s first language, Kannada, a beautiful literary tradition spanning 15 centuries continues to flourish. But today’s formalized Kannada grammar and vocabulary have very little obvious relation to the spoken form—so much so that a Kannada-user like me, familiar only with speaking the language, can barely understand formal text.

    This phenomenon is called diglossia, and I wonder if English is headed toward it. To be sure, all literary languages have some spoken/written diglossia. When we have the luxury to be careful (like in writing), we are generally more grammatical. And written language usually changes more slowly than spoken language because of various forces—compare English spellings to pronunciations, for example.

    But forms of communication like short and ungrammatical text messages, or even longer, conversational emails, have thrown us a linguistic curveball.

    For the first time in our species’ history, we are constantly and continuously using written communication for real-time conversations. We IM, we text and we e-mail. Just 20 years ago, the only written communication reliably employed by most people was letter writing. Now, there are entire online communities whose primary, if not only, form of communication is through written language.

    What does this mean for the future of human communication? Will diglossia be thwarted? Or will there be an even greater divide between spoken (including instant, written messages) and formalized written English?

    Spoken language uses subtle cues like intonation, pausing and volume to deliver meaning. Written language lends itself to longer reflection and more careful word and phrasing selection. I’m not constructing the two in opposition to each other, although it is obvious which is more fundamental to our species.

    We have used spoken and written language mostly for different purposes, so they may have developed divergent characteristics for that reason. But as we communicate more and more through text, our use and understanding of language will change fundamentally—even if we never actually write our essays in Grammar B.

    (A version of this post appeared in The (Duke) Chronicle on September 23, 2010.)

    • Lane 10:58 am on November 17, 2011

      In my book I argued that really successful prescriptivism, which enforces the rules of “Grammar A” religiously in writing and encourages them sternly in “proper” speech, leads inevitably to diglossia in the long run. My exhibits are Arabic, with its early, successful prescriptive grammar tradition freezing Classical Arabic while spoken Arabic changed normally over the centuries; and French, where nearly 400 years of a French Academy has also frozen a formal version of the language that no one speaks. (An alien linguist would give the French verb paradigm as “je parl, tu parl, il parl, on parl, vous parlé, ils parl.” Our alien would conclude that one negative particle, “pas”, is sufficient in nearly all cases. Etc.)

      So I think even those who value stability in formal language must either let Grammar A (I’d just say “writing”) change gradually in line with natural change in Grammar B (“speech”), or watch diglossia take root, with its inevitable plaints that “nobody speaks real Arabic anymore.”

      • The Diacritics 3:35 pm on November 17, 2011

        Fantastic. Is that “You Are What You Speak”? Can’t wait to read it over winter break.
