Tagged: future of language

  • The Diacritics 9:00 am on November 17, 2011 Permalink | Reply
    Tags: email, future of language, grammar b, written language

    The effects of txt 

    (Posted by Sandeep)

    If you’ve ever transcribed a free-form conversation, you have probably been struck by how little of a spoken exchange is made up of true grammatical sentences. Listen to your conversations—we hardly ever talk “properly.” We interrupt each other, we lose our train of thought or we misconjugate verbs and get flustered.

    We’re not all careful speakers at all times: redundancies, mistakes and misinterpretations are as central to human language as descriptiveness and precision are.

    Despite this, our educational system—in fact, all of literate society in every language—demands that we write in grammatical sentences. We can’t write our academic essays in phrases and incomplete thoughts. Our literate culture requires completeness and grammaticality. Deviations from this sentence model are dismissed, at best, as art projects or, at worst, serious misunderstandings of grammar.

    Not everyone believes writing should be this way. Thirty years ago, a composition theorist named Winston Weathers proposed “Grammar B,” an alternate style providing, in his words, “options that do not yet exist but which would be beneficial if they did.” His Grammar B sought to convey information from author to reader in the same way it travels from speaker to listener. He promoted a written representation of human thought that mimicked the mechanisms of spoken language—with interruptions, redundancies and visual elements (in lieu of cues like intonation).

    Winston Weathers.

    It was a radical idea with several merits. In fact, for a writing project three years ago, I rewrote a sociology essay into Grammar B. The result was easier to read and understand than the “Grammar A” version. It was also more engaging and conversational.

    But it’s not a coincidence that Weathers’ book is out of print. Writing, especially academic writing, is driven by a cycle that both rewards Grammar A and produces it. I would never have actually submitted my Grammar B essay to my sociology professor and expected a positive response.

    So if we write in Grammar A and speak and think in Grammar B, are we being cognitively torn apart? Are we being required to think in two different ways? To use language incongruously and inconsistently?

    Consider, at least, that spoken language dwarfs writing in our species’ timeline. We started speaking at least 200,000 years ago, around when Homo sapiens emerged. Written language, on the other hand, appeared no earlier than 10,000 years ago, and it wasn’t until about 200 years ago that mass literacy became common.

    Significant swaths of today’s world remain illiterate. All societies in the world are still based fundamentally on spoken language. In fact, all literate societies are both oral and written—and the conventional wisdom until recently was that a society can be completely oral, but it cannot be completely written.

    World rates of literacy.

    If our spoken language is different from our written language, what does it mean that the literate establishment requires such rigidity in writing? It’s obvious that I’m writing this post in Grammar A. I write all of my papers in Grammar A, and you probably do too. That’s considered normal. But when I speak in Grammar A, you think I am working hard to be a careful speaker: I am being formal, or I am delivering a speech.

    So we recognize the merits of Grammars A and B in different situations. But I’m not fool enough to think that academic writing will ever be made up of Grammar B works. It’s a fun idea, but it’s not sensible for any mainstream academic or student to discard the established rules of grammar, even if Grammar B is clearer.


    I once wondered if the dichotomy between written and oral traditions would continue to grow until they had little to no relationship to one another: whether Grammar A’s rate of change would be so much slower than Grammar B’s that they eventually split.

    In my family’s first language, Kannada, a beautiful literary tradition spanning 15 centuries continues to flourish. But today’s formalized Kannada grammar and vocabulary have very little obvious relation to the spoken form—so much so that a Kannada-user like me, familiar only with speaking the language, can barely understand formal text.

    This phenomenon is called diglossia, and I wonder if English is headed toward it. To be sure, all literary languages have some spoken/written diglossia. When we have the luxury to be careful (like in writing), we are generally more grammatical. And written language usually changes more slowly than spoken language because of various forces—compare English spellings to pronunciations, for example.

    But forms of communication like short and ungrammatical text messages, or even longer, conversational emails, have thrown us a linguistic curveball.

    For the first time in our species’ history, we are constantly and continuously using written communication for real-time conversations. We IM, we text and we e-mail. Just 20 years ago, the only written communication reliably employed by most people was letter writing. Now, there are entire online communities whose primary, if not only, form of communication is through written language.

    What does this mean for the future of human communication? Will diglossia be thwarted? Or will there be an even greater divide between spoken (including instant, written messages) and formalized written English?

    Spoken language uses subtle cues like intonation, pausing and volume to deliver meaning. Written language lends itself to longer reflection and more careful word and phrasing selection. I’m not constructing the two in opposition to each other, although it is obvious which is more fundamental to our species.

    We have used spoken and written language mostly for different purposes, so they may have developed divergent characteristics for that reason. But as we communicate more and more through text, our use and understanding of language will change fundamentally—even if we never actually write our essays in Grammar B.

    (A version of this post appeared in The (Duke) Chronicle on September 23, 2010.)

    • Lane 10:58 am on November 17, 2011 Permalink | Reply

      In my book I argued that really successful prescriptivism, which enforces the rules of “Grammar A” religiously in writing and encourages them sternly in “proper” speech, leads inevitably to diglossia in the long run. My exhibits are Arabic, with its early, successful prescriptive grammar tradition freezing Classical Arabic while spoken Arabic changed normally over the centuries; and French, where nearly 400 years of a French Academy has also frozen a formal version of the language that no one speaks. (An alien linguist would give the French verb paradigm as “je parl, tu parl, il parl, on parl, vous parlé, il parl.” Our alien would conclude that one negative particle, “pas”, is sufficient in nearly all cases. Etc.)

      So I think even those who value stability in formal language must either let Grammar A (I’d just say “writing”) change gradually in line with natural change in Grammar B (“speech”), or watch diglossia take root, with its inevitable plaints that “nobody speaks real Arabic anymore.”

      • The Diacritics 3:35 pm on November 17, 2011 Permalink | Reply

        Fantastic. Is that “You Are What You Speak”? Can’t wait to read it over winter break.

  • The Diacritics 6:01 am on October 4, 2011 Permalink | Reply
    Tags: austin, future of language

    #sorryimnotsorry (an addendum) 

    (Posted by Sandeep)

    Discussing the significance of Clinton’s word choice of “regret” makes me wonder about the strength of Austin’s third justification, that natural language can convey all distinctions required by people.

    Does Clinton really feel “regret” as we might understand it? Is “regret” an appropriate word? It’s probably one of the most significant words in his whole statement — he doesn’t actually ever say “sorry” — so we might assume that he weighed his choice of “regret” carefully.

    So Clinton’s choice is fallible; fine. The way we use natural language isn’t infallible, but in his third justification,

    (3) that the words available in a natural language suffice to satisfactorily convey all distinctions that we might like to make

    Austin seems to hint at the opposite, arguing that generations of language users have honed and perfected the spoken tongue over the centuries.

    Does that mean that somewhere along the line language couldn’t convey all distinctions? That seems highly unlikely, unless we go so far back in human history that we’re no longer talking about the modern species. If something needs to be said, it will be said.

    Language isn’t along some sort of scala naturae with modern language at the peak. Language changes, but it’s not necessarily improving. What I mean to say is that perhaps Austin’s second and third justifications are sometimes at odds with each other — we can’t always acknowledge that our language is inadequate and arbitrary while still glorifying the existing language as complete.

    Other troublesome questions remain, too: is a person who claims to speak English always required to know the semantic shades of meaning of all words? Some people do for some words, and some people don’t for the same words.

    Austin talks about dictionaries and how to use them. I wonder how useful a dictionary is in terms of ordinary language. Yes, we use them, but what about before dictionaries? How were people able to deduce shades of meaning?

    Ordinary language is what people do say. People speak, and people have spoken for the history of our species. Here is where Austin’s scala naturae also uncomfortably collides with the science of language development. Oral language came first, and there were no dictionaries then. All present literate societies are fundamentally oral and secondarily written. Austin argues that, well, language has developed more shades of distinction over time. We know that to be dubious, but it would, in theory, square with the fact that dictionaries are a recent, helpful invention.

    The relationship between Austin’s assertions and what is known about historical language development is complex, but I can’t help feeling that there is some gulf of understanding between the two.

  • The Diacritics 9:00 am on September 13, 2011 Permalink | Reply
    Tags: future of language

    Frenemy, pls refudiate haters 

    (Posted by Sandeep)

    Around every new year, I have fun looking at different dictionaries’ and publications’ lists of “the new words” of the year. Sociolinguistic commentators always have a field day around every January: Some loudly lament the decline of English, and others marvel at the flexibility of our language. Everyone seems to love a good invective against modern society—but why do we care about change in language so much?

    Natural systems are dynamic; any scientist will argue that. But most things man-made—from buildings to morals—strive to achieve an intrinsic stability. Nature fluctuates, but the concrete and the abstract of man-made creations are designed to be constant. Language, one of the most fundamentally human of characteristics, straddles this dichotomy. It is at once a natural system encoded into society and into the human brain—an ability to process and synthesize communicative gestures of the same species—as well as a synthetic system based on arbitrarily assigning sounds and symbols to the human experience.

    As such, language might be torn between natural drift and conscious shift in ways that no mathematical or historical model will ever be able to describe or predict.

    Amid this complexity, change in language—a perceived or documented shift in semantics, vocabulary, grammar, pronunciation, or writing—has engaged humanity because it is associated with some of the most important elements of the human condition: culture, identity and the origin of man.

    We know at least some of the why of language change. Cultures attack, conquer and interact with one another. Language academies are created and maintained for the explicit purpose of standardizing written and spoken forms because there’s a historically and geographically universal perception that society in general and language in particular are falling precipitously from a refined past. Certain forms of grammar and vocabulary are stigmatized and others are praised; the same ones might be conversely mocked and admired by different groups. The inherent variability in the human experience, a necessary component for any change, means that when different cultures interact, there is a productive exchange of ideas and language.

    This is all well and good. We can name what has happened and offer some explanations as to why these changes occurred. I have shown above that language is inconstant: This much every linguistics student knows. But if language is a man-made system, then it was created to be constant. Man works toward homeostasis.

    Language, ostensibly, arose because certain neural pathways more complex than those of our predecessors allowed for abstract thought and the use of symbols. This adaptive development allowed for complex social interactions and communication that made one Homo species more fit than another.

    And yet, as humans, we—by the same or different neural mechanisms—are capable of reflection on our behavior in a way that hasn’t been observed in other animals. So for the entire history of our species, where we are today is as much a product of natural forces as it is of deliberate choice. Seen in this context, if humans use language to communicate, and if language is useless without facilitating communication, then we ask why vocabulary isn’t finite and syntax isn’t fixed—this would seem to be the most effective way to ensure meaningful interaction. But the point is that it’s not.

    This problem has occupied researchers and still isn’t resolved—but will it ever be?

    It seems to me that change in language might be such a complex, multifaceted, multivariable process that we may never be able to understand how all of the forces work together in one whole, coherent way such that we’ll be able to competently describe or predict past and future change in every respect. Language is natural and synthetic and neither, and it displays trends characteristic of both and none. Human behavior, the human mind and human interactions are all inconceivably complex variables. Linguists might always be consigned to dividing up language change into neat parcels and analyzing the hell out of each. This might not be such a desperate thing to do.

    As long as we recognize that the entirety of language change defies conclusive explanation, we can concern ourselves with functional explanations of certain trends in a way that is useful and productive.

    Even if we don’t fully understand the mechanisms and processes of language change, it would be silly to believe that our language is declining: It’s difficult to objectively characterize any change as degradation. We’re not moving toward some end-goal, no matter what sort of a harbinger the “texting generation” is. The truth is that every generation has always experienced language change and called it decline. Change is just change.

    As far as I’m concerned, the “new words of the year” lists are only useful in reminding me how woefully square (we’re still using that word, right?) I am.

    (A version of this post appeared in The (Duke) Chronicle on January 13, 2011.)

    • JP 9:57 am on September 13, 2011 Permalink | Reply

      Like this piece, except, maybe, the third paragraph. It works well rhetorically, but we don’t really know that. Got here via languagehat.com and think I will be a regular..:-) Saudações from Brazil.

    • johnwcowan 3:45 pm on September 13, 2011 Permalink | Reply

      Well, we don’t always seek homeostasis. In particular, we know that some language change happens in order to differentiate an in-group: notable cases are the characteristic Martha’s Vineyard phonology, which is actually more common in young people now than it is among their parents (who were trying to assimilate to off-islander English), and the Northern Cities Chain Shift, which seems to be a way of differentiating white Northern Americans from black Americans and Canadians.

      • The Diacritics 11:34 am on September 22, 2011 Permalink | Reply

        Right. There are a few really well-known examples of people consciously changing their speech. What I meant to say was that at any given point, people want stability in their language — it wouldn’t do to have the meaning of, say, “table” change from day to day. We seek homeostasis in describing our world within our communities.
