Sign language: not just semaphore

From “The Economist”, London

The speed and subtlety with which deaf people use sign language to communicate surprises many hearing people, but then deaf people are better than hearing people at interpreting the signs. Recent research has not only shown how much better; it has also shed new light on the way language is organised in the brain.

In animals such as cats, the brain’s development is affected by experience. What effect does deafness have on the development of human brains? What, for example, happens to the part of the brain that is normally specialised for the analysis of sound, the auditory cortex? Dr Helen Neville of the Salk Institute in La Jolla, California, has begun to find out. She uses “event-related potentials,” the characteristic electrical signals that can be recorded from the scalp when the brain does a certain task.

She showed people flashes of light and recorded the appropriate signals from the region of the auditory cortex. Deaf people gave a much larger response if the flash was seen in the periphery of the visual field, which suggests that peripheral vision had expanded to take over the hearing part of the brain in such people. And deaf people are five to six times better at perceiving motion in the periphery of their visual field.

Dr Neville then tried testing the abilities of the two different halves of the brain. Hearing people are better at detecting a moving object when it is in their left visual field — that is, when it is seen by the right hemisphere of the brain. No surprise there: what are loosely called spatial skills are concentrated in the right hemisphere. But for deaf people it is the opposite: their left hemispheres are better at detecting movement. It seems natural enough that the deaf should use more of their brains to analyse vision. But why transfer some of that ability from one half of the brain to the other?

The answer came from Dr Neville’s latest experiment. She compared 12 congenitally deaf people with 12 people who were not deaf but learnt American sign language as their first language, because they had deaf parents. The sons and daughters of deaf parents showed no expansion of the visual skills into the auditory cortex, but they did show the sensitivity of the left hemisphere to motion. In other words, there are two separate effects: the enhancement of vision to compensate for deafness and the specialisation of the left hemisphere for processing sign language. Why the left hemisphere? Because that is where language is housed. This has led Dr Neville to the remarkable conclusion that the brains of native “speakers” of sign languages are treating some movement analysis as a linguistic, not a spatial, skill.

Other evidence bears this out. Dr Ursula Bellugi, also at the Salk Institute, has studied four congenitally deaf people who have damaged right hemispheres (for example, as a result of strokes) and four who have damaged left hemispheres. She finds that damage to the left hemisphere interferes with such peculiarly linguistic skills as rhyming ability (in American sign language, for instance, apple rhymes with key because both employ the same sign in different positions) or syntax (for example, the direction of movement of the hands broadly corresponds to word order as a way of indicating the subject and object). Damage to the right hemisphere affects spatial skills, such as drawing ability, but not language.

One woman with a damaged right hemisphere, when asked to describe with her hands the layout of her room, “drew” (in the air) an image in which everything was crammed into the right half of the space in front of her. Yet, when she used sign language, her gestures filled the whole of the space — a dramatic demonstration of the way in which the two tasks were controlled by different parts of the brain, the one needing both hemispheres, the other being controlled by the language centre in the intact left hemisphere.

Dr Bellugi and her colleagues have surprised themselves and much of the scientific world with their discoveries. For they have convincingly shown that sign languages are formal, grammatical languages and not, as was previously thought, charades whose grammar was borrowed from spoken tongues. Each sign language (and there are many) has a grammar just like a spoken language, but that grammar owes nothing to any spoken language in its details. It has evolved separately. For example, in American sign language, “religious” is a movement imposed on the sign for “church” and “business-like” is a similar movement imposed on the word for “business,” but if the movement is sharp the signs become, respectively, “narrow-minded” and “proper” — a typical grammatical rule.

Studies of how children acquire sign language confirm this. True sign languages develop only after they have passed through several generations, and it seems to be the children who create the grammar. The same thing happens with pidgin dialects. They turn into grammatical creole languages only when learnt by a second generation as its first tongue.

Dr Pat Launer at the University of California at San Diego has found that the way in which children acquire sign language has many parallels with the way other children acquire spoken languages. Children “babble” — they make poor imitations of signs in which both hands do symmetrical things, just as speaking children use symmetrical words like “baba.” Parents use exaggerated, ungrammatical signs just as other parents use “motherese.” Contrary to received wisdom, pantomime — the equivalent of onomatopoeia — is unimportant and rare in the early talk of children. Even where it does exist (such as the sign for milk, which is one of the earliest words learnt and imitates the action of milking a cow), it is doubtful that children understand the allusion. Most astonishing of all, deaf children learning sign language as their first language begin to “speak” at an earlier age than do talking children and have a larger vocabulary at 15 months. There is no evidence that English displaces the primary role of sign language in those hearing children who learn sign language first.

Dr Neville has found that deaf people whose first language is American sign language show much the same pattern of brain activity when they “read” sign language as do hearing people when they read English. But when the deaf people read English their brain activity is different from that of hearing people reading English. English, to the deaf, is a poor second language, much clumsier than signs. Who is to say they are wrong?

Copyright, “The Economist,” London.

Permanent link to this item

https://paperspast.natlib.govt.nz/newspapers/CHP19850724.2.98

Bibliographic details

Press, 24 July 1985, Page 16

Word Count: 1,096
