
A micro that listens to a paralysed boy

By STEVE CONNOR in London

Nicholas Stephens was only 18 months old when he became paralysed overnight, the victim of a bacterial infection. Five years later, a computer that he can talk to gives him a chance of a normal life in the family home in the southern England seaside town of Portsmouth.

Nicholas's paralysis is so severe that a mechanical ventilator has to do his breathing for him. This means that he has to talk through a tube in his throat, and his talking speed is determined by the rate at which the machine delivers and expels air from his lungs. Nevertheless, Nicholas has succeeded in learning to talk clearly and almost normally. He can say up to six words in one breath, depending on the length of the words, and he is given 15 artificial breaths a minute.

Nicholas's one piece of good fortune is to have a father with the foresight to develop a computer that could understand simple, spoken commands. Through this computer, Nicholas is now able to expand his knowledge of the world, play games, switch electrical appliances on and off and even turn the pages of a book. It has become an electronic extension of himself.

The struggle Nicholas has had mastering his own voice underlines the complexity of speech. It is the most intricate of social activities and yet something we all take for granted. As complex as present-day computers are, they come nowhere near to matching the intricacies of the human brain. Nicholas may have mechanical difficulties in talking, but there is nothing wrong with his equipment for understanding the words and sentences he hears.

Getting computers to talk — speech output — is easier than getting them to understand what is being said — speech input. Computers are dumb contraptions. They may be able to imitate sounds, but they find it incredibly difficult to understand what they mean. Take an example of two quite different sentences that sound similar: "It is a grey day" and "It is a grade A." The brain, as far as we know, not only matches words with a library of sounds in the memory, but also takes into account a range of other important features, such as the context in which the sounds are made, the inflexion the speaker puts on them, and so on.

Present research into voice processing has concentrated on matching the pattern of a sound with patterns held in a memory. This means that the computer must be trained to recognise a word against a "template" that has already been fed into its memory. Nicholas's computer, a standard home micro, has such a device, a word-recognition unit made in the United States, which can store the templates of about 200 words. It is not able to identify accurately words spoken by more than one person, because of the differences between people's speech.

Nicholas's father, Ronald, a researcher at Portsmouth Polytechnic's School of Biological Sciences, says: "The recognition accuracy is highest if spoken words are stored in well-defined groups known as nodes, and each node should not contain more than 50 templates representing 50 spoken words."

When these conditions are met, he says, the device is able to identify accurately 99 words out of a hundred. The machine cannot, of course, identify words that are not in its dictionary.

The device connected to Nicholas's computer is relatively unsophisticated. It has a limited dictionary, it can deal only with words one at a time, not continuous phrases or sentences, and there is still the problem of understanding sounds that are similar but which mean totally different things, like "grey day" and "grade A."

Experts in voice processing are now experimenting with speech input and output in far more complicated situations. Their aim is to refine speech processing so that it can be used in situations where the operators of a computer would find it easier to tell the computer what to do by speaking to it, rather than feeding in commands from a keyboard.

Under the present system of template matching, it is difficult to build voice-recognition devices that can identify more than a couple of hundred words, and researchers in the field are now searching for an approach fundamentally different from matching the templates of two sounds. Given the problems, the day when we will be able to hold a conversation with a computer is a long way off. For the moment, we will have to be content with simple verbal instructions, such as telling a computer to dial a particular telephone number.
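
To make the idea concrete, here is a minimal sketch, in modern Python, of the kind of template matching the article describes: each spoken word is stored as a feature "template", templates are grouped into nodes of no more than 50 words, and an incoming sound is matched to the nearest stored template in the active node. The toy feature vectors, the Euclidean distance measure and the rejection threshold are illustrative assumptions, not details of the actual word-recognition unit fitted to Nicholas's micro.

    import math

    MAX_TEMPLATES_PER_NODE = 50  # the limit the article quotes for good accuracy


    class TemplateMatcher:
        """Single-word recogniser: the nearest stored template wins."""

        def __init__(self):
            # node name -> list of (word, feature vector) templates
            self.nodes = {}

        def train(self, node, word, features):
            # Store one spoken example of `word` in the given node.
            templates = self.nodes.setdefault(node, [])
            if len(templates) >= MAX_TEMPLATES_PER_NODE:
                raise ValueError(f"node '{node}' already holds {MAX_TEMPLATES_PER_NODE} templates")
            templates.append((word, tuple(features)))

        def recognise(self, node, features, threshold=1.0):
            # Compare the incoming sound against every template in the active
            # node and return the closest word, or None if nothing is close
            # enough (the machine cannot identify words not in its dictionary).
            best_word, best_distance = None, float("inf")
            for word, template in self.nodes.get(node, []):
                distance = math.dist(features, template)  # Euclidean distance
                if distance < best_distance:
                    best_word, best_distance = word, distance
            return best_word if best_distance <= threshold else None


    matcher = TemplateMatcher()
    # Toy three-number "feature vectors" standing in for processed speech.
    matcher.train("appliances", "lamp", (1.0, 0.2, 0.1))
    matcher.train("appliances", "radio", (0.1, 0.9, 0.4))
    print(matcher.recognise("appliances", (0.9, 0.3, 0.1)))  # -> "lamp"
    print(matcher.recognise("appliances", (5.0, 5.0, 5.0)))  # -> None, no template close enough

Keeping each node small limits how many stored templates an utterance can be confused with, which is why the accuracy the article quotes depends on words being sorted into well-defined groups.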

But for children like Nicholas this represents a quantum leap in the quality of life. — Copyright, London Observer Service.

And some ask: what use is a computer?

Permanent link: https://paperspast.natlib.govt.nz/newspapers/CHP19861104.2.112.9

Bibliographic details: Press, 4 November 1986, Page 29

Word count: 794
