Linguistics Researchers Adding Ultrasound Tool to Study Speech Formation

Lori Harwood
July 9, 2003

Technical innovations in how researchers study the physical mechanisms of speech may help preserve the dying languages of small cultures, and may also help those with speech disorders communicate more effectively.

Diana Archangeli, linguistics professor and associate dean for research in the College of Social and Behavioral Sciences at the University of Arizona in Tucson, and Robert Kennedy, a UA linguistics post-doctoral associate, have received a grant from the James S. McDonnell Foundation to visually study speech mechanics. The one-year, $93,770 grant for their proposal, "Coordinating mental, motor, and perceptual constraints in language," will be used to purchase an ultrasound unit that will allow them to acquire moving cross-sectional images of the vocal tract.

Previous research relied upon such techniques as X-ray and magnetic resonance imaging (MRI). But X-ray filming can be dangerous because of radiation, and MRI cannot capture moving images. By contrast, ultrasound makes for an ideal recording mechanism. It is safe, provides good resolution images in real time and is portable.

The goal of the project is to better understand the relation between the mental representation of sounds and their physical manifestation in language. Archangeli and Kennedy will study cases in numerous languages where they say there is reason to believe that there is a mismatch between the mental representation (the representation in the mind of the speaker) and what is heard (what is actually perceived through the ear of the listener).

In these cases, the mismatch may arise in two possible ways. Linguists usually assume that articulatory gestures are reliably recovered from auditory perception, in which case the mismatch exists between how speakers think they produce a particular sound and how they actually pronounce it. However, such factors as one's native language can heavily influence how precisely one perceives language sounds, so it may instead be that some mismatches exist between how speakers pronounce a sound and what listeners actually hear.

Using ultrasound, Archangeli and Kennedy will record images of the tongue gestures made when articulating the potential mismatches, to determine whether the mismatch lies between the mind and the mouth, or between the mouth and the ear.

According to Archangeli, "The research is important for the languages we intend to study, some of which are underdocumented and endangered. Solid generalizations about languages must be based on a diverse sample of languages. This diversity diminishes when the last speaker of a language dies. To the extent that speakers of a dying language will work with us, our project will contribute to their maintenance by documenting the precise articulations while speakers remain alive."

Kennedy says an example of a potential mismatch can be seen in Turkish. Spoken Turkish requires all vowels in a word to share a specific tongue gesture.

"Either they have the tongue bunched forward, as in the word 'ipler' (ropes), or they all have the tongue bunched back, as in the word 'pullar' (stamps). The sounds between the vowels are perceived to be 'the same' by speakers of Turkish regardless of the surrounding vowels. In other words, the 'l' in 'ipler' is not different from the 'l' in 'pullar,'" Kennedy said.
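The vowel-harmony pattern Kennedy describes can be sketched as a simple classification rule. The sketch below is illustrative only, not part of the researchers' project; the front/back vowel sets follow the standard description of Turkish vowels, and the function name is invented for this example.

```python
# Illustrative sketch of Turkish front/back vowel harmony.
# Front vowels are produced with the tongue bunched forward,
# back vowels with the tongue bunched back.
FRONT_VOWELS = set("eiöü")
BACK_VOWELS = set("aıou")


def harmony_class(word):
    """Return 'front', 'back', or 'mixed' based on a word's vowels."""
    vowels = [ch for ch in word.lower() if ch in FRONT_VOWELS | BACK_VOWELS]
    if vowels and all(v in FRONT_VOWELS for v in vowels):
        return "front"
    if vowels and all(v in BACK_VOWELS for v in vowels):
        return "back"
    return "mixed"


print(harmony_class("ipler"))   # the vowels i, e are both front
print(harmony_class("pullar"))  # the vowels u, a are both back
```

The research question is whether the consonants between such vowels, like the 'l' in both words, are truly identical in articulation, something the rule above says nothing about and only imaging can reveal.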

Archangeli and Kennedy cite this as a potential mismatch. It could be that the two words actually have a different pronunciation of 'l' despite perceptions to the contrary.

"Such a difference could reveal a relationship between physical pronunciation of sounds and our abilities to perceive them that is far more complex and abstract than linguists typically claim. As a result, the project has quite a potential impact on current understanding of the cognitive organization of language," he said.

Archangeli and Kennedy plan to include a diverse group of languages in their study, including Turkish, Arabic, American languages like Menomini and Navajo, and African languages like Wolof and Kinande.

Archangeli also notes that future uses of the ultrasound could include working with children with speech impediments, assisting with foreign language learning, and helping deaf people learn to articulate oral language more effectively.

The University of Arizona is one of a handful of research facilities exploring the promising use of ultrasound technology in linguistics. Other institutions using ultrasound for speech are the University of British Columbia, Haskins Labs (Yale) and the University of Maryland-Baltimore County. The UA linguistics department, which has one of the top-10 graduate programs in the country, has state-of-the-art equipment for studying speech, including perception experiment facilities, speech analysis software, recording equipment, air flow and pressure measurement, palatography, and speech synthesis and recognition.

Michael Hammond, head of the UA linguistics department, said, "This is a wonderful honor for Archangeli and Kennedy, and a great opportunity for linguistics at Arizona. I don't believe there is another linguistics program in the country that has more technological resources than we do now for the experimental investigation of speech."