
The effects of early auditory deprivation - insights from children with cochlear implants

This update explores the importance of early auditory stimulation by considering the development of speech processing skills in profoundly deaf children who have received a cochlear implant. This literature is relevant to children with Down syndrome because, like them, children with cochlear implants have hearing difficulties, but unlike children with Down syndrome, they do not have oral-motor issues.


Pettinato, M. (2009) The effects of early auditory deprivation - insights from children with cochlear implants. Down Syndrome Research and Practice, 12(3), 176-178. doi:10.3104/updates.2119

The development of speech is an area of weakness for most children with Down syndrome, but the underlying causes are poorly understood. Similarly, the degree to which poor speech may affect the development of language and memory abilities in this population remains to be determined. The literature on the development of speech in Down syndrome tends to concentrate on motor issues, and although these are clearly part of the problem [1,2], the extent to which they can account for the difficulties in this area is not entirely clear. Similarly, although the majority of children with Down syndrome have some form of hearing loss, usually because of glue ear but also due to sensorineural losses [3,4], evidence concerning the contribution of hearing loss to delays in the development of speech and language is inconclusive (for a comprehensive review, see ref 5).

Before reviewing the literature on cochlear implants, it is useful to recapitulate why psycholinguists and speech scientists think that babies' exposure to speech sounds is so crucial for the development of speech and language.

Within the first year of life, typically developing infants acquire an acute sensitivity to the phonological and acoustic features of their native language. As early as four months, infants show a preference for the most common stress pattern in the words of the surrounding language [6,7] ('stress' refers to the most prominent syllable in a word; for example, in 'banana' it is the second syllable, but in 'daffodil' it is the first), and by six months, infants seem to have established what the vowels of their native language are [8]. For consonants, this process is thought to be accomplished by one year [9]. Infants are also building up an awareness of the most common ways in which sounds occur together (the technical term for this is 'phonotactics'), for example the fact that in English, 'bl' is a frequent combination, whereas 'lb' is not [10].

Infants face a difficult task when learning the words of their language: how can they recognise words in fluent speech when there are no clear acoustic cues to word boundaries and most utterances consist of several words? (Think of the experience of listening to an unfamiliar language.) However, knowledge of the sounds of their native language, and of how they can combine, helps infants begin to recognise separate units in the continuous stream of speech. For example, since the majority of English words start with a stressed syllable, a good strategy for determining word boundaries would be to assume the start of a new word on hearing a stressed syllable. By nine months of age, infants do indeed seem to use this strategy [11]. Friederici and Wessels showed that infants also use frequent phonotactic patterns to recognise words in fluent speech [10].
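The stress-based strategy described above can be sketched in a few lines of code. This is purely a toy illustration of the idea, under the simplifying assumption that syllables arrive already marked for stress (infants, of course, work from the acoustic signal itself, not from annotations):

```python
def segment_by_stress(syllables):
    """Toy word segmenter: start a new word at every stressed syllable.

    syllables: list of (syllable, is_stressed) pairs.
    Returns a list of 'words', each a list of syllables.
    """
    words, current = [], []
    for syl, stressed in syllables:
        # A stressed syllable signals a likely word onset, so close
        # off the word collected so far (if any) and start a new one.
        if stressed and current:
            words.append(current)
            current = []
        current.append(syl)
    if current:
        words.append(current)
    return words


# "DOCtor VISits BAby" - stress-initial English words in fluent speech
utterance = [("doc", True), ("tor", False),
             ("vis", True), ("its", False),
             ("ba", True), ("by", False)]
print(segment_by_stress(utterance))
# → [['doc', 'tor'], ['vis', 'its'], ['ba', 'by']]
```

The strategy mis-segments words that do not begin with a stressed syllable (such as 'guitar'), which is consistent with it being a useful heuristic rather than a complete solution.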

These studies indicate that infants are already learning about, and performing quite complex analyses on, the sound structure of their native language long before they begin to utter their first words. It seems that this exposure to speech, and the intensive analysis of its sound patterns, is an important preparation for later, more complex language learning. In an important study, Newman et al. retrospectively compared the performance of children who at two years had high versus low vocabularies [12]. These children had all taken part in a variety of speech perception tasks during the first year of their lives. The two groups differed significantly on speech segmentation tasks (i.e. the ability to use phonological cues such as stress or phonotactics to recognise words in continuous speech): the children who later had small vocabularies had performed significantly worse on these tasks during the first year than the children who went on to develop large vocabularies at two years. A second study was carried out between the ages of 4 and 6, and again, children who obtained higher scores on a variety of language tests had also performed significantly better on speech segmentation tasks as babies. As better segmentation and higher language scores could simply have been a consequence of overall better cognitive abilities in this group, the researchers also assessed the two groups of children on non-linguistic cognitive abilities. The groups did not differ on measures of cognitive development, and it was concluded that the relationship between segmentation skills and later language development was not based on general cognitive abilities, but rather reflected a specific ability to recognise regularities in speech patterns and to use these to learn language.

Surprisingly, very little is known about how speech discrimination and segmentation abilities develop in infants with Down syndrome and how they may relate to their difficulties with language development. The studies that have been carried out assessing speech processing in infants with Down syndrome are not fully conclusive, but they do indicate that the same methodologies that have been used with typically developing infants can be applied [13,14]. Research into the neurology of hearing and speech processing suggests that further investigations of speech processing would indeed be warranted: Jiang et al. report evidence of either delayed or atypical auditory system development in infants with Down syndrome [15], and neuroanatomical studies in older individuals with Down syndrome have found that cell columns were further apart and cell density was decreased in the areas responsible for auditory processing [16-19].

In the absence of information on infants with Down syndrome, it may be informative to look at another clinical population in which early speech processing is disrupted. This is the case for children who were born profoundly deaf and who have received cochlear implants. Although a cochlear implant provides auditory stimulation, it is important to note that it does not restore fully normal hearing. Cochlear implants have at most 22 to 24 channels, so all sounds have to be broken down into and processed as at most 22 to 24 frequency bands, whereas the normally hearing ear can distinguish many hundreds of different frequencies. These children are therefore not only deprived of sound stimulation from birth, but even once the implant has been fitted, the auditory input remains less than optimal. Although it would be premature to draw direct parallels between the two clinical populations*, there are some surprising similarities in their language development.
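The loss of spectral detail can be illustrated with a simple sketch. The code below is not a real implant processing chain; it merely collapses a sound's spectrum into a small number of broad, equal-width frequency bands (a simplifying assumption — real implants use non-uniform bands and envelope extraction) to show how two tones that a normally hearing ear resolves easily can fall into the same coarse channel:

```python
import numpy as np


def coarse_channels(signal, sample_rate, n_channels=22):
    """Collapse a signal's magnitude spectrum into n_channels
    equal-width frequency bands. Illustrative only."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Split the 0 Hz .. Nyquist range into n_channels bands and keep
    # only the average energy within each band.
    edges = np.linspace(0, freqs[-1], n_channels + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])


# Two pure tones 50 Hz apart: clearly distinct to a normal ear, but
# after reduction to 22 broad channels they land in the same band.
sr = 16000
t = np.arange(sr) / sr  # one second of audio
tone_a = np.sin(2 * np.pi * 1000 * t)
tone_b = np.sin(2 * np.pi * 1050 * t)
print(np.argmax(coarse_channels(tone_a, sr)),
      np.argmax(coarse_channels(tone_b, sr)))
# → 2 2  (both tones peak in the same coarse channel)
```

With 22 bands spanning 0-8000 Hz, each band is roughly 360 Hz wide, so distinctions finer than that are lost — a rough intuition for why implant-mediated hearing conveys degraded spectral information.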

Like children with Down syndrome, children with cochlear implants are considerably delayed in their language acquisition [20,21] . This includes difficulties with articulation and intelligibility [22,23], even though there is no reason to expect difficulties with oral-motor skills in children with cochlear implants. Researchers also report greater variability in sound productions than in typically developing children [24], and this inconsistency in production has been described as a key feature of the speech of children with Down syndrome [25,26] .

The two groups not only have difficulties in producing speech, but they also have difficulties with retaining speech in short-term memory, also known as phonological short-term memory [7,22]. For most phonological short-term memory assessments, participants are asked to repeat either numbers or words; accurate perception and good speech are therefore necessary to complete these tasks. Since hearing and speech are areas of weakness for both groups of children, a number of studies have tried to establish their role in phonological short-term memory (PSTM) problems. Both groups of children seem to have PSTM problems which go beyond a mere difficulty in reproducing the words they have been asked to remember: when tasks did not require a verbal response and children could instead point to pictures or written words of the items they were asked to remember, impaired phonological short-term memory was still present [7,27]. Similarly, presenting the items to remember as pictures or written text, so that hearing difficulties could be discounted, did not improve phonological short-term memory performance in either group [7,27]. It has therefore been suggested that for both groups there is a specific difficulty with retaining, scanning and retrieving speech in short-term memory which is independent of the immediate effects of hearing or speech problems.

Children with cochlear implants vary in how they communicate. They can be divided into two groups: those who use speech as their main mode of communication and those who use a mixture of signs, lip-reading and speech, also known as 'total communication' [2]. Some studies [22,27] have found that the mode of communication after implantation has a strong influence on speech and short-term memory abilities. Children who used speech as their main means of communication had clearer speech, spoke faster and, importantly, had better PSTM than children who used total communication [22,27]. Authors have commented that the amount of experience with speech sounds is the determining factor, irrespective of whether this comes through the auditory modality or indirectly through visual and proprioceptive cues to speech sounds (i.e. feeling where and how in the mouth sounds are produced) [22,27]. Other studies contend that age of implantation, rather than communication mode, has the stronger influence on outcomes for children with cochlear implants [20,28]. Both positions emphasise the importance of early exposure to speech sounds, but the extent to which visual and proprioceptive cues can lessen the effect of auditory deprivation is still debated.

The studies with children with cochlear implants indicate that early experience with speech sounds not only significantly affects the development of speech, but also feeds into more abstract abilities such as being able to process speech in short-term memory. By analogy, some of the problems with the acquisition of speech sounds (see ref 2, p. 25) and the later PSTM deficit in children with Down syndrome may be due in part to early difficulties with processing speech sounds [15]. At present it is only possible to speculate on this issue, but as the studies on speech perception in infants with Down syndrome have shown that the same methodologies can be used with this population, it is hoped that future investigations will begin to address this gap in our knowledge.

What tentative conclusions may be drawn for intervention strategies? The literature on speech processing in typical development and in children with cochlear implants suggests that future interventions for children with Down syndrome may have to put greater emphasis on early speech perception, but this needs to be confirmed with evidence from research with children with Down syndrome. In the meantime, the importance of stimulating interest in and sensitivity to speech patterns early on, for example by playing sound games with infants, should not be underestimated. Furthermore, research into speech and language development in children with cochlear implants may also inform intervention for children with Down syndrome. If communication mode is confirmed to be an important factor in the development of language abilities in the former group, similar recommendations may be applicable to intervention strategies for the latter.

* As in Down syndrome, outcomes vary hugely for children with cochlear implants; among the factors which affect this are the age at which the implant was fitted, how many channels the implant has, the amount of hearing before the implant was fitted, the amount of experience with the implant, and the type of communication used [20].

References

  1. Kumin L. Speech intelligibility and childhood verbal apraxia in children with Down syndrome. Down Syndrome Research and Practice. 2006;10: 10-22. doi:10.3104/reports.301
  2. Stoel-Gammon C. Down Syndrome Phonology: Developmental Patterns and Intervention Strategies. Down Syndrome Research and Practice. 2001;7(3):93-100. doi:10.3104/reviews.118
  3. Marcell MM. Relationships between hearing and auditory cognition in Down's syndrome youth. Down Syndrome Research and Practice. 1995;3:75-91. doi:10.3104/reports.54
  4. Roizen NJ, Wolters C, Nicol T, Blondis TA. Hearing loss in children with Down syndrome. Journal of Pediatrics. 1993;123:9-12.
  5. Laws G, Bishop DVM. Verbal deficits in Down syndrome and Specific Language Impairment: a comparison. International Journal of Language and Communication Disorders. 2004;39:423-451.
  6. Weber C, Hahne A, Friedrich M, Friederici AD. Discrimination of word stress in early infant perception: electrophysiological evidence. Cognitive Brain Research. 2004;18:149-161.
  7. Jarrold C, Baddeley AD, Phillips CE. Verbal short-term memory in Down syndrome: A problem of memory, audition, or speech? Journal of Speech, Language and Hearing Research. 2002;45:531-544.
  8. Kuhl PK, Williams KA, Lacerda F, Stevens KN, Lindblom B. Linguistic experience alters phonetic perception in infants by 6 months of age. Science. 1992;255:606-608.
  9. Werker JF, Tees RC. Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development. 1984;7:49-63.
  10. Friederici AD, Wessels JM. Phonotactic knowledge of word boundaries and its use in infant speech perception. Perception and Psychophysics. 1993;54:287-295.
  11. Mattys S, Jusczyk PW, Morgan JL. Phonotactic and Prosodic Effects on Word Segmentation in Infants. Cognitive Psychology. 1999;38:465-494.
  12. Newman R, Ratner NB, Jusczyk AM, Jusczyk PW, Dow KA. Infants' early ability to segment the conversational speech signal predicts later language development: a retrospective analysis. Developmental Psychology. 2006;42:643-655.
  13. Tristao R, Feitosa M. Use of visual habituation paradigm to investigate speech perception in Down syndrome infants. Proceedings of the International Society for Psychophysics. 2002;18:552-557.
  14. Eilers R, Bull D, Oller D, Lewis D. The discrimination of rapid spectral speech cues by Down syndrome and normally developing infants. In: Harel S, Anastasiouw N, editors. The at-risk Infant. Psycho/Social/Medical Aspects. Baltimore: Brookes; 1985. p. 115-32.
  15. Jiang ZD, Wu YY, Liu XY. Early development of brainstem auditory evoked potentials in Down's syndrome. Early Human Development. 1990;23:41-51.
  16. Buxhoeveden D, Fobbs A, Roy E, Casanova M. Quantitative comparison of radial cell columns in children with Down's syndrome and controls. Journal of Intellectual Disability Research. 2002;46:76-81.
  17. Kemper T. The psychobiology of Down syndrome. In: Nadel L, editor. MIT Press, Cambridge, MA;1988. p.269-289.
  18. Schmidt-Sidor B, Wisniewski KE, Shepard TH, Sersen EA. Brain growth in Down syndrome subjects 15 to 22 weeks of gestational age and birth to 60 months. Clinical Neuropathology. 1990;9:181-190.
  19. Golden JA, Hyman BT. Development of the superior temporal neocortex is anomalous in trisomy 21. Journal of Neuropathology and Experimental Neurology. 1994;53:513-520.
  20. Nicholas JG, Geers A. Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing. 2006;27:286-298.
  21. Crosson J, Geers A. Analysis of narrative ability in children with cochlear implants. Ear and Hearing. 2001;22:381-394S.
  22. Burkholder RA, Pisoni DB. Speech timing and working memory in profoundly deaf children after cochlear implantation. Journal of Experimental Child Psychology. 2003;85:63-88.
  23. Dillon CM, Cleary M, Pisoni DB, Carter AK. Imitation of nonwords by hearing-impaired children with cochlear implants: segmental analyses. Clinical Linguistics and Phonetics. 2004;18(1):39-55.
  24. Hide O, Gillis S, Govaerts P. Suprasegmental aspects of pre-lexical speech in cochlear implanted children. Proceedings of Interspeech 2007: Eighth Annual Conference of the International Speech Communication Association. Antwerp: 2007. p. 638-41.
  25. Dodd BJ, Thompson L. Speech disorder in children with Down's syndrome. Journal of Intellectual Disability Research. 2001;45:308-316.
  26. So LKH, Dodd BJ. Down syndrome and the acquisition of phonology by Cantonese-speaking children. Journal of Intellectual Disability Research. 1994;38:501-517.
  27. Cleary M, Pisoni DB, Geers A. Some measures of verbal and spatial working memory in eight- and nine-year-old hearing-impaired children with cochlear implants. Ear and Hearing. 2001;22:395-411.
  28. Connor CM, Hieber S, Arts HA, Zwolan TA. Speech, Vocabulary, and the Education of Children Using Cochlear Implants: Oral or Total Communication? Journal of Speech Language and Hearing Research. 2000;43:1185-1204.