
Figure 1.2 Rates of sucking among infants signalling their reactions to a change of stimulus. The graph shows variations in the rate of sucking among infants when they are presented with a phonological contrast (the phoneme /p/ versus /b/), a sound change (/p/ versus /p'/), and the same stimulus (/p/ versus /p/) (the control group). The dotted vertical line indicates the moment of the stimulus change. Only the first group of babies clearly increased the rate of sucking following the change of stimulus, thus showing that these babies perceived a difference between the two sounds. This reaction cannot be explained by a discrepancy in voicing lag between the two sounds, since the voicing lag between /p/ and /b/ is of the same order as that between /p/ and /p'/ (from Eimas, Siqueland, Jusczyk, and Vigorito, 1971).

that constitute phonetic categories. Babies who are only a few days old show themselves to be small geniuses in this domain.

Questions then began to multiply. Is the predisposition to process language sounds limited to a talent for distinguishing language segments? Or are babies also precociously sensitive to other particularly important aspects of language, such as the prosodic contours of sentences, with their melody and rhythm? Experiments quickly confirmed the importance of prosody. Newborns only a few days old prefer to listen to the voice of their mother when this is presented in competition with that of another mother talking to her baby. But the mother's intonation must be natural; if a tape recording of her voice is played backward, the child's preference no longer holds. This preference is related to the dynamic aspects of maternal speech, such as intonation, rather than to the static aspects of sounds, which are preserved when the tape is played backward. The child's attention is therefore drawn not to the static characteristics of the voice but to those characteristics present under normal conversational circumstances (Mehler, Bertoncini, Barriere, and Jassik-Gershenfeld, 1978).

In addition to a preference for the mother's voice, the child shows a preference for the mother's language (Mehler et al., 1988). When sequences of French speech are presented following sequences of Russian speech, four-day-old French infants show stronger renewed sucking than when these sequences are presented in the reverse order. Since the two samples are recorded by a single bilingual speaker, it is not a matter of different speakers. Furthermore, this preference is maintained when the sequences have been filtered to remove most of the phonetic information while leaving the prosody intact. The prosodic differences between the child's native language and the foreign language are therefore sufficient to arouse a livelier reaction during the presentation of the native language. Is this familiarity with the native language uniquely the product of the baby's contact with the mother during the first few days following birth? Is so short a time really sufficient to orient the infant's attention to certain general properties that characterize the prosody of the language spoken in its environment? Or might this process of familiarization have begun earlier, during the course of prenatal life?

The Infant Is Prepared Before Birth

The embryo of the first months does not seem to have much to tell us about language, but the fetus of the final months does. Traditionally, it was supposed that the future child, comfortably insulated inside its mother, bathed in amniotic fluid, enjoys an agreeable silence that enables it to develop peacefully before having to brave the noisy, air-filled atmosphere in which it will later live. Physicians long dismissed as maternal imagination the observations of mothers who felt the fetus react to sharp noises and jump at the sound of a loud telephone ring. It is now known that the child's senses gradually begin to function before birth. The auditory system of the fetus is functional from the twenty-fifth week of gestation, and its level of hearing toward the thirty-fifth week approaches that of an adult. Auditory sensory data reach the fetus both from the intrauterine area—from the mother's living body—and from the outside world.

The first recordings of sounds reaching the fetus gave the impression of a very loud atmosphere within the womb. Internal sounds (respiratory, cardiovascular, and gastrointestinal noises) were therefore thought to partly mask external sounds, already muffled by the uterine membrane and the beating of the mother's heart. More recent recordings have somewhat changed this picture of the fetus's acoustic environment. These recordings, made by slipping a hydrophone inside the uterus of pregnant women at rest, show that intrauterine background noise is concentrated in the low frequencies, which limits its masking effect (Querleu, Renard, and Versyp, 1981). The mother's voice and the voices of others in the environment thus manage to pass through this background noise. The intensity of the mother's voice in utero is not far removed from its intensity ex utero. The high frequencies are attenuated, but the spectral properties of the mother's speech remain the same, and the chief acoustic properties of the signal are preserved. Words spoken by the mother are transmitted through the air and also through her own body. They are therefore more perceptible than sounds coming from the outside alone, though these too are perfectly audible to the fetus. The prosody is particularly well preserved: the intonation of speech recorded in utero was faultlessly recognized by adult listeners, as were 30 percent of the phonemes.

But how can we explore prenatal capacities for speech? The technique of nonnutritive sucking involves an interruption, and then a revival, of attention during a change of stimulus. This same type of approach can be used to test perception in the fetus. We possess physiological measures of its behavior in more or less profound states of waking and sleeping. Cardiac and motor responses can give us some idea of what surprises and alerts the fetus when it is in a state of rest. When presented with a repetitive sound, by means of a speaker positioned twenty centimeters (or about eight inches) above the mother's abdomen, the fetus gradually becomes used to it. The beginning of the presentation of the sound provokes an initial reaction of arousal, which is manifested by a reduced heart rate. Cardiac deceleration then subsides and finally disappears, and the heart resumes its normal rhythm with repeated presentations of the sound. This is the period of habituation. If, after habituation, the sound is changed, a new round of cardiac deceleration indicates that the novelty of the sound has been perceived. This habituation-dishabituation paradigm is the basis of methods used to test the capacities of the fetus (Lecanuet et al., 1987).

A number of studies reveal how the fetus reacts to variations in the physical characteristics of stimulation by examining its behavioral state (see, for example, Lecanuet and Granier-Deferre, 1993; see also Lecanuet, Granier-Deferre, and Schaal, 1993). Differences in both the intensity and the frequency of sound stimuli elicit discriminatory reactions in the form of cardiac deceleration. The same is true for variation in the order of speech sounds. Jean-Pierre Lecanuet and C. Granier-Deferre (1993) presented fetuses of between thirty-six and forty weeks with sixteen repetitions of the disyllable [babi]; when the fetus was habituated, this disyllable was changed to [biba]. The change in the order of the syllables provoked a deceleration in the heart rate of the fetus, tested in a state of calm sleep. This deceleration indicates that the two sequences were distinguished. Nothing allows us to say that the fetus recognized them; nonetheless, it did react to a simple change in the order of the two phonetically similar syllables that made up the disyllables. The second disyllable was new to it by comparison with the first.

The question arises, then, whether exposing fetuses to their mother's language before birth favors perceptual adjustment to the phonetic and prosodic parameters that characterize this language and differentiate it from others. We have seen that categorical discrimination is universal in newborns. At the same time, however, they recognize the voice of their mother only when prosody is preserved. Does there exist a prenatal framework that helps regulate certain sophisticated infant perceptual capacities? Does external stimulation leave an imprint on the brain of the fetus? Or are the observed reactions simply signs of intermittent arousal in response to changes in stimulation?

To better determine the source of the discriminations observed in the fetus and their impact on the capacities of newborns, attempts were made to discover whether memories of prenatal experiences persist. Using the well-tested method of nonnutritive sucking, researchers first asked simply whether newborns between one and three days old were able, by virtue of the prenatal experience of their mother's voice, to distinguish this voice from that of other speakers (DeCasper and Fifer, 1980). With no more than twelve hours of effective contact (ex utero) with their mother, newborns preferred her voice to that of another woman. The questions that followed were more specific. Are the effects of exposing the fetus to acoustic characteristics important for speech carried forward in the newborn? To find out, Anthony DeCasper and Melanie Spence (1986) used a more sensitive variant of the procedure of nonnutritive sucking. In their procedure, one of the stimuli is presented when the newborn makes long pauses between suckings; the other is presented during brief pauses. Newborns regulate the rhythm of their sucking according to their preference for the stimulus: slow sucking generates one of the stimuli, and rapid sucking the other.

Using this method, the authors showed that newborns would give one rhythm to their sucking to hear a passage of prose that the mother had read aloud during the last six weeks of pregnancy and another rhythm to hear a new prose passage read by the mother but not previously heard during pregnancy. One might suppose that the mother's voice simply enjoys an altogether special status and serves as the model for recognizing the intonation and the regularities of the passage that had been heard in utero. But the newborns preferred the passage read by the mother before their birth even when it was read by another woman during the test. The fetus therefore appeared to be responsive to the general acoustic properties of the speech signal and not simply to the voice and specific intonations of the mother.

This conclusion called for verification: the authors carried out another experiment, now testing recognition not in newborns but in the fetus (DeCasper and Spence, 1986). They asked future mothers to read a poem aloud every day for four weeks. At the end of these four weeks, when the mother was in the thirty-seventh week of gestation, the fetus listened to the poem the mother had recited, in alternation with another poem never heard before. These alternating sequences were recorded by a third person and retransmitted through a speaker positioned at the level of the head of the fetus. Variations in the heart rate served as an index of discrimination. This technique confirmed the role of prenatal exposure: the heart rate systematically decreased only in response to the poem read by the mother during the preceding four weeks and did not vary during the reading of the other poem. What cues enabled the fetus to react to the familiar poem? They were not characteristics of the mother's voice, since the test poems were recorded by another woman. Nor was it a matter of some distinctive rhythm peculiar to one particular poem, since precautions had been taken not to habituate all the fetuses to the same poem. It must be concluded, then, that any language event with normal intonation and rhythm alerts the fetus and leads it to attune its listening to this linguistic model, whose imprint persists at least for a certain time.

Familiarization with the mother's language therefore takes place in the last months of prenatal life. Sound stimulation received during the last months of intrauterine development likely contributes to the priming of sensory pathways and to the calibration of perception to certain characteristics of speech sounds.

The Talents of Infants

But let us return to infants, who are not so naive as had been thought, since they are prepared for listening during the prenatal period. At birth, they are capable of distinguishing a broad range of consonant and vowel contrasts, whether or not these contrasts belong to the repertoire of the language spoken in their immediate environment. What is more, babies very quickly show evidence of perceptual constancy: they recognize the similarity of sounds belonging to a single phonetic category, despite physical variations. A single sound can be phonetically realized in many different ways, yet each variant must be recognized as the same sound. Let us take an example: the sound [a] spoken by a man with a deep bass voice, by a child with a high-pitched voice, by a person with a southern accent, by a person with a northern accent, with a rising intonation or with a falling tone, in different contexts, must be categorized as the same vowel /a/.

Studies have shown that at five months, infants are able to neglect the variations of a vowel due to changes in speaker and intonation (Kuhl, 1983). They arrange the different samples of a single sound into a single category.

Another talent of two-month-old infants is the particular status that they accord the syllable. The syllable is perceived by them as a whole rather than as a combination of distinct elements. This has been demonstrated experimentally. Two-month-old babies were familiarized with a series of syllables—for example, [bi], [si], [li], [mi] (that is, a common vowel with different consonants). It was then observed that the infants were capable of detecting the addition of a new syllable—for example, [di] or [bu]—following a series of syllables with which they were familiar. Likewise, babies familiarized with syllables sharing a common consonant—[bo], [ba], [be]—noticed that [bu] was new and also distinguished it from [du]. The fact that they noticed the novelty of [bu] shows that the babies did not extract the phoneme /b/ as the property common to the habituation stimuli—that is, they did not decompose the syllables into smaller elements (Jusczyk and Derrah, 1987; see also Bertoncini et al., 1988). A further study showed that babies distinguish between sequences of disyllables and sequences of trisyllables, even when the total duration of the sequences remains the same (Bijeljac-Babic, Bertoncini, and Mehler, 1993). This again indicates that the perception of a sound series is organized by syllables.

These aptitudes of the infant are highlighted by experiments in which the acoustic cues are presented in isolation. We may ask, however, whether the same discriminatory performance is found when other auditory promptings, such as prosody, compete for the baby's attention. Recent experiments by Denise Mandel, P. Jusczyk, and D. Kemler-Nelson (1994) examined this question. They formed the hypothesis that the prosodic cues detected by infants in the first weeks after birth are likely to play an important role in helping the infant organize speech information. They therefore tested the discrimination of phonetic contrasts presented in sentences and compared it against the same contrasts presented in word lists. The results confirmed their hypothesis: babies of two months detect changes of phonemes better when they occur as part of short sentences than in lists of words. The babies' rate of sucking strongly increased when a series of sentences of the type The (r)at chases the white mouse followed the sentence The (c)at chases the white mouse. The babies reacted less strongly to the change of the phoneme /k/ to /r/ when it appeared in a list of words read in succession than when it appeared in sentences uttered with a natural intonation.

In everyday life, the natural prosody of the mother's language commands the infant's listening attention. As the authors suggest, prosody serves as a sort of perceptual glue that holds sequences of speech together. Certainly mothers, who amplify the variations of intonation and play with their voices when they talk to their children, feel this to be so. Thanks to such variations, babies not only retain their capacity for discrimination but find it reinforced by the exaggeration of rhythm and prosodic contours. One observes, furthermore, that babies better distinguish phonetic contrasts when sentences are read by a woman speaking directly to a child than when they are read by an adult addressing another adult.

What's in a Name?

Are newborns sensitive only to the superficial characteristics of speech? Do some patterns in particular come to acquire a meaning?

The infant's name is often spoken when her parents cuddle or play with her. Does this sound form, which is often associated with feelings of personal well-being, take on particular significance? Can the child recognize when her name is pronounced? Denise Mandel, P. Jusczyk, and D. Pisoni (1995) studied infants of four and a half months to determine whether their names had special status for them.

The method of nonnutritive sucking no longer works for infants at this age. Fortunately, it becomes possible to inquire into their preferences more directly. Two speakers are placed on either side of the infant, and above each speaker there is a small light. So long as the child gazes toward one of the lights, a sound stimulus is broadcast through the corresponding speaker—the child's own name on one side, three other names spoken with the same tone on the other. The cumulative listening time—or, more exactly, the amount of time spent looking at the light sources—indicates the child's preference for one or the other stimulus.

It turns out that babies listen more attentively to their own name than to the other names. One's name is therefore a recognized signal. To say that it is a signal, however, does not imply that the baby of four months connects sound patterns with meanings. Dogs recognize their name, which is as much a signal for them as the sight of their leash or of their masters putting on their coats. For the dog as for the baby, names are sound signals that arouse attention in one or more particular situations. Babies of four months react to their name without necessarily realizing that sound forms have a referential function.

The brain of the newborn is therefore far from being empty. But is the newborn's language capacity organized like that of an adult?

The Organization of Language in the Brain

The principal characteristic of the cerebral cortex is its subdivision into zones that support particular modules—motor or sensory modules, for instance—and cognitive functions. For a century it has been known that discrete areas of the cortex are involved in processes specific to the comprehension and production of speech and language (see figure 1.3). In the adult, the cognitive aspects of language are represented in the left hemisphere of the cerebral cortex, along the Sylvian sulcus (or fissure). The two areas chiefly involved in the comprehension and production of speech are Broca's area (Broca, 1969/1861) and Wernicke's area (Wernicke, 1874), whose functions, until the recent advent of cerebral imaging in clinical research, were determined on the basis of studies of pathology. Lesions involving Broca's area, located in the lower part of the third convolution of the frontal lobe, beneath the Sylvian sulcus, entail the near impossibility of producing speech, due to the loss of articulatory control, while leaving intact the understanding of words and sentences. Adjacent to Broca's area is the system of representations responsible for the precise control of the muscles of the mouth and larynx.

Figure 1.3 Location of Broca's and Wernicke's areas in the human brain and the motor areas involved in articulation and phonation (supplementary motor area and voice control area).

Lesions involving Wernicke's area, in the rear and upper part of the temporal lobe at its junction with the parietal and occipital lobes, entail a loss of comprehension while leaving intact the ability to speak, though the speech of such patients is for the most part incomprehensible. The arcuate fasciculus (identified by Burdach) connects Wernicke's area with Broca's.

The left hemisphere has a fundamental role in the processing of rapid acoustical changes and consequently in the processing of speech sounds. The right hemisphere, by contrast, has responsibility for the perception of acoustical events distributed over a long period of time. This hemisphere controls prosody. Lesions of the right hemisphere do not produce aphasias or apraxias, but they do cause problems in the processing and production of prosody and music. The elements of prosody, as well as variations in intonation due to affectivity, are processed on the right side, and their anatomical organization is located in mirror image to that of the cognitive and analytical aspects of language processed on the left side.

Prosodic elements are particularly important for the acquisition of speech. As we have seen, babies are attentive first to intonation; they vocalize prosodic contours before they articulate. They produce isolated syllables before producing sequences of syllables; the phonological and syntactical organization of speech comes later. We now know that the right hemisphere, in utero and at birth, matures more rapidly than the left hemisphere. The discrepancy between the maturational rhythms of the two hemispheres in the first year is the source of differences in the emergence of functional capacities (De Schonen, Van Hout, Mancini, and Livet, 1994). It may explain certain characteristics of language development, such as the form in which words are first coded. This is a point to which we return later.

Cerebral imaging provides additional information: the localization of fundamental processes is more variable than had previously been thought, exhibiting patterns that are liable to differ from individual to individual. More elaborate functions are derived from interconnections among several regions of the brain.

Corresponding to the left lateralization of language-processing areas are anatomical and histological asymmetries. The planum temporale, which includes Wernicke's area and plays a primary role in language comprehension, is larger on the left side of the brain than on the right in 65 percent of individuals (Geschwind and Galaburda, 1987).

Since infants are born without fully functional language abilities—they cannot talk at birth—why should they display such cerebral lateralization? Does it exist from birth, or does it develop at the same time as language? Broca (1969/1865) hypothesized that it accompanies the development of language. The model of acquisition developed by Eric Lenneberg in 1967 rests on the same idea. For Lenneberg, lateralization and the acquisition of language arise in complementary fashion, beginning at two years and concluding with the onset of puberty between the ages of ten and twelve. Rather surprising observations show, in fact, that brain-damaged young children learn to speak, and to speak well. These children, whether victims of a perinatal injury to the left side of the brain or of a disease requiring surgical removal of the left hemisphere (hemispherectomy), recover the capacity to speak more completely the earlier the injury or operation occurs. When the lesion occurs before the age of one year, recovery is total. In the case of later lesions, long-term deficits are observed in certain aspects of syntactic processing. In infants and small children, therefore, the architecture of the cortex and of its connections can be restructured, and the propensity of the left hemisphere to process and produce language reversed: cerebral plasticity enables the damaged brain to furnish substrates for language in the right hemisphere instead. Lenneberg concluded from this that a functional equipotentiality of the two hemispheres exists during the first two years and that cerebral lateralization is the result of learning processes.

The possibility of early functional and structural reorganization, however, does not necessarily indicate that responsibility for language is not one of the purposes of the left hemisphere. In a normal brain, linguistic functions depend on the operation of certain cerebral structures in the left hemisphere. Only a dramatic change that profoundly alters cerebral activity leads other structures to support these processes. The plasticity of the brain, while it is significant in the very young child, does not inevitably amount to a congenital hemispheric equipotentiality.

Among the anatomical asymmetries in the newborn and the infant is that of the planum temporale, which is larger on the left side than on the right from the thirty-first week of gestation (Geschwind and Galaburda, 1987). Why do functional asymmetries matter? To be able to refute the idea of initial equipotentiality and progressive lateralization, it is necessary to demonstrate the early specialization of the left hemisphere for speech processing. Is such a demonstration possible?

Indications—but nothing more—of early functional lateralization have been reported thanks to a variety of ingenious techniques. Researchers have racked their brains thinking up ways to lead newborns to indicate whether they prefer using one hemisphere or the other in processing language sounds. Psychological approaches, such as nonnutritive sucking, along with certain physiological approaches have made it possible to investigate whether one hemisphere rather than another seems specially involved in tasks of discriminating syllables or musical notes. The method researchers developed draws on a phenomenon known as dichotic listening.

Dichotic listening rests on the fact that the principal transmission pathways for auditory signals are crossed: sounds reaching the right ear are transmitted first to the left hemisphere, while sounds reaching the left ear are sent first to the right hemisphere (see figure 1.4). In dichotic listening, when two different sounds are simultaneously presented in a synchronized way, one to the right ear and the other to the left ear, the hearer reports a single sound—the dominant sound. In adults, the sound presented to the right ear—which travels, then, to the left hemisphere—is dominant when a speech sound is involved. The sound presented to the left ear—traveling to the right hemisphere—is dominant when a musical sound is involved.

Figure 1.4 Diagram showing the crossing of auditory pathways. The presentation of a sound to the right ear goes first to the left hemisphere.

In 1977, Anne Entus made use of this phenomenon, combining it with nonnutritive sucking to examine hemispheric preferences in infants. Babies of two months were presented with a musical sound in one ear and a speech sound in the other. These sounds were repeated until the point of habituation, when the babies had resumed their normal sucking rhythm. At this moment, one of the sounds was replaced by another of the same kind. An increase in the rate of sucking indicated that the child had perceived the change. This increase was clearer following a change of the speech sound in the right ear (as opposed to the left) and following a change of musical notes in the left ear (as opposed to the right). Though sometimes questioned, these results have more often than not been reproduced, whether using the same experimental approach (Bertoncini et al., 1989), event-related potentials (Molfese and Molfese, 1979), or measures of cardiac deceleration (Glanville, Best, and Levenson, 1977; see also Best, Hoffman, and Glanville, 1982). They seem to indicate, in any case, that at the age of two to three months the left hemisphere does a better job of discriminating between speech sounds and the right hemisphere a better job of discriminating between musical sounds.

Auditory event-related potentials produced in response to phonetic and musical presentations provide measures of the electrical activity of the brain generated by acoustic stimuli. These responses are difficult to interpret, particularly in babies, but the data they supply are important. The first studies in this domain favored the thesis of a preferential activation of the left hemisphere during the presentation of syllables. Using the same method, Ghyslaine Dehaene-Lambertz and Stanislas Dehaene (1994) recently showed that three-month-old infants can very rapidly detect, in less than 400 milliseconds, a change in the first consonant of a syllable. The electrophysiological correlates of this phonetic discrimination indicate a moderate temporal functional asymmetry favoring the left hemisphere. This hemisphere seems to possess an advantage in acoustically and phonetically processing short syllables. However, the ERP responses obtained in this experiment were subject to considerable individual variations that obliged the authors to qualify their assessment. They concluded that "lateralization in favor of the potentiality of rapid syllable discrimination in the left hemisphere appears to be a modest advantage for this hemisphere rather than a radical division of functions between the two hemispheres" (Dehaene-Lambertz, 1994, pp. 43-49).

It follows that the brain encodes something asymmetrically in the case of speech stimuli and that such encoding occurs early in life. But one can only speculate about the nature of the mechanism that produces this asymmetry. It is possible that certain acoustic stimuli engage neuronal substrates similar to those useful for processing speech. Thus, the left hemisphere might serve the purpose of perceiving sequences of auditory stimuli characterized by constantly changing acoustic spectra. This kind of analysis may account for early auditory discrimination and a tendency to lateralization, without requiring the inference that speech is processed by the left hemisphere in three-month-old babies. But it might also be supposed that a functional asymmetry corresponding to the anatomical asymmetry observed in newborns underlies a tendency for the left hemisphere to process syllables rather than melodic sounds or sounds that cannot be articulated in languages (Bertoncini et al., 1989).

The process of language acquisition may, in fact, be essential to cortical maturation and hemispheric lateralization. If, for external reasons, language cannot be acquired within the normal time period, lateralization seems to be greatly affected. In the case of Genie (Curtiss, 1977), a child who was isolated shortly after birth and not exposed to a normal linguistic atmosphere until the age of twelve, the right hemisphere was dominant for the incomplete form of language that she was able to acquire. Generalization from the small number of similar cases is not possible, but Genie's case may suggest that the architecture of language centers and the experience of linguistic events are related.

In sum, newborns are far from being the blank slate described by Aristotle. They manifest innate gifts for processing the linguistic environment: they distinguish and categorize the phonemes of languages, and they are sensitive to the voices and prosodic characteristics of their maternal language. Their perceptual system is prepared in advance for processing language sounds. But infants are not only brilliant listeners. Although speech is not yet at their disposal, they are nonetheless preparing themselves for this moment by sharpening their vocal capabilities, organizing their perceptual capacities, and conversing with adults through looks, sounds, and gestures.
