Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level
Kolozsvári, O. B., Xu, W., Leppänen, P. H. T., & Hämäläinen, J. A. (2019). Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level. Frontiers in Human Neuroscience, 13, 243. https://doi.org/10.3389/fnhum.2019.00243
© The Authors, 2019.
During speech perception, listeners rely on multimodal input, making use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information, and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulate brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult native Finnish speakers and 12 adult native Chinese speakers. During a magnetoencephalography (MEG) measurement, they watched videos of a Chinese speaker pronouncing syllables (/pa/, /pha/, /ta/, /tha/, /fa/); of these, only /pa/ and /ta/ are part of Finnish phonology, while all five are part of Chinese phonology. The stimuli were presented in audio-visual (congruent or incongruent), audio-only, or visual-only conditions. The brain responses were examined in five time windows: 75-125, 150-200, 200-300, 300-400, and 400-600 ms. We found significant differences for the congruency comparison in the fourth time window (300-400 ms) in both sensor- and source-level analyses, with larger responses for incongruent than for congruent stimuli. For the familiarity comparisons, no significant differences were found. The results are in line with earlier studies reporting modulation of brain responses to audio-visual congruency around 250-500 ms. This suggests a much stronger process for the general detection of a mismatch between predictions based on lip movements and the auditory signal than for top-down modulation of brain responses based on phonological information. ...
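The window-based analysis described above can be illustrated with a minimal sketch: for each subject, average the evoked response within a fixed time window (here 300-400 ms) per condition, then compare conditions with a paired t-test across subjects. This is not the authors' pipeline (which used MEG sensor- and source-level statistics); all data below are synthetic and variable names are hypothetical.

```python
# Illustrative sketch only: synthetic data, not the authors' MEG pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sfreq = 1000                                # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.6, 1 / sfreq)     # epoch from -100 to 600 ms

n_subjects = 12                             # as in each group of the study
# Synthetic per-subject evoked responses (one channel), with a slightly
# larger deflection injected for incongruent stimuli in 300-400 ms.
congruent = rng.normal(0, 1e-13, (n_subjects, times.size))
incongruent = rng.normal(0, 1e-13, (n_subjects, times.size))
window = (times >= 0.3) & (times < 0.4)
incongruent[:, window] += 5e-14             # injected effect for demonstration

# Mean amplitude in the 300-400 ms window, per subject and condition.
cong_mean = congruent[:, window].mean(axis=1)
incong_mean = incongruent[:, window].mean(axis=1)

# Paired t-test across subjects (within-subject design).
t, p = stats.ttest_rel(incong_mean, cong_mean)
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```

Because the congruency effect was injected into the synthetic incongruent condition, the test detects a larger incongruent response, mirroring the direction of the reported finding.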
Related funder(s): European Commission; Academy of Finland
The content of the publication reflects only the author’s view. The funder is not responsible for any use that may be made of the information it contains.
Items with similar title or keywords:
Neural generators of the frequency-following response elicited to stimuli of low and high frequency: a magnetoencephalographic (MEG) study. Gorina-Careta, Natàlia; Kurkela, Jari L. O.; Hämäläinen, Jarmo; Astikainen, Piia; Escera, Carles (Elsevier, 2021). The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience as it captures with great fidelity the tracking accuracy of the periodic sound features in the ...
Magnetoencephalography Responses to Unpredictable and Predictable Rare Somatosensory Stimuli in Healthy Adult Humans. Xu, Qianru; Ye, Chaoxiong; Hämäläinen, Jarmo A.; Ruohonen, Elisa M.; Li, Xueqiao; Astikainen, Piia (Frontiers Media SA, 2021). Mismatch brain responses to unpredicted rare stimuli are suggested to be a neural indicator of prediction error, but this has rarely been studied in the somatosensory modality. Here, we investigated how the brain responds ...
Brain's capacity to detect abstract regularities from visual stimuli under different attentive conditions - an ERP study. Pynnönen, Silja (2010). Many previous studies have applied the oddball paradigm to study change detection. Although changes within single features have been investigated a lot, the changes in multiple feature conjunctions have not. The aim of our ...
Coherence between brain activation and speech envelope at word and sentence levels showed age-related differences in low frequency bands. Kolozsvári, Orsolya B.; Xu, Weiyong; Gerike, Georgia; Parviainen, Tiina; Nieminen, Lea; Noiray, Aude; Hämäläinen, Jarmo A. (MIT Press, 2021). Speech perception is dynamic and shows changes across development. In parallel, functional differences in brain development over time have been well documented and these differences may interact with changes in speech ...
Hämäläinen, Jarmo; Parviainen, Tiina; Hsu, Yi-Fang; Salmelin, Riitta (Pergamon Press, 2019). Initial stages of reading acquisition require the learning of letter and speech sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about the short-term ...