
dc.contributor.author: Kolozsvári, Orsolya B.
dc.contributor.author: Xu, Weiyong
dc.contributor.author: Leppänen, Paavo H. T.
dc.contributor.author: Hämäläinen, Jarmo A.
dc.date.accessioned: 2019-07-29T06:49:15Z
dc.date.available: 2019-07-29T06:49:15Z
dc.date.issued: 2019
dc.identifier.citation: Kolozsvári, O. B., Xu, W., Leppänen, P. H. T., & Hämäläinen, J. A. (2019). Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level. <i>Frontiers in Human Neuroscience</i>, <i>13</i>, Article 243. <a href="https://doi.org/10.3389/fnhum.2019.00243" target="_blank">https://doi.org/10.3389/fnhum.2019.00243</a>
dc.identifier.other: CONVID_32166646
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/65130
dc.description.abstract: During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulate brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. They watched videos of a Chinese speaker pronouncing syllables (/pa/, /pha/, /ta/, /tha/, /fa/) during a magnetoencephalography (MEG) measurement, where only /pa/ and /ta/ were part of Finnish phonology while all the stimuli were part of Chinese phonology. The stimuli were presented in audio-visual (congruent or incongruent), audio only, or visual only conditions. The brain responses were examined in five time-windows: 75-125, 150-200, 200-300, 300-400, and 400-600 ms. We found significant differences for the congruency comparison in the fourth time-window (300-400 ms) in both sensor and source level analysis. Larger responses were observed for the incongruent stimuli than for the congruent stimuli. For the familiarity comparisons no significant differences were found. The results are in line with earlier studies reporting on the modulation of brain responses for audio-visual congruency around 250-500 ms. This suggests a much stronger process for the general detection of a mismatch between predictions based on lip movements and the auditory signal than for the top-down modulation of brain responses based on phonological information. [en]
dc.format.mimetype: application/pdf
dc.language: eng
dc.language.iso: eng
dc.publisher: Frontiers Media
dc.relation.ispartofseries: Frontiers in Human Neuroscience
dc.rights: CC BY 4.0
dc.subject.other: speech perception
dc.subject.other: magnetoencephalography
dc.subject.other: audio-visual stimuli
dc.subject.other: audio-visual integration
dc.subject.other: familiarity
dc.title: Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level
dc.type: article
dc.identifier.urn: URN:NBN:fi:jyu-201907293693
dc.contributor.laitos: Psykologian laitos [fi]
dc.contributor.laitos: Department of Psychology [en]
dc.contributor.oppiaine: Psykologia [fi]
dc.contributor.oppiaine: Monitieteinen aivotutkimuskeskus [fi]
dc.contributor.oppiaine: Hyvinvoinnin tutkimuksen yhteisö [fi]
dc.contributor.oppiaine: Psychology [en]
dc.contributor.oppiaine: Centre for Interdisciplinary Brain Research [en]
dc.contributor.oppiaine: School of Wellbeing [en]
dc.type.uri: http://purl.org/eprint/type/JournalArticle
dc.type.coar: http://purl.org/coar/resource_type/c_2df8fbb1
dc.description.reviewstatus: peerReviewed
dc.relation.issn: 1662-5161
dc.relation.volume: 13
dc.type.version: publishedVersion
dc.rights.copyright: © The Authors, 2019.
dc.rights.accesslevel: openAccess [fi]
dc.relation.grantnumber: 641652
dc.relation.grantnumber: 292466
dc.relation.grantnumber: 641858
dc.relation.projectid: info:eu-repo/grantAgreement/EC/H2020/641652/EU//ChildBrain
dc.relation.projectid: info:eu-repo/grantAgreement/EC/H2020/641858/EU//PREDICTABLE
dc.subject.yso: havaitseminen
dc.subject.yso: puhe (puhuminen)
dc.subject.yso: ärsykkeet
dc.subject.yso: MEG
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p5293
jyx.subject.uri: http://www.yso.fi/onto/yso/p2492
jyx.subject.uri: http://www.yso.fi/onto/yso/p2943
jyx.subject.uri: http://www.yso.fi/onto/yso/p3329
dc.rights.url: https://creativecommons.org/licenses/by/4.0/
dc.relation.doi: 10.3389/fnhum.2019.00243
dc.relation.funder: Euroopan komissio [fi]
dc.relation.funder: Suomen Akatemia [fi]
dc.relation.funder: Euroopan komissio [fi]
dc.relation.funder: European Commission [en]
dc.relation.funder: Research Council of Finland [en]
dc.relation.funder: European Commission [en]
jyx.fundingprogram: MSCA Marie Skłodowska-Curie Actions, H2020 [fi]
jyx.fundingprogram: Profilointi, SA [fi]
jyx.fundingprogram: MSCA Marie Skłodowska-Curie Actions, H2020 [fi]
jyx.fundingprogram: MSCA Marie Skłodowska-Curie Actions, H2020 [en]
jyx.fundingprogram: Research profiles, AoF [en]
jyx.fundingprogram: MSCA Marie Skłodowska-Curie Actions, H2020 [en]
jyx.fundinginformation: This work was supported by the European Union projects ChildBrain (Marie Curie Innovative Training Networks, #641652) and Predictable (Marie Curie Innovative Training Networks, #641858), and by the Academy of Finland (MultiLeTe, #292466).
dc.type.okm: A1






CC BY 4.0
Except where otherwise noted, this item's license is CC BY 4.0.