
dc.contributor.author: Hernández, Doris
dc.contributor.author: Puupponen, Anna
dc.contributor.author: Keränen, Jarkko
dc.contributor.author: Wainio, Tuija
dc.contributor.author: Pippuri, Outi
dc.contributor.author: Ortega, Gerardo
dc.contributor.author: Jantunen, Tommi
dc.contributor.editor: Jantunen, Jarmo Harri
dc.contributor.editor: Kalja-Voima, Johanna
dc.contributor.editor: Laukkarinen, Matti
dc.contributor.editor: Puupponen, Anna
dc.contributor.editor: Salonen, Margareta
dc.contributor.editor: Saresma, Tuija
dc.contributor.editor: Tarvainen, Jenny
dc.contributor.editor: Ylönen, Sabine
dc.date.accessioned: 2022-12-13T12:48:26Z
dc.date.available: 2022-12-13T12:48:26Z
dc.date.issued: 2022
dc.identifier.citation: Hernández, D., Puupponen, A., Keränen, J., Wainio, T., Pippuri, O., Ortega, G., & Jantunen, T. (2022). Use of Sign Language Videos in EEG and MEG Studies : Experiences from a Multidisciplinary Project Combining Linguistics and Cognitive Neuroscience. In J. H. Jantunen, J. Kalja-Voima, M. Laukkarinen, A. Puupponen, M. Salonen, T. Saresma, J. Tarvainen, & S. Ylönen (Eds.), Diversity of Methods and Materials in Digital Human Sciences : Proceedings of the Digital Research Data and Human Sciences DRDHum Conference 2022, December 1-3, Jyväskylä, Finland (pp. 148-155). Jyväskylän yliopisto. http://urn.fi/URN:ISBN:978-951-39-9450-1
dc.identifier.other: CONVID_164253306
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/84357
dc.description.abstract: In this paper, we describe our experiences of bringing together the methodologies of two disciplines – sign language (SL) linguistics and cognitive neuroscience – in the multidisciplinary ShowTell research project (Academy of Finland 2021–2025). More specifically, we discuss the challenges we encountered when creating and using video materials for the study of SL processing in the brain. SL comprehension is better studied with videos than with still images, because videos provide the more naturalistic stimuli observed in face-to-face interaction. On the other hand, in neuroimaging (electroencephalography [EEG]/magnetoencephalography [MEG]), it is vital to track the timing of the stimulation exactly and to minimize the noise that could arise from inside and outside the brain. Any brain activity not related to the specific aspect being studied could create artifacts that diminish the signal-to-noise ratio of the measurements, thus compromising the quality of the data. This creates significant challenges when integrating both disciplines into the same study. In the paper, we (i) describe the process of, and requirements for, creating signed video materials that aim to mirror naturalistic signing; (ii) discuss the problems of synchronizing the video stimuli with the brain imaging data; and (iii) introduce the steps we have taken to minimize these challenges in different phases of the process, such as the design, recording, and processing of the video stimuli. Finally, we discuss how, by following these steps, we have been able to deal successfully with the resulting data and to create materials that preserve the naturalistic nature of human communication. [en] (An illustrative synchronization sketch appears at the end of this record.)
dc.format.extent: 243
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Jyväskylän yliopisto
dc.relation.ispartof: Diversity of Methods and Materials in Digital Human Sciences : Proceedings of the Digital Research Data and Human Sciences DRDHum Conference 2022, December 1-3, Jyväskylä, Finland
dc.relation.uri: http://urn.fi/URN:ISBN:978-951-39-9450-1
dc.rights: CC BY 4.0
dc.title: Use of Sign Language Videos in EEG and MEG Studies : Experiences from a Multidisciplinary Project Combining Linguistics and Cognitive Neuroscience
dc.type: conferenceObject
dc.identifier.urn: URN:NBN:fi:jyu-202212135614
dc.contributor.laitos: Kieli- ja viestintätieteiden laitos [fi]
dc.contributor.laitos: Department of Language and Communication Studies [en]
dc.contributor.oppiaine: Psykologia [fi]
dc.contributor.oppiaine: Suomalainen viittomakieli [fi]
dc.contributor.oppiaine: Psychology [en]
dc.contributor.oppiaine: Finnish Sign Language [en]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
dc.relation.isbn: 978-951-39-9450-1
dc.type.coar: http://purl.org/coar/resource_type/c_5794
dc.description.reviewstatus: peerReviewed
dc.format.pagerange: 148-155
dc.type.version: publishedVersion
dc.rights.copyright: © 2022 Authors and University of Jyväskylä
dc.rights.accesslevel: openAccess [fi]
dc.relation.conference: Digital Research Data and Human Sciences
dc.relation.grantnumber: 339268
dc.subject.yso: monitieteisyys (multidisciplinarity)
dc.subject.yso: kognitiivinen neurotiede (cognitive neuroscience)
dc.subject.yso: neurolingvistiikka (neurolinguistics)
dc.subject.yso: viittomakieli (sign language)
dc.subject.yso: EEG
dc.subject.yso: MEG
dc.subject.yso: video
dc.subject.yso: kielellinen vuorovaikutus (linguistic interaction)
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p11916
jyx.subject.uri: http://www.yso.fi/onto/yso/p23133
jyx.subject.uri: http://www.yso.fi/onto/yso/p13491
jyx.subject.uri: http://www.yso.fi/onto/yso/p6834
jyx.subject.uri: http://www.yso.fi/onto/yso/p3328
jyx.subject.uri: http://www.yso.fi/onto/yso/p3329
jyx.subject.uri: http://www.yso.fi/onto/yso/p8368
jyx.subject.uri: http://www.yso.fi/onto/yso/p7831
dc.rights.url: https://creativecommons.org/licenses/by/4.0/
dc.relation.funder: Research Council of Finland [en]
dc.relation.funder: Suomen Akatemia [fi]
jyx.fundingprogram: Academy Project, AoF [en]
jyx.fundingprogram: Akatemiahanke, SA [fi]
jyx.fundinginformation: Funding from the Academy of Finland under Project 339268 (ShowTell) is gratefully acknowledged.
datacite.isSupplementedBy.doi: 10.17011/jyx/dataset/89371
datacite.isSupplementedBy: Jantunen, Tommi; Puupponen, Anna; Hernández Barros, Doris; Wainio, Tuija; Keränen, Jarkko. (2023). Project data of ShowTell - EEG material. University of Jyväskylä. https://doi.org/10.17011/jyx/dataset/89371. http://urn.fi/URN:NBN:fi:jyu-202310045391
dc.type.okm: A4
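
Illustrative synchronization sketch. The abstract above highlights the difficulty of synchronizing video stimuli with EEG/MEG recordings. Purely as an illustration of that general problem, and not as the ShowTell project's actual pipeline, the following minimal Python sketch shows how trigger pulses recorded on a stimulus channel can be used to time-lock brain data to each video onset with the MNE-Python library. The file name "recording.fif", the stimulus-channel name "STI 014", and the event code 1 are hypothetical placeholders.

    # Illustrative sketch only: time-locking EEG/MEG data to video-onset triggers
    # with MNE-Python. The file name, stimulus channel, and event code below are
    # hypothetical placeholders, not values taken from the paper.
    import mne

    # Load a continuous recording that contains a stimulus (trigger) channel.
    raw = mne.io.read_raw_fif("recording.fif", preload=True)

    # Find the trigger pulses that mark each video onset.
    # Each returned row is (sample index, previous value, event code).
    events = mne.find_events(raw, stim_channel="STI 014")

    # Cut the continuous data into epochs time-locked to the video onsets,
    # with a 200 ms pre-stimulus baseline and 1 s of post-onset data.
    epochs = mne.Epochs(
        raw,
        events,
        event_id={"video_onset": 1},
        tmin=-0.2,
        tmax=1.0,
        baseline=(None, 0),
        preload=True,
    )

    # Average the epochs to obtain the event-related response.
    evoked = epochs.average()

In practice, studies of this kind often also verify the recorded triggers against the actually displayed video frames (for example with a photodiode), since even small delays between intended and displayed onsets smear the time-locked averages.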

