Show simple item record

dc.contributor.author: Jantunen, Tommi
dc.contributor.author: Puupponen, Anna
dc.contributor.author: Burger, Birgitta
dc.contributor.editor: Calzolari, Nicoletta
dc.contributor.editor: Béchet, Frédéric
dc.contributor.editor: Blache, Philippe
dc.contributor.editor: Choukri, Khalid
dc.contributor.editor: Cieri, Christopher
dc.contributor.editor: Declerck, Thierry
dc.contributor.editor: Goggi, Sara
dc.contributor.editor: Isahara, Hitoshi
dc.contributor.editor: Maegaard, Bente
dc.contributor.editor: Mariani, Joseph
dc.contributor.editor: Mazo, Hélène
dc.contributor.editor: Moreno, Asuncion
dc.contributor.editor: Odijk, Jan
dc.contributor.editor: Piperidis, Stelios
dc.date.accessioned: 2020-07-02T07:35:19Z
dc.date.available: 2020-07-02T07:35:19Z
dc.date.issued: 2020
dc.identifier.citation: Jantunen, T., Puupponen, A., & Burger, B. (2020). What Comes First : Combining Motion Capture and Eye Tracking Data to Study the Order of Articulators in Constructed Action in Sign Language Narratives. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), LREC 2020 : Proceedings of the 12th Conference on Language Resources and Evaluation (pp. 6003-6007). European Language Resources Association. LREC proceedings. https://www.aclweb.org/anthology/2020.lrec-1.735.pdf
dc.identifier.other: CONVID_36243546
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/71022
dc.description.abstract: We use synchronized 120 fps motion capture and 50 fps eye tracking data from two native signers to investigate the temporal order in which the dominant hand, the head, the chest and the eyes start producing overt constructed action from regular narration in seven short Finnish Sign Language stories. From the material, we derive a sample of ten instances of regular narration to overt constructed action transfers in ELAN which we then further process and analyze in Matlab. The results indicate that the temporal order of articulators shows both contextual and individual variation but that there are also repeated patterns which are similar across all the analyzed sequences and signers. Most notably, when the discourse strategy changes from regular narration to overt constructed action, the head and the eyes tend to take the leading role, and the chest and the dominant hand tend to start acting last. Consequences of the findings are discussed. [en]
dc.format.extent: 7353
dc.format.mimetype: application/pdf
dc.language: eng
dc.language.iso: eng
dc.publisher: European Language Resources Association
dc.relation.ispartof: LREC 2020 : Proceedings of the 12th Conference on Language Resources and Evaluation
dc.relation.ispartofseries: LREC proceedings
dc.relation.uri: https://www.aclweb.org/anthology/2020.lrec-1.735.pdf
dc.rights: CC BY-NC 4.0
dc.subject.other: motion capture
dc.subject.other: eye tracking
dc.subject.other: sign language
dc.subject.other: constructed action
dc.subject.other: narration
dc.title: What Comes First : Combining Motion Capture and Eye Tracking Data to Study the Order of Articulators in Constructed Action in Sign Language Narratives
dc.type: conferenceObject
dc.identifier.urn: URN:NBN:fi:jyu-202007025203
dc.contributor.laitos: Kieli- ja viestintätieteiden laitos [fi]
dc.contributor.laitos: Musiikin, taiteen ja kulttuurin tutkimuksen laitos [fi]
dc.contributor.laitos: Department of Language and Communication Studies [en]
dc.contributor.laitos: Department of Music, Art and Culture Studies [en]
dc.contributor.oppiaine: Suomalainen viittomakieli [fi]
dc.contributor.oppiaine: Musiikkitiede [fi]
dc.contributor.oppiaine: Finnish Sign Language [en]
dc.contributor.oppiaine: Musicology [en]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
dc.relation.isbn: 979-10-95546-34-4
dc.type.coar: http://purl.org/coar/resource_type/c_5794
dc.description.reviewstatus: peerReviewed
dc.format.pagerange: 6003-6007
dc.relation.issn: 2522-2686
dc.type.version: publishedVersion
dc.rights.copyright: © European Language Resources Association (ELRA)
dc.rights.accesslevel: openAccess [fi]
dc.relation.conference: International Conference on Language Resources and Evaluation
dc.relation.grantnumber: 304034
dc.relation.grantnumber: 269089
dc.relation.grantnumber: 299067
dc.subject.yso: viittomakieli (sign language)
dc.subject.yso: katseenseuranta (eye tracking)
dc.subject.yso: suomalainen viittomakieli (Finnish Sign Language)
dc.subject.yso: liikkeenkaappaus (motion capture)
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p6834
jyx.subject.uri: http://www.yso.fi/onto/yso/p37956
jyx.subject.uri: http://www.yso.fi/onto/yso/p21310
jyx.subject.uri: http://www.yso.fi/onto/yso/p27199
dc.rights.url: https://creativecommons.org/licenses/by-nc/4.0/
dc.relation.funder: Research Council of Finland [en]
dc.relation.funder: Research Council of Finland [en]
dc.relation.funder: Research Council of Finland [en]
dc.relation.funder: Suomen Akatemia [fi]
dc.relation.funder: Suomen Akatemia [fi]
dc.relation.funder: Suomen Akatemia [fi]
jyx.fundingprogram: Research costs of Academy Research Fellow, AoF [en]
jyx.fundingprogram: Academy Research Fellow, AoF [en]
jyx.fundingprogram: Postdoctoral Researcher, AoF [en]
jyx.fundingprogram: Akatemiatutkijan tutkimuskulut, SA [fi]
jyx.fundingprogram: Akatemiatutkija, SA [fi]
jyx.fundingprogram: Tutkijatohtori, SA [fi]
jyx.fundinginformation: The study was financed by the Academy of Finland under grants 269089 & 304034 (TJ) and 299067 (BB).
datacite.isSupplementedBy.doi: 10.17011/jyx/dataset/83520
datacite.isSupplementedBy: Jantunen, Tommi; Wainio, Tuija; Burger, Birgitta. (2022). Project data of ShowTell – Finnish Sign Language MoCap corpus. University of Jyväskylä. https://doi.org/10.17011/jyx/dataset/83520. http://urn.fi/URN:NBN:fi:jyu-202210124845
dc.type.okm: A4
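
The abstract above summarises the analysis pipeline: per-articulator onsets of overt constructed action are identified around each transition from regular narration, using synchronized 120 fps motion-capture and 50 fps eye-tracking data annotated in ELAN and processed in Matlab. As a minimal illustration of that kind of comparison, assuming onset frame indices for each articulator have already been located, the hypothetical Python sketch below (not the authors' Matlab code) puts frames from the two sampling rates on a shared time base and ranks the articulators by onset time; all function names and the example frame numbers are illustrative, not taken from the paper.

```python
"""Hypothetical sketch: ordering articulator onsets around one
narration-to-constructed-action transition, after converting 120 Hz
motion-capture frames and 50 Hz eye-tracking frames to seconds."""

MOCAP_FPS = 120.0   # motion-capture sampling rate reported in the abstract
EYE_FPS = 50.0      # eye-tracking sampling rate reported in the abstract


def frame_to_seconds(frame_index: int, fps: float) -> float:
    """Convert a frame index in one recording to seconds on a shared timeline."""
    return frame_index / fps


def onset_order(onsets_s: dict[str, float]) -> list[tuple[str, float]]:
    """Sort articulators by the time (s) at which each starts constructed action."""
    return sorted(onsets_s.items(), key=lambda item: item[1])


if __name__ == "__main__":
    # Made-up frame indices for one transition; only the sampling rates
    # come from the abstract.
    onsets = {
        "eyes": frame_to_seconds(612, EYE_FPS),        # eye-tracking stream
        "head": frame_to_seconds(1478, MOCAP_FPS),     # mocap stream
        "chest": frame_to_seconds(1523, MOCAP_FPS),
        "dominant hand": frame_to_seconds(1540, MOCAP_FPS),
    }
    for articulator, t in onset_order(onsets):
        print(f"{articulator}: {t:.3f} s")
```

With these invented frame indices the printed order is eyes, head, chest, dominant hand, which mirrors the repeated pattern the abstract reports (head and eyes leading, chest and dominant hand starting last).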

