Show simple item record

dc.contributor.author: Wang, Yingying
dc.contributor.author: Lu, Yingzhi
dc.contributor.author: Deng, Yuqin
dc.contributor.author: Gu, Nan
dc.contributor.author: Parviainen, Tiina
dc.contributor.author: Zhou, Chenglin
dc.date.accessioned: 2019-07-16T06:20:30Z
dc.date.available: 2019-07-16T06:20:30Z
dc.date.issued: 2019
dc.identifier.citation: Wang, Y., Lu, Y., Deng, Y., Gu, N., Parviainen, T., & Zhou, C. (2019). Predicting domain-specific actions in expert table tennis players activates the semantic brain network. NeuroImage, 200, 482-489. https://doi.org/10.1016/j.neuroimage.2019.06.035
dc.identifier.other: CONVID_32097257
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/65068
dc.description.abstract: Motor expertise acquired during long-term training in sports enables top athletes to predict the outcomes of domain-specific actions better than nonexperts do. However, it remains unclear whether expert players encode actions not only at the concrete sensorimotor level but also at a more abstract, conceptual level. The present study manipulated the congruence between body kinematics and the subsequent ball trajectory in videos of an expert player performing table tennis serves. Using functional magnetic resonance imaging, brain activity was evaluated in expert and nonexpert table tennis players while they predicted the fate of the ball trajectory in congruent versus incongruent videos. Compared with novices, expert players showed greater activation for incongruent versus congruent videos in sensorimotor areas (right precentral and postcentral gyri). They also showed greater activation in areas related to semantic processing: the posterior inferior parietal lobe (angular gyrus), middle temporal gyrus, and ventromedial prefrontal cortex. These findings indicate that action anticipation in expert table tennis players engages both semantic and sensorimotor regions, and suggest that skilled action observation in sports draws on predictions at both the motor-kinematic and conceptual levels. [en]
dc.format.mimetype: application/pdf
dc.language: eng
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartofseries: NeuroImage
dc.rights: CC BY-NC-ND 4.0
dc.subject.other: toiminnallinen magneettikuvaus
dc.subject.other: ennakointi
dc.subject.other: pöytätennis
dc.subject.other: pelaajat
dc.subject.other: peilisolut
dc.subject.other: havainnointi
dc.subject.other: functional magnetic resonance imaging
dc.subject.other: semantic expectation
dc.subject.other: action anticipation
dc.subject.other: table tennis player
dc.subject.other: mirror neuron system
dc.subject.other: action observation
dc.title: Predicting domain-specific actions in expert table tennis players activates the semantic brain network
dc.type: research article
dc.identifier.urn: URN:NBN:fi-fe2019071823133
dc.contributor.laitos: Psykologian laitos [fi]
dc.contributor.laitos: Department of Psychology [en]
dc.contributor.oppiaine: Monitieteinen aivotutkimuskeskus [fi]
dc.contributor.oppiaine: Hyvinvoinnin tutkimuksen yhteisö [fi]
dc.contributor.oppiaine: Centre for Interdisciplinary Brain Research [en]
dc.contributor.oppiaine: School of Wellbeing [en]
dc.type.uri: http://purl.org/eprint/type/JournalArticle
dc.type.coar: http://purl.org/coar/resource_type/c_2df8fbb1
dc.description.reviewstatus: peerReviewed
dc.format.pagerange: 482-489
dc.relation.issn: 1053-8119
dc.relation.volume: 200
dc.type.version: acceptedVersion
dc.rights.copyright: © 2019 Elsevier Inc.
dc.rights.accesslevel: openAccess [fi]
dc.type.publication: article
dc.format.content: fulltext
dc.rights.url: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.relation.doi: 10.1016/j.neuroimage.2019.06.035
jyx.fundinginformation: This work was supported by a grant from the National Natural Science Foundation of China (No. 31571151), and YW was supported by a grant from the China Scholarship Council.
dc.type.okm: A1


Files in this item


This item appears in the following collection(s)


CC BY-NC-ND 4.0
Except where otherwise noted, this item's license is described as CC BY-NC-ND 4.0