dc.contributor.author: Zhuang, Mingrui
dc.contributor.author: Chen, Zhonghua
dc.contributor.author: Wang, Hongkai
dc.contributor.author: Tang, Hong
dc.contributor.author: He, Jiang
dc.contributor.author: Qin, Bobo
dc.contributor.author: Yang, Yuxin
dc.contributor.author: Jin, Xiaoxian
dc.contributor.author: Yu, Mengzhu
dc.contributor.author: Jin, Baitao
dc.contributor.author: Li, Taijing
dc.contributor.author: Kettunen, Lauri
dc.date.accessioned: 2022-09-02T08:54:54Z
dc.date.available: 2022-09-02T08:54:54Z
dc.date.issued: 2023
dc.identifier.citation: Zhuang, M., Chen, Z., Wang, H., Tang, H., He, J., Qin, B., Yang, Y., Jin, X., Yu, M., Jin, B., Li, T., & Kettunen, L. (2023). Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images. International Journal of Computer Assisted Radiology and Surgery, 18(2), 379-394. https://doi.org/10.1007/s11548-022-02730-z
dc.identifier.other: CONVID_155803705
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/82925
dc.description.abstract: Purpose: Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden. Methods: We develop a contour-based AID algorithm which uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading. Results: For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set. Conclusion: Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. The good shape learning ability and flexible boundary adjustment function make it suitable for fast annotation of organ structures with regular shapes. (en)
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Springer
dc.relation.ispartofseries: International Journal of Computer Assisted Radiology and Surgery
dc.rights: CC BY 4.0
dc.subject.other: medical image annotation
dc.subject.other: deep learning
dc.subject.other: organ segmentation
dc.subject.other: interactive segmentation
dc.title: Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images
dc.type: article
dc.identifier.urn: URN:NBN:fi:jyu-202209024458
dc.contributor.laitos: Informaatioteknologian tiedekunta (fi)
dc.contributor.laitos: Faculty of Information Technology (en)
dc.contributor.oppiaine: Laskennallinen tiede (fi)
dc.contributor.oppiaine: Secure Communications Engineering and Signal Processing (fi)
dc.contributor.oppiaine: Computing, Information Technology and Mathematics (fi)
dc.contributor.oppiaine: Computational Science (en)
dc.contributor.oppiaine: Secure Communications Engineering and Signal Processing (en)
dc.contributor.oppiaine: Computing, Information Technology and Mathematics (en)
dc.type.uri: http://purl.org/eprint/type/JournalArticle
dc.type.coar: http://purl.org/coar/resource_type/c_2df8fbb1
dc.description.reviewstatus: peerReviewed
dc.format.pagerange: 379-394
dc.relation.issn: 1861-6410
dc.relation.numberinseries: 2
dc.relation.volume: 18
dc.type.version: publishedVersion
dc.rights.copyright: © The Author(s) 2022
dc.rights.accesslevel: openAccess (fi)
dc.subject.yso: lääketieteellinen tekniikka (medical technology)
dc.subject.yso: algoritmit (algorithms)
dc.subject.yso: syväoppiminen (deep learning)
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p13486
jyx.subject.uri: http://www.yso.fi/onto/yso/p14524
jyx.subject.uri: http://www.yso.fi/onto/yso/p39324
dc.rights.url: https://creativecommons.org/licenses/by/4.0/
dc.relation.doi: 10.1007/s11548-022-02730-z
jyx.fundinginformation: This work was supported in part by the National Key Research and Development Program (Nos. 2020YFB1711500, 2020YFB1711501 and 2020YFB1711503), the general program of the National Natural Science Fund of China (Nos. 81971693, 61971445 and 61971089), Dalian City Science and Technology Innovation Funding (No. 2018J12GX042), the Fundamental Research Funds for the Central Universities (Nos. DUT19JC01 and DUT20YG122), and the funding of the Liaoning Key Lab of IC & BME System and the Dalian Engineering Research Center for Artificial Intelligence in Medical Imaging.
dc.type.okm: A1
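
The abstract above contrasts a contour (boundary) representation of organ surfaces with voxel-label masks. The following minimal Python sketch, assuming scikit-image is available, only illustrates that general idea by extracting per-slice boundary contours from a hypothetical 3D voxel mask; the function name mask_to_slice_contours, the toy spherical mask and the 0.5 iso-level are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only (not from the article): convert a voxel-label mask
# into per-slice boundary contours, i.e. the kind of contour representation
# the abstract contrasts with voxel labels.
import numpy as np
from skimage import measure  # scikit-image

def mask_to_slice_contours(mask_3d, level=0.5):
    """Map slice index -> list of (N, 2) contour point arrays (row, col)."""
    contours_per_slice = {}
    for z in range(mask_3d.shape[0]):
        slice_contours = measure.find_contours(mask_3d[z].astype(float), level)
        if slice_contours:
            contours_per_slice[z] = slice_contours
    return contours_per_slice

if __name__ == "__main__":
    # Toy stand-in for an organ mask: a sphere of radius 10 in a 32^3 volume.
    zz, yy, xx = np.mgrid[:32, :32, :32]
    mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 10 ** 2
    contours = mask_to_slice_contours(mask)
    print("slices with contours:", len(contours))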



Except where otherwise noted, this item's license is CC BY 4.0.