Show simple item record

dc.contributor.author: Wang, Hongkai
dc.contributor.author: Han, Ye
dc.contributor.author: Chen, Zhonghua
dc.contributor.author: Hu, Ruxue
dc.contributor.author: Chatziioannou, Arion F.
dc.contributor.author: Zhang, Bin
dc.date.accessioned: 2024-02-28T08:23:44Z
dc.date.available: 2024-02-28T08:23:44Z
dc.date.issued: 2019
dc.identifier.citation: Wang, H., Han, Y., Chen, Z., Hu, R., Chatziioannou, A. F., & Zhang, B. (2019). Prediction of major torso organs in low-contrast micro-CT images of mice using a two-stage deeply supervised fully convolutional network. Physics in Medicine and Biology, 64(24), Article 245014. https://doi.org/10.1088/1361-6560/ab59a4
dc.identifier.other: CONVID_33683600
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/93715
dc.description.abstract [en]: Delineation of major torso organs is a key step of mouse micro-CT image analysis. This task is challenging due to low soft tissue contrast and high image noise; therefore, anatomical prior knowledge is needed for accurate prediction of organ regions. In this work, we develop a deeply supervised fully convolutional network which uses the organ anatomy prior learned from independently acquired contrast-enhanced micro-CT images to assist the segmentation of non-enhanced images. The network is designed with a two-stage workflow which first predicts the rough regions of multiple organs and then refines the accuracy of each organ in local regions. The network is trained and evaluated with 40 mouse micro-CT images. The volumetric prediction accuracy (Dice score) varies from 0.57 for the spleen to 0.95 for the heart. Compared to a conventional atlas registration method, our method dramatically improves the Dice of the abdominal organs by 18%–26%. Moreover, the incorporation of the anatomical prior leads to more accurate results for small-sized low-contrast organs (e.g. the spleen and kidneys). We also find that the localized stage of the network has better accuracy than the global stage, indicating that localized single-organ prediction is more accurate than global multiple-organ prediction. With this work, the accuracy and efficiency of mouse micro-CT image analysis are greatly improved and the need for using contrast agent and high X-ray dose is potentially reduced.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Physics
dc.relation.ispartofseries: Physics in Medicine and Biology
dc.rights: CC BY-NC-ND 4.0
dc.subject.other: deeply supervised network
dc.subject.other: fully convolutional network
dc.subject.other: micro-CT
dc.subject.other: mouse image
dc.subject.other: organ segmentation
dc.title: Prediction of major torso organs in low-contrast micro-CT images of mice using a two-stage deeply supervised fully convolutional network
dc.type: article
dc.identifier.urn: URN:NBN:fi:jyu-202402282187
dc.contributor.laitos [fi]: Informaatioteknologian tiedekunta
dc.contributor.laitos [en]: Faculty of Information Technology
dc.type.uri: http://purl.org/eprint/type/JournalArticle
dc.type.coar: http://purl.org/coar/resource_type/c_2df8fbb1
dc.description.reviewstatus: peerReviewed
dc.relation.issn: 0031-9155
dc.relation.numberinseries: 24
dc.relation.volume: 64
dc.type.version: acceptedVersion
dc.rights.copyright: © 2019 Institute of Physics and Engineering in Medicine
dc.rights.accesslevel [fi]: openAccess
dc.subject.yso: tietokonetomografia (computed tomography)
dc.subject.yso: neuroverkot (neural networks)
dc.subject.yso: kuvantaminen (imaging)
dc.subject.yso: hahmontunnistus (tietotekniikka) (pattern recognition, computing)
dc.subject.yso: anatomia (anatomy)
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p20535
jyx.subject.uri: http://www.yso.fi/onto/yso/p7292
jyx.subject.uri: http://www.yso.fi/onto/yso/p3532
jyx.subject.uri: http://www.yso.fi/onto/yso/p8266
jyx.subject.uri: http://www.yso.fi/onto/yso/p1523
dc.rights.url: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.relation.doi: 10.1088/1361-6560/ab59a4
dc.type.okm: A1


Files in this item


This item appears in the following collection(s)


CC BY-NC-ND 4.0
Except where otherwise noted, this item's license is CC BY-NC-ND 4.0