Using deep learning to generate synthetic B-mode musculoskeletal ultrasound images
Cronin, N. J., Finni, T., & Seynnes, O. (2020). Using deep learning to generate synthetic B-mode musculoskeletal ultrasound images. Computer methods and programs in biomedicine, 196, Article 105583. https://doi.org/10.1016/j.cmpb.2020.105583
Published in: Computer Methods and Programs in Biomedicine
Date: 2020
Copyright: © 2020 the Author(s)
Background and Objective: Deep learning approaches are common in image processing, but often rely on supervised learning, which requires a large volume of training images, usually accompanied by hand-crafted labels. As labelled data are often unavailable, methods that allow such data to be compiled automatically would be desirable. In this study, we used a Generative Adversarial Network (GAN) to generate realistic B-mode musculoskeletal ultrasound images, and tested the suitability of two automated labelling approaches.

Methods: We used a model comprising two GANs, each trained to transfer an image from one domain to another. The two inputs were a set of 100 longitudinal images of the gastrocnemius medialis muscle and a set of 100 synthetic segmented masks, each featuring two aponeuroses and a random number of 'fascicles'. The model output a set of synthetic ultrasound images and an automated segmentation of each real input image. This automated segmentation process was the first of the two approaches we assessed. The second approach involved synthesising ultrasound images and then feeding them into an ImageJ/Fiji-based automated algorithm, to determine whether it could detect the aponeuroses and muscle fascicles.

Results: Histogram distributions were similar between real and synthetic images, but synthetic images displayed less variation between samples and a narrower range. Mean entropy values were statistically similar (real: 6.97, synthetic: 7.03; p = 0.218), but the range was much narrower for synthetic images (6.91–7.11 versus 6.30–7.62). When comparing GAN-derived and manually labelled segmentations, intersection-over-union values (denoting the degree of overlap between aponeurosis labels) varied between 0.0280 and 0.612 (mean ± SD: 0.312 ± 0.159), and pennation angles were higher for the GAN-derived segmentations (25.1° vs. 19.3°; p < 0.001). For the second segmentation approach, the algorithm generally performed equally well on synthetic and real images, yielding pennation angles within the physiological range (13.8–20°).

Conclusions: We used a GAN to generate realistic B-mode ultrasound images and extracted muscle architectural parameters from them automatically. This approach could enable generation of large labelled datasets for image segmentation tasks, and may also be useful for data sharing. Automatic generation and labelling of ultrasound images minimises user input and overcomes several limitations associated with manual analysis.
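The Results above compare images via histogram entropy and segmentations via intersection-over-union (IoU). As an illustrative sketch only (not the authors' code), both metrics can be written in a few lines of NumPy; the mask shapes, bin count, and example rectangles below are assumptions for demonstration:

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return float(np.logical_and(a, b).sum() / union)

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of an 8-bit greyscale image histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical example: two overlapping rectangular "aponeurosis" masks
a = np.zeros((64, 64), dtype=np.uint8); a[10:20, :] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:25, :] = 1
print(round(iou(a, b), 3))  # prints 0.333 (5 shared rows of 15 in the union)
```

An entropy near 8 bits would indicate a near-uniform greyscale histogram; the narrower entropy range reported for synthetic images (6.91–7.11) reflects their reduced sample-to-sample variation.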
Publisher: Elsevier
ISSN: 0169-2607
Publication in research information system
https://converis.jyu.fi/converis/portal/detail/Publication/35935057
Collections
- Liikuntatieteiden tiedekunta [3259]
Additional information about funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Related items
Showing items with similar title or keywords.
- DL_Track: Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
  Ritsche, Paul; Faude, Oliver; Franchi, Martino; Finni, Taija; Seynnes, Olivier; Cronin, Neil (Bern Open Publishing, 2023)
- Generating synthetic past and future states of Knee Osteoarthritis radiographs using Cycle-Consistent Generative Adversarial Neural Networks
  Prezja, Fabi; Annala, Leevi; Kiiskinen, Sampsa; Lahtinen, Suvi; Ojala, Timo; Nieminen, Paavo (Elsevier, 2025)
  Knee Osteoarthritis (KOA), a leading cause of disability worldwide, is challenging to detect early due to subtle radiographic indicators. Diverse, extensive datasets are needed but are challenging to compile because of ...
- Generating Hyperspectral Skin Cancer Imagery using Generative Adversarial Neural Network
  Annala, Leevi; Neittaanmäki, Noora; Paoli, John; Zaar, Oscar; Pölönen, Ilkka (IEEE, 2020)
  In this study we develop a proof of concept of using generative adversarial neural networks in hyperspectral skin cancer imagery production. Generative adversarial neural network is a neural network, where two neural ...
- Taxonomy of generative adversarial networks for digital immunity of Industry 4.0 systems
  Terziyan, Vagan; Gryshko, Svitlana; Golovianko, Mariia (Elsevier, 2021)
  Industry 4.0 systems are extensively using artificial intelligence (AI) to enable smartness, automation and flexibility within variety of processes. Due to the importance of the systems, they are potential targets for ...
- Causality-Aware Convolutional Neural Networks for Advanced Image Classification and Generation
  Terziyan, Vagan; Vitko, Oleksandra (Elsevier, 2023)
  Smart manufacturing uses emerging deep learning models, and particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), for different industrial diagnostics tasks, e.g., classification, ...