
dc.contributor.author	Kaliyugarasan, Sathiesh Kumar
dc.date.accessioned	2023-09-18T12:19:39Z
dc.date.available	2023-09-18T12:19:39Z
dc.date.created	2023-09-13T09:47:58Z
dc.date.issued	2023
dc.identifier.isbn	978-82-8461-023-8
dc.identifier.uri	https://hdl.handle.net/11250/3090119
dc.description.abstract	Deep learning (DL), a branch of artificial intelligence (AI), has experienced significant growth and advancements over the past decade and has shown great potential in various sectors, including the medical domain. The goal that drives deep learning research for medical applications is the development of tools that can enhance the accuracy and efficiency of diagnosis, reduce medical costs, and streamline and improve diagnostic processes through a greater degree of precision medicine, with better prognostics and stratification of therapy. In modern medicine, radiology has become increasingly important, with medical imaging playing a critical role in detecting, diagnosing, and treating various diseases. Simultaneously, there is a shortage of qualified medical specialists, i.e., radiologists.

The potential of deep learning for medical image analysis is evident; however, much of the excitement around its applications is rooted in retrospective studies. In practice, only a limited number of deep learning-based studies have progressed to deployment in clinical care. Moreover, at least part of the field seems to be facing a reproducibility crisis. The reasons for this are multiple, including technical debt, overfitted models, selection bias, and heavy preprocessing of data sets in the scientific community that does not properly reflect clinical diversity and local variations. These issues can be attributed, in part, to insufficient collaboration between the medical and data science communities. To overcome these obstacles and fully realize the benefits of data-driven medical imaging, it is crucial to foster interdisciplinary collaboration. As one possible remedy, deep learning frameworks tailored to medical imaging can help foster such collaboration, facilitate rapid iterative development, and support reproducible research. Such frameworks can make it easier for domain experts to join in on method development and for other researchers to verify the validity of reported results and build upon existing work. This can help accelerate the integration of deep learning-based solutions into clinical practice.

To address these challenges and promote the integration of cutting-edge deep learning-based solutions into clinical practice, Medical Open Network for Artificial Intelligence (MONAI) provides an open-source PyTorch-based deep learning framework to support medical data, with a particular focus on imaging applications. Following best practices for software development, MONAI provides an easy-to-use, well-documented, and well-tested software framework freely available to all interested researchers via https://monai.io/. In this thesis, we present fastMONAI, a low-code Python-based open-source deep learning library built on frameworks from MONAI. The library incorporates several best practices and state-of-the-art techniques by integrating capabilities from MONAI with two other powerful libraries, fastai and TorchIO, along with custom-made modules. fastMONAI provides a high-level API that simplifies the process of data loading, preprocessing, training, and result interpretation, allowing researchers to spend less time on coding and focus more on the challenges within each project. Despite its high-level interface, fastMONAI maintains the customization and flexibility of fastai, enabling experienced practitioners to incorporate custom extensions when needed.

The development and evaluation of fastMONAI have been conducted using both public and clinical study data involving multiple patient groups, radiological domains, and organ systems, including identifying the brain from surrounding tissue and structures (Paper B), lung cancer (Paper C), gynecological cancer (Paper D), and low back pain (Paper E). Each patient group requires accurate and efficient medical imaging analysis for diagnosis and treatment planning. Our results in this thesis demonstrate promising improvements in diagnostic accuracy and streamlined workflows. However, to thoroughly evaluate the models, it is crucial to integrate them into real-world workflows and study their performance in realistic contexts. In this thesis, we found that the flexibility and the user-friendly API of fastMONAI facilitate the integration of trained models into clinical infrastructure (see Figure 4.5). This is explored further in ongoing and future work building on the thesis results.	en_US
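As a rough illustration of the workflow the abstract refers to, the sketch below uses plain MONAI and PyTorch calls to spell out the steps (loading medical images, preprocessing, and training a 3D network) that fastMONAI's high-level API is described as condensing. This is a minimal sketch, not code from the thesis or from the fastMONAI library itself; the file names, labels, network choice, and hyperparameters are placeholders chosen for illustration, and reading NIfTI files additionally assumes an image reader such as nibabel is installed.

    import torch
    from monai.data import DataLoader, Dataset
    from monai.networks.nets import DenseNet121
    from monai.transforms import (Compose, EnsureChannelFirstd, LoadImaged,
                                  Resized, ScaleIntensityd)

    # Placeholder image/label pairs; in practice these would come from a study-specific table.
    data = [{"img": "scan_001.nii.gz", "label": 0},
            {"img": "scan_002.nii.gz", "label": 1}]

    preprocess = Compose([
        LoadImaged(keys="img"),                          # read the volume from disk
        EnsureChannelFirstd(keys="img"),                 # add a channel dimension
        ScaleIntensityd(keys="img"),                     # scale intensities to [0, 1]
        Resized(keys="img", spatial_size=(96, 96, 96)),  # resize to a fixed 3D shape
    ])

    loader = DataLoader(Dataset(data=data, transform=preprocess), batch_size=2, shuffle=True)

    model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)  # 3D classifier
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    for epoch in range(2):                               # short run for illustration only
        for batch in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch["img"]), batch["label"])
            loss.backward()
            optimizer.step()

According to the abstract, fastMONAI wraps this kind of boilerplate (data loading, preprocessing, and training) behind a fastai-style high-level API while remaining open to custom extensions when needed.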
dc.language.iso	eng	en_US
dc.publisher	Høgskulen på Vestlandet	en_US
dc.relation.haspart	Kaliyugarasan, Satheshkumar and Lundervold, Alexander Selvikvåg. fastMONAI: a low-code deep learning library for medical image analysis. Manuscript, April 2023	en_US
dc.relation.haspart	Kaliyugarasan, Satheshkumar, Kociński, Marek, Lundervold, Arvid and Lundervold, Alexander Selvikvåg. 2D and 3D U-Nets for skull stripping in a large and heterogeneous set of head MRI using fastai. In Proceedings of the 33rd Norwegian Informatics Conference (NIK), 23 November 2020	en_US
dc.relation.haspart	Kaliyugarasan, Satheshkumar, Lundervold, Arvid and Lundervold, Alexander Selvikvåg. Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI. In International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), Volume 6, Number 7, 4 May 2021.	en_US
dc.relation.haspart	Hodneland, Erlend, Kaliyugarasan, Satheshkumar, Wagner-Larsen, Kari Strøno, Lura, Njål, Andersen, Erling, Bartsch, Hauke, Smit, Noeska, Halle, Mari Kyllesø, Krakstad, Camilla, Lundervold, Alexander Selvikvåg and Haldorsen, Ingfrid Salvesen. Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer. In Cancers, Volume 14, Number 10, 11 May 2022.	en_US
dc.relation.haspart	Kaliyugarasan, Satheshkumar, Dagestad, Magnhild H., Papalini, Evin I., Andersen, Erling, Zwart, John-Anker, Brisby, Helena, Hebelka, Hanna, Espeland, Ansgar, Lagerstrand, Kerstin M. and Lundervold, Alexander Selvikvåg. Multi-Center CNN-based spine segmentation from T2w MRI using small amounts of data. To appear in the Proceedings of the 20th IEEE International Symposium on Biomedical Imaging (ISBI), 18-21 April 2023.	en_US
dc.title	Deep learning in medical image analysis: Efficient use of data and radiological expertise	en_US
dc.type	Doctoral thesis	en_US
dc.description.version	acceptedVersion	en_US
dc.rights.holder	© Satheshkumar Kaliyugarasan, 2023	en_US
dc.source.pagenumber	142	en_US
dc.identifier.cristin	2174566
cristin.ispublished	true
cristin.fulltext	postprint

