Show simple item record

dc.contributor.author: Ahmed, Usman
dc.contributor.author: Lin, Jerry Chun-Wei
dc.contributor.author: Srivastava, Gautam
dc.date.accessioned: 2023-03-23T14:09:25Z
dc.date.available: 2023-03-23T14:09:25Z
dc.date.created: 2022-08-16T13:31:18Z
dc.date.issued: 2022
dc.identifier.citation: Multimedia Tools and Applications. 2022, 81(29), 41899-41910.
dc.identifier.issn: 1380-7501
dc.identifier.uri: https://hdl.handle.net/11250/3060194
dc.description.abstract: In the Internet of Medical Things (IoMT), collaboration among institutions can support complex medical and clinical analysis of disease. Deep neural networks (DNNs) require training on large, diverse patient populations to achieve expert clinician-level performance. Due to limited availability and scale, clinical studies do not contain sufficiently diverse patient populations for analysis, so DNN models trained on limited datasets show constrained clinical performance when deployed at a new hospital. There is therefore significant value in increasing the availability of diverse training data. This research proposes institutional data collaboration alongside an adversarial evasion method to keep the data secure. The model uses a federated learning approach to share model weights and gradients. The local model first examines the unlabeled samples, classifying them as adversarial or normal. The method then uses a centroid-based clustering technique to cluster the sample images. The model then predicts the output of the selected images, and active learning methods are applied to choose a sub-sample for the human annotation task. A domain expert reviews each input and its confidence score and validates the samples for the model's training. The model retrains on the new samples and sends the updated weights across the network for collaboration. We use the InceptionV3 and VGG16 models with fabricated inputs to simulate Fast Gradient Sign Method (FGSM) attacks. The model was able to evade attacks and achieved an accuracy of 95%.
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Mitigating adversarial evasion attacks by deep active learning for medical image classification
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: © The Author(s) 2021
dc.source.pagenumber: 41899-41910
dc.source.volume: 81
dc.source.journal: Multimedia Tools and Applications
dc.source.issue: 29
dc.identifier.doi: 10.1007/s11042-021-11473-z
dc.identifier.cristin: 2043430
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
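The abstract above describes simulating Fast Gradient Sign Method (FGSM) attacks on InceptionV3 and VGG16 inputs. As a rough illustration of the FGSM update x_adv = x + ε·sign(∇ₓL), here is a minimal sketch on a toy linear model; all names (`toy_grad`, `fgsm_perturb`, `eps`) and the toy loss are illustrative assumptions, not taken from the paper:

```python
# Minimal FGSM sketch on a toy linear model with loss L(x) = -y * (w . x).
# The paper applies the same perturbation rule to medical image inputs.

def sign(v):
    """Sign of a scalar: -1, 0, or +1."""
    return (v > 0) - (v < 0)

def toy_grad(w, x, y):
    """Gradient of L(x) = -y * (w . x) with respect to x: -y * w."""
    return [-y * wi for wi in w]

def fgsm_perturb(x, grad, eps):
    """FGSM update: x_adv = x + eps * sign(grad_x L)."""
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w = [0.5, -1.0, 2.0]   # toy model weights (illustrative)
x = [1.0, 2.0, 0.5]    # clean input
y = 1                  # true label

grad = toy_grad(w, x, y)
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

Each input feature is nudged by a fixed step ε in the direction that increases the loss, which is what makes FGSM a cheap but effective evasion attack to defend against.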



Except where otherwise noted, this item's license is described as Attribution 4.0 International (Navngivelse 4.0 Internasjonal).