Show simple item record

dc.contributor.author: Sagemüller, Justus
dc.date.accessioned: 2024-04-12T06:42:58Z
dc.date.available: 2024-04-12T06:42:58Z
dc.date.created: 2024-04-11T09:07:47Z
dc.date.issued: 2024
dc.identifier.citation: Sagemüller, J. (2024). Paths towards reliable explainability: In neural networks for image-processing [Doctoral dissertation, Western Norway University of Applied Sciences]. HVL Open. [en_US]
dc.identifier.isbn: 978-82-8461-064-1
dc.identifier.uri: https://hdl.handle.net/11250/3126193
dc.description.abstract: Machine learning systems (often referred to as AI, not always appropriately) are increasingly used in varied applications, including ones with strong impact on human lives. While this is expected to bring economic and scientific progress, it also has several controversial aspects. A major one of these is that black-box models make it hard or impossible to answer questions regarding, e.g., the stability of an output or the influence of biases in a training dataset, let alone to rigorously reason about correctness. It is by now known that AI systems can and do often produce convincing yet wrong or misaligned outputs, with a significant potential for detrimental impacts on society. A better understanding of such systems is therefore needed. The two main approaches to this are finding explanations for the decisions of existing models, and designing models specifically to be more interpretable. This thesis investigates the use of mathematical methods towards both of these goals. We provide a toolkit that improves on existing saliency methods for highlighting the parts of an image that are important for a classifier's decision. The main contribution is the Ablation Path formalism, which generates perturbations of inputs in a way that makes it convenient for a human to inspect and assess the faithfulness of the explanation. Additionally, we propose a new way of using the SIFT technique as a feature basis for saliency. This overcomes some of the technical challenges with existing methods, and also provides information that can be argued to be more useful than the standard location heatmaps. On the interpretability front, we study a use case of machine learning denoising in which symmetries play a crucial role: Cryo-EM. Symmetries are a known aspect of many neural networks and their applications; for Cryo-EM they can be unusually well exploited and quantified. We propose a variation of convolutional networks dedicated to the particular symmetries of the application, and investigate how this impacts performance and other properties. These contributions push the state of the art for explainability of image classification, and also provide a starting point for further advances on both explainability and interpretability in this application and others. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Høgskulen på Vestlandet [en_US]
dc.title: Paths towards reliable explainability: In neural networks for image-processing [en_US]
dc.type: Doctoral thesis [en_US]
dc.description.version: acceptedVersion [en_US]
dc.rights.holder: © Justus Sagemüller, 2024 [en_US]
dc.source.pagenumber: 188 [en_US]
dc.identifier.cristin: 2260829
cristin.ispublished: true
cristin.fulltext: postprint

