Bachelor thesis

Abstract

Prediction models are useful in many areas, as they both support decisions and contribute to an understanding of the problem. In medicine, they are often used to predict diseases, outbreaks, reactions to medications, and so on. Data scientists strive to improve these models in order to obtain more accurate results and a better understanding of different phenomena. Since deep learning models are considered black boxes, their output decisions are not easily explained, yet interpreting them would be very beneficial. This thesis presents two approaches to interpreting medical prediction models. The first uses contextual decomposition, focusing not only on the importance of individual features but also on the interactions between them; this makes it possible to understand complex features and their role in models. The second leverages saliency maps to provide visual explanations by highlighting the parts of an image that most influence the model's prediction. A comparison of both methods on a skin cancer dataset shows the similarities and differences between the two. The results show that the second approach yields more understandable explanations, while the first is more useful when trying to improve a model's accuracy.
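The two explanation ideas summarized above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the thesis implementation: a fixed linear score stands in for a trained classifier, the contextual-decomposition idea is shown as the exact additive split of that score into the contribution of a chosen feature group versus the rest, and the saliency map is approximated as the per-input gradient magnitude via finite differences.

```python
import numpy as np

# Hypothetical linear "model" standing in for a trained classifier;
# the thesis uses deep networks, but a linear score keeps the two
# explanation ideas easy to compare.
rng = np.random.default_rng(0)
W = rng.normal(size=16)

def score(x: np.ndarray) -> float:
    return float(W @ x.ravel())

# 1) Contextual-decomposition-style split: for a linear model the
#    score decomposes exactly into the contribution of a chosen
#    feature group (beta) and the contribution of everything else (gamma).
def decompose(x: np.ndarray, group):
    flat = x.ravel()
    mask = np.zeros(flat.size, dtype=bool)
    mask[list(group)] = True
    beta = float(W[mask] @ flat[mask])
    gamma = float(W[~mask] @ flat[~mask])
    return beta, gamma

# 2) Saliency map: importance of each input as the magnitude of the
#    output's gradient, approximated here by finite differences
#    (for a linear model this recovers |W|).
def saliency(x: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    base = score(x)
    sal = np.empty(x.size)
    for i in range(x.size):
        bumped = x.ravel().copy()
        bumped[i] += eps
        sal[i] = abs(score(bumped.reshape(x.shape)) - base) / eps
    return sal.reshape(x.shape)

x = rng.random((4, 4))
beta, gamma = decompose(x, group=[0, 1, 2])
sal = saliency(x)
```

For real networks both methods are non-trivial: contextual decomposition propagates the group/rest split through nonlinear layers, and saliency maps use backpropagated gradients rather than finite differences; the sketch only conveys the shape of the two explanations.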

Keywords

artificial intelligence;predictions;background knowledge;medicine;explainable AI;computer and information science;diploma thesis;

Data

Language: English
Year of publishing:
Typology: 2.11 - Undergraduate Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [L. Lumburovska]
UDC: 004.8:61(043.2)
COBISS: 137274883

Other data

Secondary language: Slovenian
Secondary title: Razlaga napovednih modelov v medicini z uporabo predznanja
Secondary abstract: Napovedni modeli so uporabni na mnogih področjih, saj zagotavljajo odločitve in prispevajo k razumevanju problema. V medicini se pogosto uporabljajo za napovedovanje bolezni, izbruhov nalezljivih bolezni, reakcij na zdravila itd. Raziskovalci si prizadevajo izboljšati modele, da bi dobili boljše napovedi in boljše razumevanje različnih fenomenov. Ker modeli globokega učenja veljajo za črne škatle, njihovih odločitev ni enostavno interpretirati. Izboljšave na tem področju bi bile zelo dobrodošle. V diplomskem delu sta prikazana dva pristopa k interpretaciji modelov. Prvi uporablja kontekstualno dekompozicijo, metodo, ki se ne osredotoča le na pomembnost posameznih atributov, ampak tudi na interakcije med njimi. S pristopom lahko razumemo kompleksne značilke in njihovo vlogo v modelih. Drugi pristop izkorišča pomembne značilke, da najde vizualne razlage, kateri deli slike so najbolj vplivni v modelu. Primerjava obeh metod na problemu kožnega raka pokaže podobnosti in razlike med obema. Rezultati kažejo, da nam drugi pristop daje bolj razumljive razlage, prvi pristop pa je bolj uporaben, ko poskušamo izboljšati natančnost modelov.
Secondary keywords: napovedni modeli;predznanje;razložljiva umetna inteligenca;računalništvo in informatika;univerzitetni študij;diplomske naloge;Umetna inteligenca;Medicina;Računalništvo;Univerzitetna in visokošolska dela;
Type (COBISS): Bachelor thesis/paper
Study programme: 1000468
Embargo end date (OpenAIRE): 1970-01-01
Thesis comment: Univ. v Ljubljani, Fak. za računalništvo in informatiko
Pages: 36
ID: 17653636