Bachelor's thesis

Abstract

With the rise in use of complex machine learning models, post hoc explanation methods such as SHAP and LIME have become popular. These methods produce explanations in the form of feature contributions to a given prediction. Feature-level explanations are not necessarily understandable to human experts, because their connection to background knowledge is unclear. In this thesis we present ReEx (Reasoning from Explanations), a method compatible with the output of feature-level explainers, which uses background knowledge in the form of ontologies to generalize explanations following the least general generalization principle. The result is semantic explanations that are specific to individual classes and potentially more informative, as they present the key features in the context of background knowledge. We implemented ReEx as a Python library compatible with methods such as SHAP and LIME. To evaluate the method, we define measures of generalization and of mutual overlap of explanations. We empirically evaluate the method on three textual datasets.
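The core idea described above — replacing a set of important features with their least general generalization in an ontology — can be sketched as follows. This is a hypothetical illustration, not the actual ReEx library API: the toy ontology, function names, and the top-k selection are all assumptions made for the example.

```python
# Hypothetical sketch of the generalization idea behind ReEx (not the
# actual library API). Given feature importances (e.g. from SHAP or
# LIME) and a toy ontology stored as a child -> parent mapping, we
# replace the most important features with their least general
# generalization (their lowest common ancestor).

# Toy ontology: each term maps to its parent; the root maps to None.
ONTOLOGY = {
    "entity": None,
    "animal": "entity",
    "vehicle": "entity",
    "dog": "animal",
    "cat": "animal",
    "car": "vehicle",
}

def ancestors(term):
    """Return the path from a term up to the root, term included."""
    path = []
    while term is not None:
        path.append(term)
        term = ONTOLOGY[term]
    return path

def least_general_generalization(terms):
    """Most specific concept subsuming all given terms."""
    common = set(ancestors(terms[0]))
    for t in terms[1:]:
        common &= set(ancestors(t))
    # Among the shared ancestors, keep the most specific one
    # (the one with the longest path to the root).
    return max(common, key=lambda t: len(ancestors(t)))

def semantic_explanation(importances, top_k=2):
    """Generalize the top_k most important features into one concept."""
    top = sorted(importances, key=importances.get, reverse=True)[:top_k]
    return least_general_generalization(top)

# Features important for one class generalize to a broader concept.
print(semantic_explanation({"dog": 0.9, "cat": 0.8, "car": 0.1}))  # animal
```

In the thesis the ontology is real background knowledge (far larger than this toy example) and the per-class feature scores come from the explainer's output; the sketch only shows how the least general generalization principle turns feature-level scores into a class-specific semantic explanation.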

Keywords

explainable artificial intelligence;model explanations;explanation generalization;SHAP explanations;computer and information science;university study;bachelor's theses;

Data

Language: Slovenian
Year of publishing:
Typology: 2.11 - Undergraduate Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [T. Stepišnik Perdih]
UDC: 004.8(043.2)
COBISS: 78873603
Views: 347
Downloads: 47

Other data

Secondary language: English
Secondary title: Background knowledge for better explanation of machine learning models
Secondary abstract: With the wide adoption of complex black-box models, instance-based post hoc explanation tools, such as SHAP and LIME, became popular. These tools produce explanations as contributions of features to a given prediction. The obtained feature-level explanations are not necessarily understandable to human experts because of unclear connections with the background knowledge. We propose ReEx (Reasoning from Explanations), a method applicable to explanations generated by instance-level explainers. Using background knowledge in the form of ontologies, ReEx generalizes instance explanations with the least general generalization principle. The resulting symbolic descriptions are specific to individual classes and offer generalizations based on the explainer's output. The derived semantic explanations are potentially more informative, as they describe the key attributes in the context of background knowledge. ReEx is available as a Python library and is compatible with explanation approaches such as SHAP and LIME. To evaluate ReEx's performance, we define measures of generalization and overlap of explanations. We conduct experiments on three textual datasets.
Secondary keywords: explainable AI;model explanations;explanation generalization;SHAP explanations;machine learning;computer and information science;diploma;Machine learning;Artificial intelligence;Computer science;University and higher-education theses;
Type (COBISS): Bachelor thesis/paper
Study programme: 1000468
Embargo end date (OpenAIRE): 1970-01-01
Thesis comment: University of Ljubljana, Faculty of Computer and Information Science
Pages: 38 pp.
ID: 13257841