Doctoral dissertation
Martin Možina (Author), Ivan Bratko (Supervisor)

Abstract

Argument based machine learning

Keywords

artificial intelligence;machine learning;background knowledge;domain knowledge;argument based machine learning;ABML;CN2;ABCN2;extreme value correction;computer science;doctoral dissertations;theses;

Data

Language: English
Year of publication:
Typology: 2.08 - Doctoral dissertation
Organization: UL FRI - Fakulteta za računalništvo in informatiko
Publisher: [M. Možina]
UDC: 004.85(043.3)
COBISS: 7530836

Other data

Secondary language: Slovenian
Secondary title: Argumentirano strojno učenje
Sekundarni povzetek: The thesis presents a novel approach to machine learning called ABML (argument based machine learning). This approach combines machine learning from examples with concepts from the field of defeasible argumentation: learning methods use arguments together with learning examples when inducing a hypothesis. An argument represents a relation between the class value of a particular learning example and its attributes, and can be regarded as a partial explanation of this example. We require that the theory induced from the examples explains the examples in terms of the given arguments. Arguments thus constrain the combinatorial search among possible hypotheses and also direct the search towards hypotheses that are more comprehensible in light of the experts' background knowledge. Arguments are usually provided by domain experts. One of the main differences between ABML and other knowledge-intensive learning methods is in the way knowledge is elicited from these experts. Other methods require general domain knowledge, that is, knowledge valid for the entire domain; the problem is the difficulty experts face when trying to articulate such global domain knowledge. In contrast, since arguments contain knowledge specific only to certain situations, experts need to provide only local knowledge about specific examples. Experiments with ABML and other empirical observations show that experts have significantly fewer problems expressing such local knowledge. Furthermore, we define the ABML loop, which iteratively selects critical learning examples, namely examples that cannot be explained by the current hypothesis, and shows them to domain experts. Using this loop, the burden on the experts is further reduced (only some examples need to be explained) and only relevant knowledge is obtained (from difficult examples). We implemented the ABCN2 algorithm, an argument-based extension of the rule learning algorithm CN2. The basic version of ABCN2 ensures that rules classifying argumented examples contain the reasons of the given arguments in their condition part. We further improved the basic algorithm with a new method for the evaluation of rules, called extreme value correction (EVC), which reduces the optimism of evaluation measures caused by the large number of rules tested and evaluated during the learning process (known as the multiple comparison procedures problem). This feature is critical for ABCN2, since arguments given to different examples have different numbers of reasons and therefore constrain the search space differently for different rules. Moreover, as shown in the dissertation, using this method in CN2 (without arguments) results in significantly more accurate models compared to the original CN2. We conclude this work with a set of practical evaluations and comparisons of ABCN2 with other machine learning algorithms on several data sets. The results favour ABCN2 in all experiments; however, since each experiment requires a considerable amount of time owing to the involvement of domain experts, the number of experiments is not large enough to allow a valid statistical test. Therefore, we explored the capability of ABCN2 to deal with erroneous arguments and showed in the dissertation that using false arguments does not decrease the quality of the induced model. Hence, ABCN2 cannot perform worse than CN2, but it can perform better provided the quality of the arguments is high enough.
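
The abstract above describes the central ABCN2 constraint only in prose. Below is a minimal, illustrative Python sketch of that idea, assuming a simple covering-style rule learner: a rule grown from an argumented seed example keeps the argument's reasons in its condition part and is then refined until it excludes examples of other classes. The names (Example, Argument, grow_rule) and the toy data are hypothetical and not taken from the dissertation's CN2/ABCN2 implementation; rule evaluation with extreme value correction is omitted.

from dataclasses import dataclass

@dataclass
class Example:
    attrs: dict   # attribute name -> value
    cls: str      # class label

@dataclass
class Argument:
    reasons: dict  # attribute conditions given by the expert as the "because" part

def covers(conditions, example):
    # A rule covers an example if every condition matches the example's attributes.
    return all(example.attrs.get(a) == v for a, v in conditions.items())

def grow_rule(seed, argument, examples):
    # Start from the argument's reasons (the ABML constraint), then greedily add
    # conditions from the seed example until no example of another class is covered.
    conditions = dict(argument.reasons)
    while True:
        negatives = [e for e in examples
                     if e.cls != seed.cls and covers(conditions, e)]
        if not negatives:
            break
        for a, v in seed.attrs.items():
            if a not in conditions and any(e.attrs.get(a) != v for e in negatives):
                conditions[a] = v   # this condition excludes at least one negative example
                break
        else:
            break                   # no refinement left that helps; stop
    return conditions, seed.cls

# Toy usage: a credit example argued to be "approved" because income is high.
data = [
    Example({"income": "high", "debt": "low"}, "approved"),
    Example({"income": "high", "debt": "high"}, "rejected"),
    Example({"income": "low",  "debt": "low"}, "rejected"),
]
print(grow_rule(data[0], Argument({"income": "high"}), data))
# -> ({'income': 'high', 'debt': 'low'}, 'approved')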
Secondary keywords: artificial intelligence;machine learning;argumentation;rule learning;learning with background knowledge;domain knowledge;argument based machine learning;ABML;CN2;ABCN2;extreme value correction;computer science;dissertations;Machine learning;Dissertations;
File type: application/pdf
Type of work (COBISS): Doctoral dissertation
Comment on material: Univ. Ljubljana, Fak. za računalništvo in informatiko
Pages: XII, 193 pp.
ID: 23914189