Undergraduate thesis
Tim Kmecl (Author), Marko Robnik-Šikonja (Mentor)

Abstract

In recent years, large language models have been the most successful approach to machine understanding of natural language. An important problem in this field is natural language inference, which also requires models to have knowledge of the real world, while machine-generated explanations of the inferences give us additional insight into how the models work. In this thesis we tested several approaches to natural language inference for Slovene. We used two Slovene large language models, SloBERTa and SloT5, as well as the much larger English language model GPT-3.5-turbo. For training we used the Slovene dataset SI-NLI, and we additionally machine-translated 50,000 examples from the English dataset ESNLI. We fine-tuned SloBERTa on both datasets. Fine-tuned on SI-NLI, SloBERTa achieves a classification accuracy of 74.4 % on the SI-NLI test set; by first training on the ESNLI translations we improved the accuracy to 75.3 %. We found that the models make different kinds of errors than humans and that they generalize poorly across different domains of examples. We fine-tuned SloT5 on ESNLI to generate explanations for natural language inference. Less than a third of the explanations are appropriate: the model learns the common sentence patterns of explanations well, but most are semantically meaningless. We conclude that Slovene large language models with a few hundred million parameters are capable of finding and using language patterns, but their knowledge of the language is not tied to knowledge of reality. We also used the larger GPT-3.5-turbo model for classification and explanation generation. With zero-shot learning it achieves an accuracy of 56.5 % on the SI-NLI test set, and for correctly classified examples 81 % of its explanations are appropriate. Compared with the smaller Slovene models it shows a reasonably good understanding of reality, although it is limited by its weaker command of Slovene.
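The fine-tuning described above is a standard three-way sequence-classification setup. A minimal sketch of how it could look, assuming the Hugging Face checkpoint EMBEDDIA/sloberta and the dataset identifier cjvt/si_nli with premise, hypothesis and label columns; the dataset id, column names, label format and hyperparameters are illustrative assumptions, not the thesis's exact configuration:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "EMBEDDIA/sloberta"  # Slovene RoBERTa-style language model
LABELS = ["entailment", "neutral", "contradiction"]  # 3-way NLI labels

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=len(LABELS))

def encode(batch):
    # NLI input: premise and hypothesis tokenized as one sentence pair;
    # string labels mapped to integer class ids (label format assumed)
    enc = tokenizer(batch["premise"], batch["hypothesis"],
                    truncation=True, max_length=256)
    enc["labels"] = [LABELS.index(label) for label in batch["label"]]
    return enc

dataset = load_dataset("cjvt/si_nli")  # assumed id; adjust to a local SI-NLI copy
dataset = dataset.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sloberta-sinli",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())  # reports eval loss; accuracy needs a compute_metrics fn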

Keywords

natural language inference;large language models;Transformer architecture;SloBERTa;SloT5;ChatGPT;Slovene;model fine-tuning;interdisciplinary studies;university studies;undergraduate theses;

Data

Language: Slovenian
Year of publishing:
Typology: 2.11 - Undergraduate Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [T. Kmecl]
UDC: 004.85:81'322(043.2)
COBISS: 165503747

Other data

Secondary language: English
Secondary title: Natural language inference for Slovene using large language models
Secondary abstract: In recent years, large language models have been the most successful approach to natural language processing. An important problem in this field is natural language inference, which requires models to understand the real world to some degree. Requiring models to explain their reasoning offers us additional insight into their functioning. We tested several approaches for natural language inference in Slovene. We used two Slovene large language models, SloBERTa and SloT5, as well as the much larger English model GPT-3.5-turbo. The training data consisted of the Slovene dataset SI-NLI and an additional 50,000 machine-translated samples from the English dataset ESNLI. The SloBERTa model was fine-tuned on both datasets. Fine-tuned on SI-NLI, it achieves a classification accuracy of 74.4 % on the SI-NLI test set; pretraining it on the ESNLI translations improves the accuracy to 75.3 %. We observe that models make different types of errors compared to humans and that they generalize poorly across different domains of samples. SloT5 was fine-tuned on ESNLI to generate explanations for natural language inference samples. Less than a third of the explanations were appropriate: the model learns common sentence patterns from the domain well, but most of its explanations are semantically meaningless. We conclude that Slovene large language models with several hundred million parameters are capable of identifying and using language patterns, but their command of the language is not inherently tied to an understanding of reality. The even larger GPT-3.5-turbo was used for both classification and explanation generation. With zero-shot learning it achieves an accuracy of 56.5 % on the SI-NLI test set, and 81 % of its explanations for the correctly classified samples are appropriate. Compared with the smaller Slovene models, it shows a reasonably good understanding of reality, but is limited by its weaker Slovene proficiency.
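As a companion to the abstract, a hedged sketch of the zero-shot classification step with GPT-3.5-turbo via the OpenAI chat API; the Slovene prompt wording and the example pair are illustrative assumptions, not the prompt used in the thesis:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify(premise: str, hypothesis: str) -> str:
    # Zero-shot: no labelled examples in the prompt, only the task description,
    # asking for one of the three NLI labels plus a short explanation.
    prompt = (
        f"Premisa: {premise}\n"
        f"Hipoteza: {hypothesis}\n"
        "Ali hipoteza iz premise sledi (entailment), ji nasprotuje "
        "(contradiction) ali ni ne eno ne drugo (neutral)? "
        "Odgovori z eno od treh oznak in kratko razlago."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic output for evaluation
    )
    return response.choices[0].message.content

print(classify("Mačka spi na kavču.", "Žival počiva."))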
Secondary keywords: natural language inference;large language models;transformer architecture;SloBERTa;SloT5;GPT-3.5-turbo;ChatGPT;Slovene;fine-tuning;computer science;computer and information science;computer science and mathematics;interdisciplinary studies;diploma;computational linguistics;machine learning;natural language processing (computer science);university and higher-education theses;
Type (COBISS): Bachelor thesis/paper
Study programme: 1000407
Thesis comment: University of Ljubljana, Faculty of Computer and Information Science
Pages: 68
ID: 19909013