Bachelor's thesis
Abstract
With the development of social networks, the frequency of hate speech in user-generated content has increased. We focus on two of the currently most topical themes, LGBT and migrants. For predicting hate speech we use the BERT neural network and compare a multilingual model, trained on 104 different languages, with a trilingual model trained on Slovene, Croatian and English. We found that the trilingual model predicts hate speech approximately 5% more accurately in a language it was trained on. The multilingual model, with or without additional training, predicts hate speech more accurately than the trilingual model in languages on which the model was not originally trained. This indicates better cross-lingual transfer of the multilingual prediction model.
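As a rough sketch of the setup described above (not the thesis code; it assumes the Hugging Face transformers library, an illustrative model identifier, and the convention that label 1 marks hate speech), a multilingual BERT with a binary classification head could be loaded and applied to comments as follows. The classification head is untrained here and would first be fine-tuned on labelled hate speech data, with a trilingual Slovene/Croatian/English BERT substituted in for the comparison.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Multilingual BERT covering 104 languages; a trilingual
# (Slovene/Croatian/English) model could be substituted for the comparison.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

texts = ["Primer uporabniškega komentarja.", "An example user comment."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits       # shape: (batch_size, 2)
probs = torch.softmax(logits, dim=-1)
print(probs[:, 1].tolist())              # assumed convention: index 1 = hate speech probability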
Keywords
hate speech;BERT model;neural networks;cross-lingual transfer;machine learning;natural language processing;computer and information science;university study;bachelor's theses;
Data
Language: Slovenian
Year of publishing: 2020
Typology: 2.11 - Undergraduate Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [Ž. Pečovnik]
UDC: 004.85(043.2)
COBISS: 18357251
Views: 1077
Downloads: 278
Average score: 0 (0 votes)
Other data
Secondary language: English
Secondary title: Cross-lingual transfer of hate speech prediction models
Secondary abstract:
With the development of social networks, there has been a significant increase of hate speech in user-generated content. We focus on the two most discussed topics, LGBT and migrants. We use the BERT neural network for prediction of hate speech and make a comparison between the multilingual model, trained on 104 different languages, and a trilingual model, trained on Slovene, Croatian and English. Results show that the trilingual model is approximately 5% more accurate at predicting hate speech in a language that it was trained on. The multilingual model, with or without additional training, is more accurate than the trilingual model on languages that it was not originally trained on. This indicates better cross-lingual transfer of the multilingual model.
Secondary keywords: hate speech;BERT model;neural networks;cross-lingual transfer;machine learning;natural language processing;computer and information science;diploma;
Type (COBISS): Bachelor thesis/paper
Study programme: 1000468
Embargo end date (OpenAIRE): 1970-01-01
Thesis comment: Univ. v Ljubljani, Fak. za računalništvo in informatiko
Pages: 28 pp.
ID: 11779600