Undergraduate thesis
Marko Kaferle (Author), Sara Stančin (Mentor)

Abstract

The thesis, titled Izdelava orodja za shranjevanje in analizo spletnih medijskih vsebin (English title: Online media content retrieval and analysis), was conceived with the goal of enabling fast and systematic examination of public media content. We set the following objectives: to create a program for retrieving web content; to establish a connection between this program and a server for data storage; and to record and analyse the retrieved content. Our initial retrieval program was implemented in the Java programming language. We originally intended to focus on analysing Slovenian texts, but due to implementation difficulties we redirected our work to English. We used NLP (Natural Language Processing) techniques based on the Stanford CoreNLP library. The data was stored in an SQL (Structured Query Language) database in both its original and lemmatized form, with the retrieved content divided into several subsets. This was followed by analysis using purpose-built functions. The main part of the analysis tested query speed as a function of the method used: we started with simple queries over the entire database, continued with views, and later added indexes. The results matched our expectations: views and indexes significantly reduced query times.
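A minimal sketch of the lemmatization step described above, using the standard Stanford CoreNLP pipeline API; the class name, annotator configuration, and joining of lemmas into a single string are illustrative assumptions, not the thesis's actual code.

    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.pipeline.CoreDocument;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;
    import java.util.Properties;
    import java.util.stream.Collectors;

    public class Lemmatizer {
        private final StanfordCoreNLP pipeline;

        public Lemmatizer() {
            // English models; the thesis switched from Slovene to English
            // because a working Slovene analyser could not be implemented.
            Properties props = new Properties();
            props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
            this.pipeline = new StanfordCoreNLP(props);
        }

        // Returns the input text with every token replaced by its lemma,
        // i.e. the lemmatized form stored alongside the original text.
        public String lemmatize(String text) {
            CoreDocument doc = new CoreDocument(text);
            pipeline.annotate(doc);
            return doc.tokens().stream()
                    .map(CoreLabel::lemma)
                    .collect(Collectors.joining(" "));
        }
    }

Both the raw article text and the string returned by lemmatize would then be written to the SQL database, giving the two forms the abstract mentions.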

Keywords

Stanford CoreNLP;SQL;lemmatization;indexing;university studies;Multimedia;bachelor's theses;

Data

Language: Slovenian
Year of publishing:
Typology: 2.11 - Undergraduate Thesis
Organization: UL FE - Faculty of Electrical Engineering
Publisher: [M. Kaferle]
UDC: 004.455:81'322.2(043.2)
COBISS: 169455363

Other data

Secondary language: English
Secondary title: Online media content retrieval and analysis
Secondary abstract: The thesis was conceived with the idea of implementing a fast and systematic way of analysing media coverage, namely web articles. The main focus of the thesis addresses: the implementation of a program for obtaining web content; bridging said program with a server for data storage; and tracking and analysing the obtained data. Our initial program was implemented in Java. We originally wanted to focus on data in Slovene, but due to problems with implementing a working analyser, we shifted our focus to English. Using the Stanford CoreNLP library, we applied various NLP (Natural Language Processing) techniques. The data was reduced to its most basic form using lemmatization and then stored in an SQL (Structured Query Language) server. This was followed by experiments on the data, using purpose-built functions on specific subgroups. The main focus of the analysis was testing query speed under different conditions: the first step used a plain query, the second introduced views, and the last optimization added indexing. As predicted, runtime decreased significantly with each additional step.
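A hedged sketch of the three-step timing experiment the abstract describes, driven over plain JDBC; the connection string, table name (articles), columns (lemmas, published), and view/index names are hypothetical, since the record does not specify the actual schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryBenchmark {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the thesis does not publish its server setup.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=media", "user", "password");
                 Statement st = con.createStatement()) {

                // Step 1: a plain query over the whole table.
                time(st, "SELECT COUNT(*) FROM articles WHERE lemmas LIKE '%economy%'");

                // Step 2: the same search through a view that narrows the data to one subset.
                st.execute("CREATE VIEW recent_articles AS "
                        + "SELECT id, lemmas, published FROM articles "
                        + "WHERE published >= '2023-01-01'");
                time(st, "SELECT COUNT(*) FROM recent_articles WHERE lemmas LIKE '%economy%'");

                // Step 3: an index on the column the view filters by.
                st.execute("CREATE INDEX idx_articles_published ON articles (published)");
                time(st, "SELECT COUNT(*) FROM recent_articles WHERE lemmas LIKE '%economy%'");
            }
        }

        // Runs the query, drains the result set, and prints the elapsed wall-clock time.
        private static void time(Statement st, String sql) throws Exception {
            long start = System.nanoTime();
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) { /* consume rows */ }
            }
            System.out.printf("%8.1f ms  %s%n", (System.nanoTime() - start) / 1e6, sql);
        }
    }

In this setup the view shrinks the scanned row set and the index speeds up the view's date filter, which matches the stepwise runtime reduction reported in the abstract.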
Secondary keywords: Stanford CoreNLP;SQL;lemmatization;indexing;
Type (COBISS): Bachelor thesis/paper
Study programme: 1001001
Thesis comment: University of Ljubljana, Faculty of Electrical Engineering, Faculty of Computer and Information Science
Pages: XX, 50 pp.
ID: 20386152