Master's thesis
Žan Palčič (Author), Uroš Lotrič (Mentor), Patricio Bulić (Co-mentor)

Abstract

In this master's thesis we developed a software framework for implementing and accelerating fully connected neural networks on FPGAs. The neural networks are implemented in the high-level OpenCL framework, with a few adaptations for FPGA devices. Because FPGAs are efficient at fixed-point arithmetic and fully connected neural networks are adaptive, we used fixed-point numbers and approximate multipliers. We employed the iterative logarithmic multiplier (ILM) and the hybrid logarithmic multiplier LOBO. With simple iterative learning and approximate multipliers we were unable to train the neural networks. In inference, the neural network using the ILM approximate multiplier with one correction circuit performed best. With the approximate multipliers we were in most cases able to synthesise circuits with a higher clock frequency while achieving a more balanced usage of the different FPGA resources.
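The iterative logarithmic multiplier (ILM) mentioned above approximates a product from the leading-one terms of both operands and then reduces the error with correction circuits that apply the same approximation to the residuals. The C sketch below is only a minimal software illustration of this general scheme, assuming unsigned integer operands; the function names are ours and it is not the hardware description developed in the thesis.

#include <stdint.h>
#include <stdio.h>

/* Index of the leading one bit, i.e. floor(log2(x)); x must be non-zero. */
static int leading_one(uint32_t x) {
    int k = 0;
    while (x >>= 1) k++;
    return k;
}

/* One basic block of the iterative logarithmic multiplier: approximates
 * a*b as 2^(ka+kb) + (a - 2^ka)*2^kb + (b - 2^kb)*2^ka and returns the
 * residual operands, whose exact product is the remaining error. */
static uint64_t ilm_block(uint32_t a, uint32_t b, uint32_t *ra, uint32_t *rb) {
    int ka = leading_one(a);
    int kb = leading_one(b);
    uint32_t ea = a - (1u << ka);          /* a with its leading one removed */
    uint32_t eb = b - (1u << kb);
    *ra = ea;
    *rb = eb;
    return ((uint64_t)1 << (ka + kb)) + ((uint64_t)ea << kb) + ((uint64_t)eb << ka);
}

/* Approximate product with the given number of correction circuits; each
 * correction applies the basic block again to the previous residuals. */
uint64_t ilm_multiply(uint32_t a, uint32_t b, int corrections) {
    if (a == 0 || b == 0)
        return 0;
    uint32_t ra, rb;
    uint64_t p = ilm_block(a, b, &ra, &rb);
    for (int i = 0; i < corrections && ra != 0 && rb != 0; i++)
        p += ilm_block(ra, rb, &ra, &rb);
    return p;
}

int main(void) {
    uint32_t a = 200, b = 99;              /* exact product is 19800 */
    printf("no correction:  %llu\n", (unsigned long long)ilm_multiply(a, b, 0));
    printf("one correction: %llu\n", (unsigned long long)ilm_multiply(a, b, 1));
    return 0;
}

With the numbers above, the basic block alone returns 17280 and one correction circuit brings the result to 19776, close to the exact 19800; additional corrections shrink the error further at the cost of extra logic.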

Keywords

FPGA;OpenCL;adaptive algorithms;artificial neural network;approximate multipliers;computer science;computer and information science;master's theses;

Data

Language: Slovenian
Year of publishing:
Typology: 2.09 - Master's Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [Ž. Palčič]
UDC: 004(043.2)
COBISS: 1538500547
Views: 595
Downloads: 220

Other data

Secondary language: English
Secondary title: Programming adaptive algorithms on FPGA with OpenCL
Secondary abstract: The goal of this master's thesis was to develop a framework for the development and acceleration of fully connected neural networks on FPGAs. We implement fully connected neural networks using the Intel® FPGA SDK for OpenCL. To exploit the efficiency of the FPGA's fixed-point arithmetic on one hand and the adaptiveness of neural networks on the other, we use a fixed-point number representation and approximate multipliers. We perform experiments with the iterative logarithmic multiplier (ILM) and a hybrid logarithmic-Booth encoding multiplier (LOBO). Using simple iterative learning methods with approximate multipliers, we could not successfully train the neural networks. A configuration of the neural network using the ILM with one correction circuit shows the best results during inference. In most cases, with the approximate multipliers, the compiler synthesises circuits with a higher clock frequency and a more balanced usage of FPGA resources.
Secondary keywords: FPGA;OpenCL;adaptive algorithms;artificial neural network;approximate multipliers;computer science;computer and information science;master's degree;
Type (COBISS): Master's thesis/paper
Study programme: 1000471
Embargo end date (OpenAIRE): 1970-01-01
Thesis comment: University of Ljubljana, Faculty of Computer and Information Science
Pages: 69
ID: 11342851
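As a complement to the abstracts above, the sketch below shows how a single neuron of a fully connected layer reduces to fixed-point multiply-accumulate operations, which is where an approximate multiplier such as the ILM or LOBO would replace the exact product. The Q16.16 format, the ReLU activation and all names are illustrative assumptions, not parameters taken from the thesis.

#include <stdint.h>
#include <stdio.h>

/* Assumed Q16.16 fixed-point format: 16 integer bits, 16 fractional bits. */
typedef int32_t fix_t;
#define FRAC_BITS 16

static fix_t to_fix(double x)    { return (fix_t)(x * (1 << FRAC_BITS)); }
static double to_double(fix_t x) { return (double)x / (1 << FRAC_BITS); }

/* Exact fixed-point multiply; in the thesis setting this is the operation
 * that an approximate multiplier would implement instead. */
static fix_t fix_mul(fix_t a, fix_t b) {
    return (fix_t)(((int64_t)a * b) >> FRAC_BITS);
}

/* One neuron of a fully connected layer: weighted sum plus bias,
 * followed by a ReLU activation. */
static fix_t neuron(const fix_t *w, const fix_t *x, int n, fix_t bias) {
    fix_t acc = bias;
    for (int i = 0; i < n; i++)
        acc += fix_mul(w[i], x[i]);
    return acc > 0 ? acc : 0;
}

int main(void) {
    fix_t w[3] = { to_fix(0.5), to_fix(-1.25), to_fix(2.0) };
    fix_t x[3] = { to_fix(1.0), to_fix(0.4),  to_fix(0.75) };
    printf("neuron output: %f\n", to_double(neuron(w, x, 3, to_fix(0.1))));
    return 0;
}

Replacing fix_mul with an approximate multiplier leaves the rest of the layer unchanged.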