Undergraduate thesis
Klemen Pravdič (Author), Tomaž Dobravec (Mentor)

Abstract

Množenje redkih matrik na arhitekturi CUDA

Keywords

sparse matrix multiplication;sparse matrix;GPU processing;CUDA;computer science;university studies;diploma theses;

Data

Language: Slovenian
Year of publishing:
Typology: 2.11 - Undergraduate Thesis
Organization: UL FRI - Faculty of Computer and Information Science
Publisher: [K. Pravdič]
UDC: 512.643.122(043.2)
COBISS: 9985364
Views: 54
Downloads: 5

Other data

Secondary language: English
Secondary title: Sparse matrix multiplication on CUDA
Secondary abstract: Sparse matrix multiplication is a common operation in linear algebra and an important building block of other algorithms. A sparse matrix is a matrix populated primarily with zeros. This thesis presents two algorithms for sparse matrix multiplication: the row-column algorithm and the row-row (also known as row-wise) algorithm. For both algorithms it describes a sequential implementation on the CPU and a parallel implementation on the GPU. The algorithms were implemented in the C programming language; for the parallel implementation we used a GPU with the CUDA architecture. We describe the sparse-matrix storage formats (CSR, CSC, and COO) used in the implementations, and we describe the CUDA architecture to the extent needed to understand the parallel implementations. Running times of all implementations were measured and compared. For testing we used sparse matrices from the Matrix Market repository, along with sparse matrices of various densities and dimensions that we generated ourselves. On the GPU the product was stored both as a sparse and as a dense matrix. We found that the row-row algorithm is faster than the row-column algorithm, and that under certain conditions the parallel implementation of the row-row algorithm outperforms the sequential one. The performance of the parallel row-row algorithm depends on the density and dimensions of the input matrices; for efficient execution, input matrices with smaller dimensions should be denser. The row-row algorithm on CUDA performs better when groups of implicitly synchronized threads (warps) are used. An illustrative C sketch of the row-row algorithm is given below, after this record.
Secondary keywords: sparse matrix multiplication;sparse matrix;GPU processing;CUDA;computer science;diploma;
File type: application/pdf
Type (COBISS): Undergraduate thesis
Thesis comment: Univ. v Ljubljani, Fak. za računalništvo in informatiko
Pages: 51
ID: 24168169
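
As a companion to the abstract above, here is a minimal sketch in C of the row-row (row-wise) algorithm over the CSR storage format. It is not the thesis's code; the struct and function names (csr_matrix, spgemm_row_row_dense) are assumptions made for illustration. Row i of the product C = A * B is formed as a linear combination of rows of B, weighted by the non-zeros in row i of A, and is accumulated here into a dense product matrix, matching the dense-product variant mentioned in the abstract.

/*
 * Minimal illustrative sketch (not the thesis code): row-row (row-wise)
 * sparse matrix multiplication C = A * B with A and B in CSR format.
 * The identifiers below are assumptions made for this example.
 */
#include <stdlib.h>

typedef struct {
    int     rows, cols, nnz;  /* dimensions and number of non-zeros  */
    int    *row_ptr;          /* rows + 1 entries: start of each row */
    int    *col_idx;          /* nnz entries: column index per value */
    double *val;              /* nnz entries: the non-zero values    */
} csr_matrix;

/* Row i of C is a linear combination of rows of B, weighted by the
 * non-zeros of row i of A. The product is accumulated into a dense
 * rows-by-cols, row-major array, as in the dense-product variant. */
double *spgemm_row_row_dense(const csr_matrix *A, const csr_matrix *B)
{
    double *C = calloc((size_t)A->rows * B->cols, sizeof *C);
    if (C == NULL)
        return NULL;

    for (int i = 0; i < A->rows; ++i)
        for (int a = A->row_ptr[i]; a < A->row_ptr[i + 1]; ++a) {
            int    k   = A->col_idx[a];   /* column in A = row in B */
            double aik = A->val[a];
            for (int b = B->row_ptr[k]; b < B->row_ptr[k + 1]; ++b)
                C[(size_t)i * B->cols + B->col_idx[b]] += aik * B->val[b];
        }

    return C;
}

By contrast, the row-column algorithm computes each entry of C as a dot product of a row of A with a column of B, which pairs naturally with the CSC format also described in the thesis; according to the abstract, the parallel CUDA version of the row-row algorithm performs best when warps of implicitly synchronized threads are used.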