Bilkent University
Department of Computer Engineering


Parallel Stochastic Gradient Descent on Multicore Architectures


Selçuk Gülcan
MS Student
(Supervisor: Assoc. Prof. Dr. M. Mustafa Özdal)
(Co-Supervisor: Prof. Dr. Cevdet Aykanat)

Computer Engineering Department
Bilkent University

The focus of this thesis is the efficient parallelization of the Stochastic Gradient Descent (SGD) algorithm for matrix completion problems on multicore architectures. Asynchronous methods and block-based methods that use 2D grid partitioning for task-to-thread assignment are common approaches to shared-memory parallelization. However, asynchronous methods can suffer performance degradation due to their memory access patterns, whereas grid-based methods can suffer from load imbalance, especially when datasets are skewed and sparse. In this thesis, we first analyze the parallel performance bottlenecks of existing SGD algorithms in detail. We then propose new algorithms to alleviate these bottlenecks. Specifically, we propose bin-packing-based algorithms to balance thread loads under 2D partitioning. We also propose a grid-based asynchronous parallel SGD algorithm that improves cache utilization by changing the entry update order without affecting the factor update order, and by rearranging the memory layouts of the latent factor matrices. Our experiments show that the proposed methods perform significantly better than existing approaches on shared-memory multicore systems.
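For context, the serial SGD update that such parallel schemes distribute can be sketched as follows. This is a minimal NumPy illustration of standard SGD for matrix completion, not the thesis implementation; the function name, hyperparameters, and data layout are assumptions made for the sketch:

```python
import numpy as np

def sgd_matrix_completion(ratings, num_users, num_items, rank=4,
                          lr=0.05, reg=0.01, epochs=500, seed=0):
    """Sequential SGD for matrix completion (illustrative sketch).

    ratings: list of (user, item, value) triples of the observed entries.
    W (num_users x rank) and H (num_items x rank) are the latent
    factor matrices referred to in the abstract.
    """
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((num_users, rank))
    H = 0.1 * rng.standard_normal((num_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - W[u] @ H[i]
            # Gradient step on the two factor rows touched by this entry.
            # A parallel SGD must coordinate concurrent updates to these
            # rows, which is where grid partitioning and asynchrony come in.
            W[u], H[i] = (W[u] + lr * (err * H[i] - reg * W[u]),
                          H[i] + lr * (err * W[u] - reg * H[i]))
    return W, H
```

Each observed entry touches exactly one row of W and one row of H, so entries that share no row can be updated in parallel; 2D grid partitioning exploits this by assigning disjoint blocks of the rating matrix to threads.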


DATE: 15 September 2020, Tuesday @ 19:00