Department of Computer Engineering
Parallel Stochastic Gradient Descent with Sub-iterations on Distributed Memory Systems
(Supervisor: Assoc. Prof. Dr. M. Mustafa Özdal) (Co-advisor: Prof. Dr. Cevdet Aykanat)
Abstract: We investigate parallelization of the stochastic gradient descent (SGD) algorithm for solving the matrix completion problem. Prior work in the literature shows that stale data usage and communication costs are important concerns that affect the performance of parallel SGD implementations. We first briefly review the stochastic gradient descent algorithm and matrix partitioning for parallel SGD. Then we define the stale data problem and communication costs. To improve the performance of parallel SGD, we propose a new algorithm with intra-iteration synchronization (referred to as sub-iterations) that decreases communication costs and stale data usage. Experimental results show that using sub-iterations can decrease staleness by up to 97% and communication volume by up to 47%. Furthermore, using sub-iterations can improve test error by up to 60% compared to the conventional parallel SGD implementation that does not use sub-iterations.
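For readers unfamiliar with the underlying method, the following is a minimal sequential sketch of SGD for matrix completion: a ratings matrix R with observed entries (i, j, r) is factorized as R ≈ W·Hᵀ by repeatedly updating the factor rows touched by each observed entry. All function and parameter names here are illustrative assumptions; the talk itself concerns a distributed-memory parallel variant of this loop, where each full pass ("iteration") is split into synchronized sub-iterations.

```python
import numpy as np

def sgd_matrix_completion(entries, n_rows, n_cols, rank=2,
                          lr=0.05, reg=0.05, epochs=500, seed=0):
    """Sequential SGD for matrix completion (illustrative sketch only).

    entries: list of observed (row, col, value) triples.
    Returns factors W (n_rows x rank) and H (n_cols x rank) with R ~= W @ H.T.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_rows, rank))
    H = rng.normal(scale=0.1, size=(n_cols, rank))
    for _ in range(epochs):
        # One "iteration" = one pass over all observed entries.
        # A parallel version partitions these entries across processors;
        # sub-iterations would insert synchronization points within this pass.
        for i, j, r in entries:
            err = r - W[i] @ H[j]                  # error on entry (i, j)
            grad_w = err * H[j] - reg * W[i]       # regularized gradients
            grad_h = err * W[i] - reg * H[j]
            W[i] += lr * grad_w
            H[j] += lr * grad_h
    return W, H

# Toy example: a few observed entries of a 3x3 matrix.
entries = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 2, 1.0)]
W, H = sgd_matrix_completion(entries, 3, 3)
```

In the parallel setting discussed in the talk, the factor rows updated by one processor may be read by another before the update is communicated; such reads use stale data, and sub-iterations bound how stale they can become.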
DATE: Tuesday, 01 February 2022, 20:00 (via Zoom)