Department of Computer Engineering
Efficient Hybrid Parallel Stochastic Gradient Descent
(Supervisor: Prof. Dr. Cevdet Aykanat)
Many electronic retailers and content providers use recommendation systems to improve user experience and to provide personalized recommendations. These systems usually rely on rating matrices that store users' feedback for products. Since any single user is likely to rate only a limited number of products, these rating matrices are quite sparse. One of the commonly used methods in product recommendation systems is matrix completion, which relies on the latent factor model. Stochastic Gradient Descent (SGD) is a matrix completion technique for optimizing latent factor models. We introduce a hybrid parallel SGD framework and show the scalability of parallel SGD up to thousands of processors in a distributed-memory setting. We use MPI for inter-node communication and the pthread library for intra-node parallelism.
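For readers unfamiliar with the technique, the latent factor update at the core of SGD-based matrix completion can be sketched as follows. This is a minimal sequential illustration, not the hybrid MPI/pthreads framework presented in the talk; the function name, learning rate, and regularization constant are illustrative choices, not taken from the work itself.

```python
import numpy as np

def sgd_matrix_completion(ratings, n_users, n_items, k=8,
                          lr=0.05, reg=0.01, epochs=300, seed=0):
    """Sequential SGD for the latent factor model (illustrative sketch).

    ratings: list of (user, item, rating) triples — the observed entries
    of the sparse rating matrix.
    Returns factor matrices P (n_users x k) and Q (n_items x k) such that
    P[u] @ Q[i] approximates the rating r_ui.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]          # prediction error on one rating
            # Gradient step with L2 regularization on both factor vectors.
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```

Because each update touches only one user row and one item row, independent updates can in principle run concurrently — the property that parallel SGD schemes, including the hybrid approach in this talk, exploit.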
DATE: 04 November 2020, Wednesday @ 13:15