Contact: Surya Sankagiri
The goal of learning user preferences in recommender systems from data can be posed as the problem of 'completing' a user-item utility matrix from a subset of its entries. This modeling framework leads to a classical optimization problem called 'matrix factorization' [1]. Although this problem is nonconvex, gradient descent is well known to perform surprisingly well on it. In recent years, this observation has been justified theoretically [2].
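For concreteness, here is a minimal sketch of what gradient descent on the factored (Burer-Monteiro) matrix-completion objective looks like. All sizes, the rank, the sampling rate, the step size, and the iteration count are illustrative choices, not part of the project specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and rank.
n_users, n_items, rank = 100, 80, 5

# Ground-truth low-rank utility matrix M = U V^T.
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_items, rank))
M = U_true @ V_true.T

# Observe a random subset of entries (sparsity parameter 0.3).
mask = rng.random((n_users, n_items)) < 0.3

# Gradient descent on the factored objective
#   f(U, V) = 0.5 * || mask * (U V^T - M) ||_F^2,
# starting from a small random initialization.
U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_items, rank))
lr = 0.01
for _ in range(500):
    R = mask * (U @ V.T - M)          # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

# Error over ALL entries, i.e. how well the unobserved ones were completed.
rmse = np.sqrt(np.mean((U @ V.T - M) ** 2))
```

Despite nonconvexity, the iterates typically converge to a good completion of the full matrix, which is the phenomenon analyzed in [2].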
In our lab, we have been studying a related problem that involves learning from comparison data: users compare two items at a time and pick which of the two is better. A modeling framework similar to that of classical matrix factorization can be used, and again we obtain a nonconvex problem. We want to study the performance of gradient descent on this task via simulations. You will generate synthetic data according to a generative model and then run gradient descent on it. The main goal of this project is to analyze the performance of gradient descent as a function of the noise in the dataset, the sparsity of the dataset, and other properties of the data.
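To give a flavor of the simulation pipeline, the sketch below generates synthetic comparisons from one plausible generative model (Bradley-Terry-style choices driven by a low-rank utility matrix; the lab's exact model may differ) and runs gradient descent on the negative log-likelihood of the factored model. All sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes; latent utilities M = U V^T as in matrix factorization.
n_users, n_items, rank = 50, 40, 3
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_items, rank))
M = U_true @ V_true.T

# Each comparison: user u sees items (i, j) and picks i with
# probability sigmoid(M[u, i] - M[u, j])  (Bradley-Terry-style noise).
n_comp = 10000
users = rng.integers(0, n_users, n_comp)
items_i = rng.integers(0, n_items, n_comp)
items_j = rng.integers(0, n_items, n_comp)
keep = items_i != items_j
users, items_i, items_j = users[keep], items_i[keep], items_j[keep]
wins = rng.random(users.size) < sigmoid(M[users, items_i] - M[users, items_j])

def nll(U, V):
    """Mean negative log-likelihood of the observed comparisons."""
    diff = np.sum(U[users] * (V[items_i] - V[items_j]), axis=1)
    p = sigmoid(diff)
    return -np.mean(wins * np.log(p) + (~wins) * np.log(1.0 - p))

# Full-batch gradient descent on the factored logistic loss.
U = rng.normal(scale=0.3, size=(n_users, rank))
V = rng.normal(scale=0.3, size=(n_items, rank))
nll_init = nll(U, V)
lr = 10.0
for _ in range(1000):
    diff = np.sum(U[users] * (V[items_i] - V[items_j]), axis=1)
    g = (sigmoid(diff) - wins) / users.size   # d(mean loss)/d(diff)
    gU = np.zeros_like(U)
    gV = np.zeros_like(V)
    np.add.at(gU, users, g[:, None] * (V[items_i] - V[items_j]))
    np.add.at(gV, items_i, g[:, None] * U[users])
    np.add.at(gV, items_j, -g[:, None] * U[users])
    U -= lr * gU
    V -= lr * gV
nll_final = nll(U, V)
```

The experiments in the project would then sweep the noise level, the number of comparisons per user (sparsity), the rank, and so on, and track how well gradient descent recovers the preference structure.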
If you are interested, please contact me with your CV and transcript.
[1] Koren, Y., Bell, R. and Volinsky, C., 2009. Matrix factorization techniques for recommender systems. Computer, 42(8), pp.30-37.
[2] Zheng, Q. and Lafferty, J., 2016. Convergence analysis for rectangular matrix completion using Burer-Monteiro factorization and gradient descent. arXiv preprint arXiv:1605.07051.