Abstract
Methods from the field of optimization theory have played an important role in developing training algorithms for matrix factorization in recommender systems. Indeed, the realization that simple stochastic unconstrained gradient descent can be applied successfully to the factorization of the user-item rating matrix is responsible, to a great extent, for the recent research interest in this area and for the introduction of a plethora of matrix factorization methods. In this paper, motivated by earlier approaches to training neural networks, we introduce a constrained optimization framework for incorporating additional knowledge into the matrix factorization formalism, which can overcome certain drawbacks of the unconstrained minimization approach. We examine two types of such additional knowledge and derive two corresponding algorithms, one for each knowledge type incorporated into the constrained optimization framework. Both algorithms are designed to improve convergence and accuracy within the broader class of matrix factorization methods in recommender systems.
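To make the baseline referenced above concrete, the following is a minimal sketch of unconstrained stochastic gradient descent applied to user-item matrix factorization. The latent dimension, learning rate, regularization weight, and the small example matrix are illustrative assumptions, not values from the paper; the paper's own constrained algorithms are not reproduced here.

```python
import numpy as np

def sgd_factorize(R, k=2, lr=0.02, reg=0.02, epochs=500, seed=0):
    """Approximate the observed entries of R (0 = missing) by P @ Q.T
    using plain unconstrained SGD with L2 regularization.
    All hyperparameter defaults are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(epochs):
        rng.shuffle(observed)                    # stochastic pass order
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]          # prediction error
            # Gradient steps on the squared error plus L2 penalty
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy rating matrix (0 marks an unobserved entry) -- illustrative only
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0]])
P, Q = sgd_factorize(R)
```

After training, `P @ Q.T` approximates the observed ratings, and the unobserved entries serve as predictions; the constrained framework in the paper replaces this free minimization with one that enforces additional knowledge as constraints.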