A Gradient-based Adaptive Learning Framework for Efficient Personal Recommendation
Yue Ning, Naren Ramakrishnan
Abstract
Recommending personalized content to users is a long-standing challenge for many online services, including Facebook, Yahoo, LinkedIn, and Twitter. Traditional recommendation models, such as latent factor models and feature-based models, are usually trained over all users and optimize an "average" experience, yielding sub-optimal solutions for individual users. Although multi-task learning provides an opportunity to learn a personalized model per user, its learning algorithms are usually tailored to specific models (e.g., generalized linear models or matrix factorization), creating obstacles to a unified engineering interface, which is important for large Internet companies. In this paper, we present an empirical framework that learns user-specific personal models for content recommendation by utilizing gradient information from a global model. The proposed method can potentially benefit any model that can be optimized through gradients, offering a lightweight yet generic alternative to conventional multi-task learning algorithms for user personalization. We demonstrate the effectiveness of the proposed framework by incorporating it into three popular machine learning algorithms: logistic regression, gradient boosted decision trees, and matrix factorization. Our extensive empirical evaluation shows that the proposed framework can significantly improve the efficiency of personalized recommendation on real-world datasets.
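To make the core idea concrete, the following is a minimal sketch (not the authors' exact algorithm) of gradient-based personalization for one of the three models mentioned, logistic regression: a global model is first fit on pooled data from all users, and each personal model is then warm-started from the global weights and adapted with a few additional gradient steps on that user's own interactions. All function names, learning rates, and data shapes below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss for features X and labels y in {0, 1}.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def train(w, X, y, lr=0.5, steps=200):
    # Plain batch gradient descent starting from the given weights.
    for _ in range(steps):
        w = w - lr * grad(w, X, y)
    return w

rng = np.random.default_rng(0)

# Synthetic pooled data standing in for all users' interactions.
X_global = rng.normal(size=(500, 5))
y_global = (X_global @ np.ones(5) > 0).astype(float)

# 1. Fit the shared global model on the pooled data.
w_global = train(np.zeros(5), X_global, y_global)

# 2. Personalize: warm-start from the global weights and take a few
#    gradient steps on one user's own (smaller, differently labeled) data.
X_user = rng.normal(size=(30, 5))
y_user = (X_user @ np.array([2.0, -1.0, 1.0, 0.0, 0.0]) > 0).astype(float)
w_user = train(w_global.copy(), X_user, y_user, lr=0.2, steps=50)
```

Because the personal model only continues gradient descent from the global solution, the same recipe applies to any gradient-trainable model, which is the generality the abstract emphasizes.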
Yue Ning, Yue Shi, Liangjie Hong, Huzefa Rangwala, Naren Ramakrishnan: A Gradient-based Adaptive Learning Framework for Efficient Personal Recommendation. RecSys 2017: 23-31
Publication Details
- Date of publication: August 27, 2017
- Conference: RecSys 2017
- Page number(s): 23-31