recommenders: Best practices for recommendation systems from Microsoft


Open-source project name:

recommenders

Open-source project address:

https://gitee.com/mirrors/recommenders

Open-source project introduction:

Recommenders


What's New (January 13, 2022)

We have a new release Recommenders 1.0.0! The codebase has now migrated to TensorFlow versions 2.6 / 2.7 and to Spark version 3. In addition, there are a few changes in the dependencies and extras installed by pip (see this guide). We have also made improvements in the code and the CI / CD pipelines.

Starting with release 0.6.0, Recommenders has been available on PyPI and can be installed using pip!

Here you can find the PyPI page: https://pypi.org/project/recommenders/

Here you can find the package documentation: https://microsoft-recommenders.readthedocs.io/en/latest/

Introduction

This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples detail our learnings on five key tasks:

  • Prepare Data: Preparing and loading data for each recommender algorithm
  • Model: Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares (ALS) or eXtreme Deep Factorization Machines (xDeepFM).
  • Evaluate: Evaluating algorithms with offline metrics
  • Model Select and Optimize: Tuning and optimizing hyperparameters for recommender models
  • Operationalize: Operationalizing models in a production environment on Azure

Several utilities are provided in recommenders to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications. See the recommenders documentation.
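
For illustration, the sketch below shows how these utilities might be combined to load MovieLens and create a stratified train/test split. It is a minimal sketch, assuming the module layout of the recommenders 1.0.0 package (recommenders.datasets and recommenders.datasets.python_splitters); the column names and split ratio are only illustrative.

```python
# Minimal sketch of the dataset and splitting utilities (assumes recommenders>=1.0.0).
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_stratified_split

# Download MovieLens 100k as a pandas DataFrame with standard column names.
data = movielens.load_pandas_df(
    size="100k",
    header=["userID", "itemID", "rating", "timestamp"],
)

# Stratified 75/25 split per user, so every user appears in both sets.
train, test = python_stratified_split(
    data, ratio=0.75, col_user="userID", col_item="itemID", seed=42
)

print(f"train: {len(train)} rows, test: {len(test)} rows")
```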

For a more detailed overview of the repository, please see the documents on the wiki page.

Getting Started

Please see the setup guide for more details on setting up your machine locally, on a data science virtual machine (DSVM) or on Azure Databricks.

The installation of the recommenders package has been tested with Python 3.6 and 3.7, and currently does not support version 3.8 and above. It is recommended to install the package and its dependencies inside a clean environment (such as conda, venv or virtualenv).

To set up on your local machine:

  • To install core utilities, CPU-based algorithms, and dependencies:

    1. Ensure the software required for compilation and the Python libraries are installed.

      • On Linux this can be supported by adding:

        sudo apt-get install -y build-essential libpython<version>

        where <version> should be the Python version (e.g. 3.6).

      • On Windows you will need Microsoft C++ Build Tools.

    2. Create a conda or virtual environment. See the setup guide for more details.

    3. Within the created environment, install the package from PyPI:

      pip install --upgrade pip
      pip install --upgrade setuptools
      pip install recommenders[examples]
    4. Register your (conda or virtual) environment with Jupyter:

      python -m ipykernel install --user --name my_environment_name --display-name "Python (reco)"
    5. Start the Jupyter notebook server

      jupyter notebook
    6. Run the SAR Python CPU MovieLens notebook under the 00_quick_start folder. Make sure to change the kernel to "Python (reco)".

  • For additional options to install the package (support for GPU, Spark, etc.) see this guide.

NOTE - The Alternating Least Squares (ALS) notebooks require a PySpark environment to run. Please follow the steps in the setup guide to run these notebooks in a PySpark environment. For the deep learning algorithms, it is recommended to use a GPU machine and to follow the steps in the setup guide to set up Nvidia libraries.
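
For reference, here is a condensed sketch of the pattern the ALS quick-start notebook follows. It is only a sketch, assuming a working PySpark 3.x environment; it uses Spark's built-in pyspark.ml.recommendation.ALS, and the column names, toy data, and hyperparameters are illustrative.

```python
# Minimal ALS sketch (assumes PySpark 3.x is installed and configured).
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-quickstart").getOrCreate()

# Toy ratings DataFrame with userID, itemID, rating columns (illustrative data).
train = spark.createDataFrame(
    [(1, 10, 4.0), (1, 20, 3.0), (2, 10, 5.0), (2, 30, 1.0)],
    ["userID", "itemID", "rating"],
)

als = ALS(
    rank=10,
    maxIter=15,
    regParam=0.05,
    userCol="userID",
    itemCol="itemID",
    ratingCol="rating",
    coldStartStrategy="drop",
)
model = als.fit(train)

# Top-10 recommendations per user.
recs = model.recommendForAllUsers(10)
recs.show(truncate=False)
```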

NOTE for DSVM Users - Please follow the steps in the Dependencies setup - Set PySpark environment variables on Linux or MacOS and Troubleshooting for the DSVM sections if you encounter any issue.

DOCKER - Another easy way to try the recommenders repository and get started quickly is to build docker images suitable for different environments.

Algorithms

The table below lists the recommender algorithms currently available in the repository. Notebooks are linked under the Example column as Quick start, showcasing an easy to run example of the algorithm, or as Deep dive, explaining in detail the math and implementation of the algorithm.

| Algorithm | Type | Description | Example |
| --- | --- | --- | --- |
| Alternating Least Squares (ALS) | Collaborative Filtering | Matrix factorization algorithm for explicit or implicit feedback in large datasets, optimized for scalability and distributed computing capability. It works in the PySpark environment. | Quick start / Deep dive |
| Attentive Asynchronous Singular Value Decomposition (A2SVD)* | Collaborative Filtering | Sequential-based algorithm that aims to capture both long and short-term user preferences using attention mechanism. It works in the CPU/GPU environment. | Quick start |
| Cornac/Bayesian Personalized Ranking (BPR) | Collaborative Filtering | Matrix factorization algorithm for predicting item ranking with implicit feedback. It works in the CPU environment. | Deep dive |
| Cornac/Bilateral Variational Autoencoder (BiVAE) | Collaborative Filtering | Generative model for dyadic data (e.g., user-item interactions). It works in the CPU/GPU environment. | Deep dive |
| Convolutional Sequence Embedding Recommendation (Caser) | Collaborative Filtering | Algorithm based on convolutions that aim to capture both user's general preferences and sequential patterns. It works in the CPU/GPU environment. | Quick start |
| Deep Knowledge-Aware Network (DKN)* | Content-Based Filtering | Deep learning algorithm incorporating a knowledge graph and article embeddings for providing news or article recommendations. It works in the CPU/GPU environment. | Quick start / Deep dive |
| Extreme Deep Factorization Machine (xDeepFM)* | Hybrid | Deep learning based algorithm for implicit and explicit feedback with user/item features. It works in the CPU/GPU environment. | Quick start |
| FastAI Embedding Dot Bias (FAST) | Collaborative Filtering | General purpose algorithm with embeddings and biases for users and items. It works in the CPU/GPU environment. | Quick start |
| LightFM/Hybrid Matrix Factorization | Hybrid | Hybrid matrix factorization algorithm for both implicit and explicit feedbacks. It works in the CPU environment. | Quick start |
| LightGBM/Gradient Boosting Tree* | Content-Based Filtering | Gradient Boosting Tree algorithm for fast training and low memory usage in content-based problems. It works in the CPU/GPU/PySpark environments. | Quick start in CPU / Deep dive in PySpark |
| LightGCN | Collaborative Filtering | Deep learning algorithm which simplifies the design of GCN for predicting implicit feedback. It works in the CPU/GPU environment. | Deep dive |
| GeoIMC* | Hybrid | Matrix completion algorithm that takes into account user and item features using Riemannian conjugate gradients optimization and following a geometric approach. It works in the CPU environment. | Quick start |
| GRU4Rec | Collaborative Filtering | Sequential-based algorithm that aims to capture both long and short-term user preferences using recurrent neural networks. It works in the CPU/GPU environment. | Quick start |
| Multinomial VAE | Collaborative Filtering | Generative model for predicting user/item interactions. It works in the CPU/GPU environment. | Deep dive |
| Neural Recommendation with Long- and Short-term User Representations (LSTUR)* | Content-Based Filtering | Neural recommendation algorithm for recommending news articles with long- and short-term user interest modeling. It works in the CPU/GPU environment. | Quick start |
| Neural Recommendation with Attentive Multi-View Learning (NAML)* | Content-Based Filtering | Neural recommendation algorithm for recommending news articles with attentive multi-view learning. It works in the CPU/GPU environment. | Quick start |
| Neural Collaborative Filtering (NCF) | Collaborative Filtering | Deep learning algorithm with enhanced performance for user/item implicit feedback. It works in the CPU/GPU environment. | Quick start |
| Neural Recommendation with Personalized Attention (NPA)* | Content-Based Filtering | Neural recommendation algorithm for recommending news articles with personalized attention network. It works in the CPU/GPU environment. | Quick start |
| Neural Recommendation with Multi-Head Self-Attention (NRMS)* | Content-Based Filtering | Neural recommendation algorithm for recommending news articles with multi-head self-attention. It works in the CPU/GPU environment. | Quick start |
| Next Item Recommendation (NextItNet) | Collaborative Filtering | Algorithm based on dilated convolutions and residual network that aims to capture sequential patterns. It considers both user/item interactions and features. It works in the CPU/GPU environment. | Quick start |
| Restricted Boltzmann Machines (RBM) | Collaborative Filtering | Neural network based algorithm for learning the underlying probability distribution for explicit or implicit user/item feedback. It works in the CPU/GPU environment. | Quick start / Deep dive |
| Riemannian Low-rank Matrix Completion (RLRMC)* | Collaborative Filtering | Matrix factorization algorithm using Riemannian conjugate gradients optimization with small memory consumption to predict user/item interactions. It works in the CPU environment. | Quick start |
| Simple Algorithm for Recommendation (SAR)* | Collaborative Filtering | Similarity-based algorithm for implicit user/item feedback. It works in the CPU environment. | Quick start / Deep dive |
| Self-Attentive Sequential Recommendation (SASRec) | Collaborative Filtering | Transformer based algorithm for sequential recommendation. It works in the CPU/GPU environment. | Quick start |
| Short-term and Long-term Preference Integrated Recommender (SLi-Rec)* | Collaborative Filtering | Sequential-based algorithm that aims to capture both long and short-term user preferences using attention mechanism, a time-aware controller and a content-aware controller. It works in the CPU/GPU environment. | Quick start |
| Multi-Interest-Aware Sequential User Modeling (SUM)* | Collaborative Filtering | An enhanced memory network-based sequential user model which aims to capture users' multiple interests. It works in the CPU/GPU environment. | Quick start |
| Sequential Recommendation Via Personalized Transformer (SSEPT) | Collaborative Filtering | Transformer based algorithm for sequential recommendation with user embedding. It works in the CPU/GPU environment. | Quick start |
| Standard VAE | Collaborative Filtering | Generative model for predicting user/item interactions. It works in the CPU/GPU environment. | Deep dive |
| Surprise/Singular Value Decomposition (SVD) | Collaborative Filtering | Matrix factorization algorithm for predicting explicit rating feedback in small datasets. It works in the CPU/GPU environment. | Deep dive |
| Term Frequency - Inverse Document Frequency (TF-IDF) | Content-Based Filtering | Simple similarity-based algorithm for content-based recommendations with text datasets. It works in the CPU environment. | Quick start |
| Vowpal Wabbit (VW)* | Content-Based Filtering | Fast online learning algorithms, great for scenarios where user features / context are constantly changing. It uses the CPU for online learning. | Deep dive |
| Wide and Deep | Hybrid | Deep learning algorithm that can memorize feature interactions and generalize user features. It works in the CPU/GPU environment. | Quick start |
| xLearn/Factorization Machine (FM) & Field-Aware FM (FFM) | Hybrid | Quick and memory efficient algorithm to predict labels with user/item features. It works in the CPU/GPU environment. | Deep dive |

NOTE: * indicates algorithms invented/contributed by Microsoft.
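
As a concrete example of the quick-start pattern in the table above, the following is a minimal sketch of training SAR on a MovieLens split. It assumes the recommenders 1.0.0 API (recommenders.models.sar.SAR) and the train/test DataFrames from the splitting example earlier; the hyperparameters are illustrative, not the notebook's exact settings.

```python
# Minimal SAR sketch (assumes recommenders>=1.0.0 and the train/test split from above).
from recommenders.models.sar import SAR

model = SAR(
    col_user="userID",
    col_item="itemID",
    col_rating="rating",
    col_timestamp="timestamp",
    similarity_type="jaccard",      # item-item similarity metric
    time_decay_coefficient=30,      # 30-day half-life for time decay
    timedecay_formula=True,
)

model.fit(train)

# Top-10 items per user in the test set, excluding items already seen in training.
top_k = model.recommend_k_items(test, top_k=10, remove_seen=True)
print(top_k.head())
```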

Independent or incubating algorithms and utilities are candidates for the contrib folder. This will house contributions which may not easily fit into the core repository or need time to refactor or mature the code and add necessary tests.

| Algorithm | Type | Description | Example |
| --- | --- | --- | --- |
| SARplus * | Collaborative Filtering | Optimized implementation of SAR for Spark | Quick start |

Algorithm Comparison

We provide a benchmark notebook to illustrate how different algorithms could be evaluated and compared. In this notebook, the MovieLens dataset is split into training/test sets at a 75/25 ratio using a stratified split. A recommendation model is trained using each of the collaborative filtering algorithms below. We utilize empirical parameter values reported in the literature. For ranking metrics we use k=10 (top 10 recommended items). We run the comparison on a Standard NC6s_v2 Azure DSVM (6 vCPUs, 112 GB memory and 1 P100 GPU). Spark ALS is run in local standalone mode. In this table we show the results on MovieLens 100k, running the algorithms for 15 epochs.

| Algo | MAP | nDCG@k | Precision@k | Recall@k | RMSE | MAE | R2 | Explained Variance |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ALS | 0.004732 | 0.044239 | 0.048462 | 0.017796 | 0.965038 | 0.753001 | 0.255647 | 0.251648 |
| BiVAE | 0.146126 | 0.475077 | 0.411771 | 0.219145 | N/A | N/A | N/A | N/A |
| BPR | 0.132478 | 0.441997 | 0.388229 | 0.212522 | N/A | N/A | N/A | N/A |
| FastAI | 0.025503 | 0.147866 | 0.130329 | 0.053824 | 0.943084 | 0.744337 | 0.285308 | 0.287671 |
| LightGCN | 0.088526 | 0.419846 | 0.379626 | 0.144336 | N/A | N/A | N/A | N/A |
| NCF | 0.107720 | 0.396118 | 0.347296 | 0.180775 | N/A | N/A | N/A | N/A |
| SAR | 0.110591 | 0.382461 | 0.330753 | 0.176385 | 1.253805 | 1.048484 | -0.569363 | 0.030474 |
| SVD | 0.012873 | 0.095930 | 0.091198 | 0.032783 | 0.938681 | 0.742690 | 0.291967 | 0.291971 |
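
The ranking columns in this table (MAP, nDCG@k, Precision@k, Recall@k at k=10) can be computed with the evaluation utilities in the package. The sketch below is an assumption-laden illustration: it assumes the recommenders 1.0.0 module recommenders.evaluation.python_evaluation and reuses the test and top_k DataFrames from the SAR sketch above.

```python
# Ranking metrics at k=10 (assumes recommenders>=1.0.0 and the test/top_k DataFrames from the SAR sketch).
from recommenders.evaluation.python_evaluation import (
    map_at_k,
    ndcg_at_k,
    precision_at_k,
    recall_at_k,
)

# Shared column names and cutoff; 'prediction' is the score column produced by recommend_k_items.
kwargs = dict(
    col_user="userID",
    col_item="itemID",
    col_rating="rating",
    col_prediction="prediction",
    relevancy_method="top_k",
    k=10,
)

print("MAP:", map_at_k(test, top_k, **kwargs))
print("nDCG@10:", ndcg_at_k(test, top_k, **kwargs))
print("Precision@10:", precision_at_k(test, top_k, **kwargs))
print("Recall@10:", recall_at_k(test, top_k, **kwargs))
```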

Code of Conduct

This project adheres to Microsoft's Open Source Code of Conduct in order to foster a welcoming and inspiring community for all.

Contributing

This project welcomes contributions and suggestions. Before contributing, please see our contribution guidelines.

Build Status

These tests are the nightly builds, which run the smoke and integration tests. main is our principal branch and staging is our development branch. We use pytest for testing Python utilities in recommenders and papermill for the notebooks. For more information about the testing pipelines, please see the test documentation.

DSVM Build Status

The following tests run on a Linux DSVM daily.

| Build Type | Branch | Status | Branch | Status |
| --- | --- | --- | --- | --- |
| Linux CPU | main | Build Status | staging | Build Status |
| Linux GPU | main | Build Status | staging | Build Status |
| Linux Spark | main | Build Status | staging | Build Status |

Related projects

Reference papers

  • A. Argyriou, M. González-Fierro, and L. Zhang, "Microsoft Recommenders: Best Practices for Production-Ready Recommendation Systems", WWW 2020: International World Wide Web Conference Taipei, 2020. Available online: https://dl.acm.org/doi/abs/10.1145/3366424.3382692
  • L. Zhang, T. Wu, X. Xie, A. Argyriou, M. González-Fierro and J. Lian, "Building Production-Ready Recommendation System at Scale", ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2019 (KDD 2019), 2019.
  • S. Graham, J.K. Min, T. Wu, "Microsoft recommenders: tools to accelerate developing recommender systems", RecSys '19: Proceedings of the 13th ACM Conference on Recommender Systems, 2019. Available online: https://dl.acm.org/doi/10.1145/3298689.3346967
