Intro to Recommender Systems: Collaborative Filtering

This post walks through building a basic collaborative-filtering recommender on the MovieLens data, step by step, and is intended as a practical reference for developers working on the same problem.

Prerequisites for the experiments in this post:

  1. The MovieLens ml-100k dataset
  2. Jupyter notebook
  3. themoviedb.org API key

 The experiments in this post are translated from: http://blog.ethanrosenthal.com/2015/11/02/intro-to-collaborative-filtering/

 

  1. Import the Python packages
    import numpy as np
    import pandas as pd
  2. Change to the directory containing the MovieLens ml-100k data
    cd F:\Master\MachineLearning\kNN\ml-100k
  3. Read the data: each row of u.data has four fields: user id, item id, rating, and timestamp
    names = ['user_id', 'item_id', 'rating', 'timestamp']
    df = pd.read_csv('u.data', sep='\t', names=names)
    df.head()

     

       user_id  item_id  rating  timestamp
    0      196      242       3  881250949
    1      186      302       3  891717742
    2       22      377       1  878887116
    3      244       51       2  880606923
    4      166      346       1  886397596
  4. Count the total number of users and movies in the file
    n_users = df.user_id.unique().shape[0]
    n_items = df.item_id.unique().shape[0]
    print str(n_users) + ' users'
    print str(n_items) + ' items'
    943 users
    1682 items
  5. Build the user-item ratings matrix (an alternative construction follows the output)
    ratings = np.zeros((n_users, n_items))
    for row in df.itertuples():
        # row = (index, user_id, item_id, rating, timestamp); ids are 1-based
        ratings[row[1]-1, row[2]-1] = row[3]
    ratings
    array([[ 5.,  3.,  4., ...,  0.,  0.,  0.],
           [ 4.,  0.,  0., ...,  0.,  0.,  0.],
           [ 0.,  0.,  0., ...,  0.,  0.,  0.],
           ...,
           [ 5.,  0.,  0., ...,  0.,  0.,  0.],
           [ 0.,  0.,  0., ...,  0.,  0.,  0.],
           [ 0.,  5.,  0., ...,  0.,  0.,  0.]])
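As a side note (my addition, not in the original post), the same matrix can be built with a pandas pivot. This sketch assumes the user and item ids are dense, running 1..n with no gaps, which happens to hold for ml-100k:

    # Hypothetical alternative: pivot the DataFrame into a user x item matrix
    pivoted = df.pivot(index='user_id', columns='item_id', values='rating').fillna(0)
    print np.array_equal(ratings, pivoted.values)  # True when ids are dense 1..n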
  6. Compute the sparsity of the ratings matrix
    sparsity = float(len(ratings.nonzero()[0]))
    sparsity /= (ratings.shape[0] * ratings.shape[1])
    sparsity *= 100
    print 'Sparsity: {:4.2f}%'.format(sparsity)

    Sparsity: 6.30%

  7. With sparsity at 6.3%, 943 users, and 1682 items, each user has rated about 100 movies on average. Randomly move 10 ratings per user (roughly 10% of the data) into a test set, splitting the data into training and test sets (a small sanity check follows the code)
    def train_test_split(ratings):
        test = np.zeros(ratings.shape)
        train = ratings.copy()
        for user in xrange(ratings.shape[0]):
            test_ratings = np.random.choice(ratings[user, :].nonzero()[0],
                                            size=10,
                                            replace=False)
            train[user, test_ratings] = 0.
            test[user, test_ratings] = ratings[user, test_ratings]
        # Test and training are truly disjoint
        assert(np.all((train * test) == 0))
        return train, test
    train, test = train_test_split(ratings)
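One assumption worth making explicit (my note, not the original author's): np.random.choice with replace=False requires every user to have at least 10 ratings, so the split only works because ml-100k guarantees a minimum number of ratings per user. A quick check:

    ratings_per_user = (ratings != 0).sum(axis=1)
    print ratings_per_user.min()  # must be >= 10 for the split above to succeed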

     

  8. Cosine similarity between users or items can be computed with explicit for loops, but that runs very slowly in Python; expressing the same equation with NumPy's vectorized operations is far faster (a cross-check follows the timing below)
    def slow_similarity(ratings, kind='user'):
        if kind == 'user':
            axmax = 0
            axmin = 1
        elif kind == 'item':
            axmax = 1
            axmin = 0
        sim = np.zeros((ratings.shape[axmax], ratings.shape[axmax]))
        for u in xrange(ratings.shape[axmax]):
            for uprime in xrange(ratings.shape[axmax]):
                rui_sqrd = 0.
                ruprimei_sqrd = 0.
                for i in xrange(ratings.shape[axmin]):
                    sim[u, uprime] += ratings[u, i] * ratings[uprime, i]
                    rui_sqrd += ratings[u, i] ** 2
                    ruprimei_sqrd += ratings[uprime, i] ** 2
                sim[u, uprime] /= np.sqrt(rui_sqrd * ruprimei_sqrd)
        return sim

    def fast_similarity(ratings, kind='user', epsilon=1e-9):
        # epsilon -> small number for handling divide-by-zero errors
        if kind == 'user':
            sim = ratings.dot(ratings.T) + epsilon
        elif kind == 'item':
            sim = ratings.T.dot(ratings) + epsilon
        norms = np.array([np.sqrt(np.diagonal(sim))])
        return (sim / norms / norms.T)
    %timeit fast_similarity(train, kind='user')
    1 loop, best of 3: 171 ms per loop
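As a cross-check (my addition, not from the original post), the vectorized result can be compared against scikit-learn's cosine similarity, which treats each row as a vector and so matches kind='user':

    from sklearn.metrics.pairwise import cosine_similarity

    # Differences come only from the tiny epsilon term, so allclose should hold
    print np.allclose(fast_similarity(train, kind='user'), cosine_similarity(train))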
  9. Compute the user similarity and item similarity matrices, and print the first 4 rows and columns of the item similarity matrix

    user_similarity = fast_similarity(train, kind='user')
    item_similarity = fast_similarity(train, kind='item')
    print item_similarity[:4, :4]
    [[ 1.          0.42176871  0.3440934   0.4551558 ]
     [ 0.42176871  1.          0.2889324   0.48827863]
     [ 0.3440934   0.2889324   1.          0.33718518]
     [ 0.4551558   0.48827863  0.33718518  1.        ]]
  10. Predict the ratings. predict_fast_simple uses NumPy's vectorized math functions and is much faster (the underlying formula is written out after the timings below)

    def predict_slow_simple(ratings, similarity, kind='user'):
        pred = np.zeros(ratings.shape)
        if kind == 'user':
            for i in xrange(ratings.shape[0]):
                for j in xrange(ratings.shape[1]):
                    pred[i, j] = similarity[i, :].dot(ratings[:, j]) \
                                 / np.sum(np.abs(similarity[i, :]))
            return pred
        elif kind == 'item':
            for i in xrange(ratings.shape[0]):
                for j in xrange(ratings.shape[1]):
                    pred[i, j] = similarity[j, :].dot(ratings[i, :].T) \
                                 / np.sum(np.abs(similarity[j, :]))
            return pred

    def predict_fast_simple(ratings, similarity, kind='user'):
        if kind == 'user':
            return similarity.dot(ratings) / np.array([np.abs(similarity).sum(axis=1)]).T
        elif kind == 'item':
            return ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
    %timeit predict_slow_simple(train, user_similarity, kind='user')
    1 loop, best of 3: 1min 52s per loop
    %timeit predict_fast_simple(train, user_similarity, kind='user')
    1 loop, best of 3: 279 ms per loop 
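Both functions implement the similarity-weighted average of every other user's rating, shown here for the user-based case (the item-based case is symmetric):

$$\hat{r}_{uj} = \frac{\sum_{u'} \mathrm{sim}(u, u')\, r_{u'j}}{\sum_{u'} \left|\mathrm{sim}(u, u')\right|}$$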
  11. Compute the MSE with sklearn: first filter out the meaningless zero (unrated) entries in the matrices, then call sklearn's mean_squared_error function directly (the formula follows the results)

    from sklearn.metrics import mean_squared_error

    def get_mse(pred, actual):
        # Only compare entries that were actually rated (nonzero).
        pred = pred[actual.nonzero()].flatten()
        actual = actual[actual.nonzero()].flatten()
        return mean_squared_error(pred, actual)
    item_prediction = predict_fast_simple(train, item_similarity, kind='item')
    user_prediction = predict_fast_simple(train, user_similarity, kind='user')
    print 'User-based CF MSE: ' + str(get_mse(user_prediction, test))
    print 'Item-based CF MSE: ' + str(get_mse(item_prediction, test))
    User-based CF MSE: 8.44170489251
    Item-based CF MSE: 11.5717812485
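For reference, the reported metric is the mean squared error over the N held-out (nonzero) test ratings:

$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left(\hat{r}_i - r_i\right)^2$$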
  12. To improve the prediction MSE, use only the data of the k users most similar to the target user. Run top-k prediction and compute the MSE (a note on the argsort slice follows the function)

    def predict_topk(ratings, similarity, kind='user', k=40):
        pred = np.zeros(ratings.shape)
        if kind == 'user':
            for i in xrange(ratings.shape[0]):
                # Indices of the k most similar users, in descending order
                top_k_users = [np.argsort(similarity[:, i])[:-k-1:-1]]
                for j in xrange(ratings.shape[1]):
                    pred[i, j] = similarity[i, :][top_k_users].dot(ratings[:, j][top_k_users])
                    pred[i, j] /= np.sum(np.abs(similarity[i, :][top_k_users]))
        if kind == 'item':
            for j in xrange(ratings.shape[1]):
                top_k_items = [np.argsort(similarity[:, j])[:-k-1:-1]]
                for i in xrange(ratings.shape[0]):
                    pred[i, j] = similarity[j, :][top_k_items].dot(ratings[i, :][top_k_items].T)
                    pred[i, j] /= np.sum(np.abs(similarity[j, :][top_k_items]))
        return pred
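The slice [:-k-1:-1] reads the last k entries of the ascending argsort in reverse, i.e. the indices of the k largest similarities in descending order. A tiny illustration (my example, not from the post):

    x = np.array([0.1, 0.9, 0.4, 0.7])
    print np.argsort(x)            # [0 2 3 1], ascending by value
    print np.argsort(x)[:-3-1:-1]  # [1 3 2], indices of the top 3, descending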
    pred = predict_topk(train, user_similarity, kind='user', k=40)
    print 'Top-k User-based CF MSE: ' + str(get_mse(pred, test))
    pred = predict_topk(train, item_similarity, kind='item', k=40)
    print 'Top-k Item-based CF MSE: ' + str(get_mse(pred, test))

     

    Output:

    Top-k User-based CF MSE: 6.47059807493
    Top-k Item-based CF MSE: 7.75559095568

    Compared with the earlier approach, the MSE has already dropped considerably.
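The only change from the earlier formula is that the sum now runs over the neighborhood N_k(u) of the k most similar users rather than over everyone:

$$\hat{r}_{uj} = \frac{\sum_{u' \in N_k(u)} \mathrm{sim}(u, u')\, r_{u'j}}{\sum_{u' \in N_k(u)} \left|\mathrm{sim}(u, u')\right|}$$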

  13. To push the MSE lower still, try different values of k in search of the minimum, and visualize the results with matplotlib
    k_array = [5, 15, 30, 50, 100, 200]
    user_train_mse = []
    user_test_mse = []
    item_test_mse = []
    item_train_mse = []

    for k in k_array:
        user_pred = predict_topk(train, user_similarity, kind='user', k=k)
        item_pred = predict_topk(train, item_similarity, kind='item', k=k)
        user_train_mse += [get_mse(user_pred, train)]
        user_test_mse += [get_mse(user_pred, test)]
        item_train_mse += [get_mse(item_pred, train)]
        item_test_mse += [get_mse(item_pred, test)]
    %matplotlib inline
    import matplotlib.pyplot as plt
    import seaborn as sns
    sns.set()

    pal = sns.color_palette("Set2", 2)
    plt.figure(figsize=(8, 8))
    plt.plot(k_array, user_train_mse, c=pal[0], label='User-based train', alpha=0.5, linewidth=5)
    plt.plot(k_array, user_test_mse, c=pal[0], label='User-based test', linewidth=5)
    plt.plot(k_array, item_train_mse, c=pal[1], label='Item-based train', alpha=0.5, linewidth=5)
    plt.plot(k_array, item_test_mse, c=pal[1], label='Item-based test', linewidth=5)
    plt.legend(loc='best', fontsize=20)
    plt.xticks(fontsize=16);
    plt.yticks(fontsize=16);
    plt.xlabel('k', fontsize=30);
    plt.ylabel('MSE', fontsize=30);

     

     
    The plot shows that on the test set the MSE bottoms out at roughly k = 50 for user-based and k = 15 for item-based collaborative filtering.

     

  14. Subtract the bias (each user's or item's mean rating) before prediction and compute the MSE (the formula follows the results)
    def predict_nobias(ratings, similarity, kind='user'):
        if kind == 'user':
            user_bias = ratings.mean(axis=1)
            ratings = (ratings - user_bias[:, np.newaxis]).copy()
            pred = similarity.dot(ratings) / np.array([np.abs(similarity).sum(axis=1)]).T
            pred += user_bias[:, np.newaxis]
        elif kind == 'item':
            item_bias = ratings.mean(axis=0)
            ratings = (ratings - item_bias[np.newaxis, :]).copy()
            pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
            pred += item_bias[np.newaxis, :]
        return pred

     

    user_pred = predict_nobias(train, user_similarity, kind='user')
    print 'Bias-subtracted User-based CF MSE: ' + str(get_mse(user_pred, test))
    item_pred = predict_nobias(train, item_similarity, kind='item')
    print 'Bias-subtracted Item-based CF MSE: ' + str(get_mse(item_pred, test))
    Bias-subtracted User-based CF MSE: 8.67647634245
    Bias-subtracted Item-based CF MSE: 9.71148412222
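The prediction here subtracts each user's mean rating before the weighted sum and adds the target user's mean back afterwards:

$$\hat{r}_{uj} = \bar{r}_u + \frac{\sum_{u'} \mathrm{sim}(u, u')\,\left(r_{u'j} - \bar{r}_{u'}\right)}{\sum_{u'} \left|\mathrm{sim}(u, u')\right|}$$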



  15. Combine the top-k and bias-subtraction approaches: compute the user-based and item-based MSE for k = 5, 15, 30, 50, 100, 200, and plot the results with matplotlib (the combined formula follows the plotting code)
    def predict_topk_nobias(ratings, similarity, kind='user', k=40):
        pred = np.zeros(ratings.shape)
        if kind == 'user':
            user_bias = ratings.mean(axis=1)
            ratings = (ratings - user_bias[:, np.newaxis]).copy()
            for i in xrange(ratings.shape[0]):
                top_k_users = [np.argsort(similarity[:, i])[:-k-1:-1]]
                for j in xrange(ratings.shape[1]):
                    pred[i, j] = similarity[i, :][top_k_users].dot(ratings[:, j][top_k_users])
                    pred[i, j] /= np.sum(np.abs(similarity[i, :][top_k_users]))
            pred += user_bias[:, np.newaxis]
        if kind == 'item':
            item_bias = ratings.mean(axis=0)
            ratings = (ratings - item_bias[np.newaxis, :]).copy()
            for j in xrange(ratings.shape[1]):
                top_k_items = [np.argsort(similarity[:, j])[:-k-1:-1]]
                for i in xrange(ratings.shape[0]):
                    pred[i, j] = similarity[j, :][top_k_items].dot(ratings[i, :][top_k_items].T)
                    pred[i, j] /= np.sum(np.abs(similarity[j, :][top_k_items]))
            pred += item_bias[np.newaxis, :]
        return pred
    k_array = [5, 15, 30, 50, 100, 200]
    user_train_mse = []
    user_test_mse = []
    item_test_mse = []
    item_train_mse = []

    for k in k_array:
        user_pred = predict_topk_nobias(train, user_similarity, kind='user', k=k)
        item_pred = predict_topk_nobias(train, item_similarity, kind='item', k=k)
        user_train_mse += [get_mse(user_pred, train)]
        user_test_mse += [get_mse(user_pred, test)]
        item_train_mse += [get_mse(item_pred, train)]
        item_test_mse += [get_mse(item_pred, test)]
    pal = sns.color_palette("Set2", 2)
    plt.figure(figsize=(8, 8))
    plt.plot(k_array, user_train_mse, c=pal[0], label='User-based train', alpha=0.5, linewidth=5)
    plt.plot(k_array, user_test_mse, c=pal[0], label='User-based test', linewidth=5)
    plt.plot(k_array, item_train_mse, c=pal[1], label='Item-based train', alpha=0.5, linewidth=5)
    plt.plot(k_array, item_test_mse, c=pal[1], label='Item-based test', linewidth=5)
    plt.legend(loc='best', fontsize=20)
    plt.xticks(fontsize=16);
    plt.yticks(fontsize=16);
    plt.xlabel('k', fontsize=30);
    plt.ylabel('MSE', fontsize=30);
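Putting the two ideas together, the combined predictor restricts the bias-subtracted weighted sum to the top-k neighborhood:

$$\hat{r}_{uj} = \bar{r}_u + \frac{\sum_{u' \in N_k(u)} \mathrm{sim}(u, u')\,\left(r_{u'j} - \bar{r}_{u'}\right)}{\sum_{u' \in N_k(u)} \left|\mathrm{sim}(u, u')\right|}$$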



  16. Import requests and fetch the link with requests.get; the request follows IMDB's redirect, so the movie ID can be read from the final URL
    import requests
    import json

    response = requests.get('http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)')
    print response.url.split('/')[-2]
    Output (the IMDB movie ID): tt0114709
  17. This step needs a themoviedb API key: query the themoviedb.org API for the poster file path of a given movie ID
    # Get base url filepath structure. w185 corresponds to size of movie poster.
    headers = {'Accept': 'application/json'}
    payload = {'api_key': 'YOUR_API_KEY'}  # fill in your own API key
    response = requests.get("http://api.themoviedb.org/3/configuration", params=payload, headers=headers)
    response = json.loads(response.text)
    base_url = response['images']['base_url'] + 'w185'

    def get_poster(imdb_url, base_url):
        # Get IMDB movie ID
        response = requests.get(imdb_url)
        movie_id = response.url.split('/')[-2]
        # Query themoviedb.org API for movie poster path.
        movie_url = 'http://api.themoviedb.org/3/movie/{:}/images'.format(movie_id)
        headers = {'Accept': 'application/json'}
        payload = {'api_key': 'YOUR_API_KEY'}  # fill in your own API key
        response = requests.get(movie_url, params=payload, headers=headers)
        try:
            file_path = json.loads(response.text)['posters'][0]['file_path']
        except:
            # IMDB movie ID is sometimes no good. Need to get correct one.
            movie_title = imdb_url.split('?')[-1].split('(')[0]
            payload['query'] = movie_title
            response = requests.get('http://api.themoviedb.org/3/search/movie',
                                    params=payload, headers=headers)
            movie_id = json.loads(response.text)['results'][0]['id']
            payload.pop('query', None)
            movie_url = 'http://api.themoviedb.org/3/movie/{:}/images'.format(movie_id)
            response = requests.get(movie_url, params=payload, headers=headers)
            file_path = json.loads(response.text)['posters'][0]['file_path']
        return base_url + file_path
    from IPython.display import Image
    from IPython.display import display

    toy_story = 'http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)'
    Image(url=get_poster(toy_story, base_url))

     

    The movie's poster image is displayed inline.

     

  18. Load the movie information from the MovieLens u.item file, compute the k movies most similar to a given movie, and display their posters

    # Load in movie data
    idx_to_movie = {}
    with open('u.item', 'r') as f:
        for line in f.readlines():
            info = line.split('|')
            # Field 0 is the 1-based movie id; field 4 is the IMDB URL
            idx_to_movie[int(info[0])-1] = info[4]

    def top_k_movies(similarity, mapper, movie_idx, k=6):
        return [mapper[x] for x in np.argsort(similarity[movie_idx, :])[:-k-1:-1]]
    idx = 0 # Toy Story
    movies = top_k_movies(item_similarity, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)

     

    display(*posters)


  19. Display the posters of the k (default 6) movies most similar to the movie with idx 1 (GoldenEye)
    idx = 1 # GoldenEye
    movies = top_k_movies(item_similarity, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)

     

  20. Display the posters of the k (default 6) movies most similar to the movie with idx 20 (Muppet Treasure Island)
    idx = 20 # Muppet Treasure Island
    movies = top_k_movies(item_similarity, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)

     

  21. Display the posters of the k (default 6) movies most similar to the movie with idx 40 (Billy Madison)
    idx = 40 # Billy Madison
    movies = top_k_movies(item_similarity, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)
  22. The recommendations are not always convincing: the movie most similar to Star Wars is Toy Story? Very popular movies like Star Wars get high predicted scores across the board in this system. Switching to a different similarity measure, the Pearson correlation, removes some of this bias (the formula follows the code)
    from sklearn.metrics import pairwise_distances
    # Convert from distance to similarity
    item_correlation = 1 - pairwise_distances(train.T, metric='correlation')
    item_correlation[np.isnan(item_correlation)] = 0.
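The Pearson correlation is cosine similarity computed on mean-centered vectors, which cancels out the uniform popularity boost that inflates similarities for blockbusters:

$$\mathrm{sim}(u, u') = \frac{\sum_i \left(r_{ui} - \bar{r}_u\right)\left(r_{u'i} - \bar{r}_{u'}\right)}{\sqrt{\sum_i \left(r_{ui} - \bar{r}_u\right)^2}\,\sqrt{\sum_i \left(r_{u'i} - \bar{r}_{u'}\right)^2}}$$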

     

  23. Once again predict the k most similar movies for the movies with idx 0, 1, 20, and 40, this time using the correlation-based similarity
    idx = 0 # Toy Story
    movies = top_k_movies(item_correlation, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)
    idx = 1 # GoldenEye
    movies = top_k_movies(item_correlation, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)
    idx = 20 # Muppet Treasure Island
    movies = top_k_movies(item_correlation, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)
    idx = 40 # Billy Madison
    movies = top_k_movies(item_correlation, idx_to_movie, idx)
    posters = tuple(Image(url=get_poster(movie, base_url)) for movie in movies)
    display(*posters)

     

 

For reference, the cosine similarity used throughout this post is:

$$\mathrm{sim}(u, u') = \cos(\theta) = \frac{r_u \cdot r_{u'}}{\|r_u\|\,\|r_{u'}\|} = \frac{\sum_i r_{ui}\, r_{u'i}}{\sqrt{\sum_i r_{ui}^2}\,\sqrt{\sum_i r_{u'i}^2}}$$

That concludes this introduction to recommender systems with collaborative filtering. Hopefully it serves as a useful reference.


