Scrapy Tutorial with MySQL: A Python Scrapy Crawler That Stores NBA Player Data in a MySQL Database

This article walks through building a Python Scrapy crawler that collects NBA player data and stores it in a MySQL database. Hopefully it serves as a useful reference for anyone solving a similar problem; follow along if you want to build it yourself.

Finding the URL to crawl

The NBA China site loads its player list from a JSON endpoint rather than rendering it in static HTML. Watching the Network tab of the browser's developer tools while the player list page loads reveals the request to https://china.nba.com/static/data/league/playerlist.json, which is the URL the spider will fetch.

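Before writing any Scrapy code it is worth confirming the shape of that JSON by hand. A minimal sketch using the requests library (an assumption; any HTTP client works) that fetches the player list and peeks at the keys the spider below relies on:

import json
import requests  # assumes requests is installed

url = 'https://china.nba.com/static/data/league/playerlist.json'
resp = requests.get(url)
# The spider depends on this payload -> players nesting
players = json.loads(resp.text)['payload']['players']
# Inspect the per-player keys used later (firstNameEn, height, ...)
print(players[0]['playerProfile'].keys())
print(players[0]['teamProfile'].keys())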

Preliminary setup

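The project skeleton comes from Scrapy's command-line tools: scrapy startproject nbaProject creates the package with items.py, settings.py and pipelines.py, and scrapy genspider nbaSpider nba.com, run inside the project directory, generates the spider stub that is edited below.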

Open the project in PyCharm and start writing the crawler files.

The fields file: items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class NbaprojectItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # Every field is declared the same way --> scrapy.Field()
    # English name
    engName = scrapy.Field()
    # Chinese name
    chName = scrapy.Field()
    # Height
    height = scrapy.Field()
    # Weight
    weight = scrapy.Field()
    # Country name in English
    contryEn = scrapy.Field()
    # Country name in Chinese
    contryCh = scrapy.Field()
    # Years in the NBA
    experience = scrapy.Field()
    # Jersey number
    jerseyNo = scrapy.Field()
    # Draft year
    draftYear = scrapy.Field()
    # Team name in English
    engTeam = scrapy.Field()
    # Team name in Chinese
    chTeam = scrapy.Field()
    # Position
    position = scrapy.Field()
    # Conference (e.g. Southeast)
    displayConference = scrapy.Field()
    # Division
    division = scrapy.Field()
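A scrapy.Item behaves much like a dict, except that only declared fields can be assigned; a quick illustration using the class above (the value is made up):

item = NbaprojectItem()
item['engName'] = 'LeBronJames'
print(item['engName'])        # LeBronJames
# item['nickname'] = 'King'   # raises KeyError: field was never declared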

The spider file

import scrapy
import json
from nbaProject.items import NbaprojectItem


class NbaspiderSpider(scrapy.Spider):
    name = 'nbaSpider'
    allowed_domains = ['nba.com']
    # The URL(s) crawled first; several can be listed
    # start_urls = ['http://nba.com/']
    start_urls = ['https://china.nba.com/static/data/league/playerlist.json']

    # Handle the response for each crawled URL
    def parse(self, response):
        # The site returns JSON, so decode it with the json module first
        data = json.loads(response.text)['payload']['players']
        # Counter for progress output
        count = 1
        for i in data:
            # Create an item object to hold the data; this is the
            # NbaprojectItem imported above
            item = NbaprojectItem()
            # English name
            item['engName'] = str(i['playerProfile']['firstNameEn'] + i['playerProfile']['lastNameEn'])
            # Chinese name
            item['chName'] = str(i['playerProfile']['firstName'] + i['playerProfile']['lastName'])
            # Country name in English
            item['contryEn'] = str(i['playerProfile']['countryEn'])
            # Country name in Chinese
            item['contryCh'] = str(i['playerProfile']['country'])
            # Height
            item['height'] = str(i['playerProfile']['height'])
            # Weight
            item['weight'] = str(i['playerProfile']['weight'])
            # Years in the NBA
            item['experience'] = str(i['playerProfile']['experience'])
            # Jersey number
            item['jerseyNo'] = str(i['playerProfile']['jerseyNo'])
            # Draft year
            item['draftYear'] = str(i['playerProfile']['draftYear'])
            # Team name in English
            item['engTeam'] = str(i['teamProfile']['code'])
            # Team name in Chinese
            item['chTeam'] = str(i['teamProfile']['displayAbbr'])
            # Position
            item['position'] = str(i['playerProfile']['position'])
            # Conference
            item['displayConference'] = str(i['teamProfile']['displayConference'])
            # Division
            item['division'] = str(i['teamProfile']['division'])
            # Progress output
            print("Yielded item", count)
            count += 1
            # Hand the item back to the engine -> pipeline file
            yield item
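A side note: on Scrapy 2.2 and newer, the response object can decode JSON itself, so the json import becomes unnecessary. A sketch of the equivalent start of parse:

    def parse(self, response):
        # response.json() is available on Scrapy >= 2.2 for JSON responses
        data = response.json()['payload']['players']
        # ... rest of the loop unchanged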

The settings file: enabling the pipeline


# Scrapy settings for nbaProject project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

# ---------- unchanged section ----------
BOT_NAME = 'nbaProject'

SPIDER_MODULES = ['nbaProject.spiders']
NEWSPIDER_MODULE = 'nbaProject.spiders'
# ---------- unchanged section ----------

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'nbaProject (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ---------- modified section ----------
# The project template enables this by default; commenting it out falls back
# to Scrapy's core default (False), so the spider does not honor robots.txt
# ROBOTSTXT_OBEY = True
# ---------- modified section ----------

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'nbaProject.middlewares.NbaprojectSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'nbaProject.middlewares.NbaprojectDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Enable the pipeline
# ---------- modified section ----------
ITEM_PIPELINES = {
    'nbaProject.pipelines.NbaprojectPipeline': 300,
}
# ---------- modified section ----------

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
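The value 300 is the pipeline's order: Scrapy runs enabled pipelines from the lowest number to the highest (by convention 0 to 1000), which only matters once more than one is enabled. For example, with a second, purely hypothetical pipeline:

ITEM_PIPELINES = {
    'nbaProject.pipelines.NbaprojectPipeline': 300,
    # Hypothetical cleanup pipeline that would run first:
    # 'nbaProject.pipelines.ValidationPipeline': 100,
}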

The pipeline file: writing the fields to MySQL

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import pymysql


class NbaprojectPipeline:
    def __init__(self):
        # Connect to the database; replace the placeholders with your own
        # host, user, password and database name (3306 is MySQL's default port)
        self.connect = pymysql.connect(host='your-host', user='your-user',
                                       passwd='your-password',
                                       db='your-database', port=3306)
        # Get a cursor
        self.cursor = self.connect.cursor()
        # Create a table to hold the item fields
        createTableSql = """
        create table if not exists `nbaPlayer`(
            playerId INT UNSIGNED AUTO_INCREMENT,
            engName varchar(80),
            chName varchar(20),
            height varchar(20),
            weight varchar(20),
            contryEn varchar(50),
            contryCh varchar(20),
            experience int,
            jerseyNo int,
            draftYear int,
            engTeam varchar(50),
            chTeam varchar(50),
            position varchar(50),
            displayConference varchar(50),
            division varchar(50),
            primary key(playerId)
        )charset=utf8;
        """
        # Execute the SQL statement
        self.cursor.execute(createTableSql)
        self.connect.commit()
        print("Finished creating the table")

    # Every item yielded by the spider is handled here
    def process_item(self, item, spider):
        # Print the item so the crawl is easy to follow
        print(item)
        # SQL statement; playerId is AUTO_INCREMENT, so null is passed for it
        insert_sql = """
        insert into nbaPlayer(
            playerId, engName, chName, height,
            weight, contryEn, contryCh, experience,
            jerseyNo, draftYear, engTeam, chTeam,
            position, displayConference, division
        ) VALUES (null,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
        """
        # Insert the row (the item fields fill the %s placeholders)
        self.cursor.execute(insert_sql, (item['engName'], item['chName'],
                                         item['height'], item['weight'],
                                         item['contryEn'], item['contryCh'],
                                         item['experience'], item['jerseyNo'],
                                         item['draftYear'], item['engTeam'],
                                         item['chTeam'], item['position'],
                                         item['displayConference'],
                                         item['division']))
        # Commit; without it nothing is saved to the database
        self.connect.commit()
        print("Row committed successfully!")
        # Return the item so any later pipeline can see it too
        return item

    # Close the connection when the spider finishes
    def close_spider(self, spider):
        self.connect.close()

Running the spider

From a terminal at the project root, start the crawl with scrapy crawl nbaSpider (the name defined in the spider class).

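Because the project is developed in PyCharm here, a small launcher script is a handy alternative to the terminal; a sketch, where main.py is a name chosen for illustration rather than something Scrapy generates:

# main.py, placed at the project root
from scrapy import cmdline

# Equivalent to running "scrapy crawl nbaSpider" in a terminal
cmdline.execute('scrapy crawl nbaSpider'.split())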

While the spider runs, the scraped items scroll past on screen as each one is printed and committed.


Checking the data in the database

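If you would rather check from Python than from a MySQL client, a minimal sketch that reuses the pipeline's connection details (fill in your own) and prints a few rows:

import pymysql

connect = pymysql.connect(host='your-host', user='your-user',
                          passwd='your-password', db='your-database',
                          port=3306)
cursor = connect.cursor()
cursor.execute("select chName, engTeam, position from nbaPlayer limit 5")
for row in cursor.fetchall():
    print(row)
connect.close()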

And just like that, the player data is scraped and stored.
