Using Scrapy + Selenium: Fetching Live-Room Info from Huya's LoL Section

2024-02-18 17:10

This post walks through using Scrapy together with Selenium to scrape live-room information from the LoL section of Huya; hopefully it is a useful reference for anyone tackling a similar problem.

Table of Contents

Series

Chapter 1: Using Scrapy + Selenium: Fetching Live-Room Info from Huya's LoL Section

Preface

1. Code

        huya_novideo.py

        items.py

        middlewares.py

        pipelines.py

        selenium_request.py

        settings.py

2. Results

Summary


Preface

  • This post covers how to use Selenium inside Scrapy; the steps are as follows (the assumed project layout is sketched after this list).
    • Create a selenium_request.py file defining the SeleniumRequest marker class.
    • In the spider (huya_novideo.py), decide which requests need Selenium.
    • Configure items.py / middlewares.py / pipelines.py / settings.py.
  • This code is for learning and exchange only; do not use it commercially.
  • In case of any infringement, please get in touch so it can be removed promptly.
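
For orientation, here is the project layout the files below assume: a standard "scrapy startproject huya" tree with selenium_request.py added by hand. The tree itself is my reconstruction from the imports, not shown in the original:

huya/
├── scrapy.cfg
└── huya/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── selenium_request.py   # added by hand; holds the marker Request class
    ├── settings.py
    └── spiders/
        └── huya_novideo.py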

1. Code

huya_novideo.py

import scrapy
from ..items import HuyaItem
from huya.selenium_request import SeleniumRequest
import datetime
# import json
# from scrapy.linkextractor import LinkExtractor


class HuyaNovideoSpider(scrapy.Spider):
    name = 'huya_novideo'
    # allowed_domains = ['www.huya.com']
    start_urls = ['https://www.huya.com/g/lol']
    # le = LinkExtractor(restrict_xpath='')

    def start_requests(self):
        print('* handling start URL')
        # the start page needs browser rendering, so mark it as a SeleniumRequest
        yield SeleniumRequest(
            url=self.start_urls[0],
            callback=self.parse,
        )

    def parse(self, response):
        print(str(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')) + ' parsing ' + str(response.url) + ' ...')
        live_infos = response.xpath('//li[@class="game-live-item"]')
        for info in live_infos:
            huya = HuyaItem()
            huya['liver'] = info.xpath('.//span[@class="txt"]//i[@class="nick"]/text()').extract_first()
            huya['title'] = info.xpath('.//a[@class="title"]/text()').extract_first()
            huya['tag_right'] = str(info.xpath('string(.//*[@class="tag-right"])').extract_first()).strip()
            huya['tag_left'] = str(info.xpath('.//*[@class="tag-leftTopImg"]/@src').extract_first()).strip()
            huya['popularity'] = info.xpath('.//i[@class="js-num"]//text()').extract_first()
            huya['href'] = info.xpath('.//a[contains(@class,"video-info")]/@href').extract_first()
            huya['tag_recommend'] = info.xpath('.//em[@class="tag tag-recommend"]/text()').extract_first()
            # imgs = info.xpath('.//img[@class="pic"]/@src').extract_first()
            # if 'https' not in imgs:
            #     huya['imgs'] = 'https:' + str(imgs)
            # else:
            #     huya['imgs'] = imgs
            yield huya
        # follow the remaining pages; the middleware turns these into "click next page" actions
        data_page = response.xpath('//div[@id="js-list-page"]/@data-pages').extract_first()
        for page in range(2, int(data_page) + 1):
            yield SeleniumRequest(
                url='https://www.huya.com/cache.php?m=LiveList&do=getLiveListByPage'
                    '&gameId=1&tagAll=0&callback=getLiveListJsonpCallback&page=' + str(page),
                callback=self.parse,
            )
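
One fragile spot in the pagination: if the pager element does not render, extract_first() returns None and int(data_page) raises a TypeError. A minimal guard, offered as a suggested tweak rather than the original code:

        # guard against a missing pager before converting to int
        data_page = response.xpath('//div[@id="js-list-page"]/@data-pages').extract_first()
        if data_page and data_page.isdigit():
            for page in range(2, int(data_page) + 1):
                yield SeleniumRequest(
                    url='https://www.huya.com/cache.php?m=LiveList&do=getLiveListByPage'
                        '&gameId=1&tagAll=0&callback=getLiveListJsonpCallback&page=' + str(page),
                    callback=self.parse,
                )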

items.py

import scrapy


class HuyaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    liver = scrapy.Field()
    title = scrapy.Field()
    tag_right = scrapy.Field()
    tag_left = scrapy.Field()
    popularity = scrapy.Field()
    href = scrapy.Field()
    tag_recommend = scrapy.Field()
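
Scrapy Items act like dicts with a fixed key set, so a typo in a field name fails loudly instead of silently creating a new key. A quick illustration (the values are made up):

huya = HuyaItem()
huya['liver'] = 'some_streamer'   # fine: field declared on HuyaItem
huya['livr'] = 'oops'             # raises KeyError: HuyaItem does not support field: livr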

middlewares.py

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from huya.selenium_request import SeleniumRequest
from selenium.webdriver import Chrome
from selenium import webdriver
from scrapy.http.response.html import HtmlResponse
import time
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By


class HuyaSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class HuyaDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        if 'cache' in request.url:
            # paging request: click the "next page" button in the already-open browser
            next_page = self.web.find_element(By.XPATH, '//div[@id="js-list-page"]/div//a[@class="laypage_next"]')
            action = ActionChains(self.web)
            action.move_to_element(next_page)
            action.click()
            action.perform()
            time.sleep(3)
            return HtmlResponse(url=request.url, status=200, headers=None,
                                body=self.web.page_source, request=request, encoding="utf-8")
        elif isinstance(request, SeleniumRequest):
            # handle SeleniumRequest by loading the page in the browser
            self.web.get(request.url)
            print('fetched with selenium: ' + str(request.url))
            time.sleep(3)
            return HtmlResponse(url=request.url, status=200, headers=None,
                                body=self.web.page_source, request=request, encoding="utf-8")
        else:
            return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
        # start the Chrome driver once, when the spider opens
        option = webdriver.ChromeOptions()
        option.add_experimental_option("excludeSwitches", ['enable-automation', 'enable-logging'])
        self.web = Chrome(options=option)
        print('selenium driver started')
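
One gap in the middleware above: the Chrome instance started in spider_opened is never shut down, so a chromedriver process can outlive the crawl. A minimal sketch of a cleanup hook via Scrapy's spider_closed signal (my addition, not in the original code):

    # inside HuyaDownloaderMiddleware

    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(s.spider_closed, signal=signals.spider_closed)
        return s

    def spider_closed(self, spider):
        # quit the shared browser when the spider finishes
        if getattr(self, 'web', None) is not None:
            self.web.quit()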

pipelines.py

from itemadapter import ItemAdapter
from scrapy.pipelines.images import ImagesPipeline
import scrapy
import datetime


class HuyaPipeline:
    def open_spider(self, spider):
        self.f = open('huya' + str(datetime.datetime.now().strftime('%Y%m%d %H%M%S')) + '.csv',
                      mode='w', encoding='utf-8-sig')
        # self.f.write(f"'liver', 'title', 'tag_right', 'tag_left', 'popularity', 'href', 'tag_recommend'")
        self.f.write('liver, title, tag_right, tag_left, popularity, href, tag_recommend\n')

    def process_item(self, item, spider):
        self.f.write(f"{item['liver']}, {item['title']}, {item['tag_right']},"
                     f"{item['tag_left']},{item['popularity']},{item['href']},{item['tag_recommend']}\n")
        return item

    def close_spider(self, spider):
        self.f.close()


class FigurePipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        return scrapy.Request(item['imgs'])

    # Scrapy 2.4+ passes item as a keyword-only argument here
    def file_path(self, request, response=None, info=None, *, item=None):
        file_name = str(item['liver']) + str(datetime.datetime.now().strftime('%Y%m%d %H%M%S'))
        return f"img/{file_name}"

    def item_completed(self, results, item, info):
        print(results)
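
Writing rows with raw f-strings breaks as soon as a title contains a comma. A sketch of the same pipeline using Python's csv module for proper quoting (a hypothetical replacement of mine, not the author's code):

import csv
import datetime


class HuyaCsvPipeline:
    FIELDS = ['liver', 'title', 'tag_right', 'tag_left', 'popularity', 'href', 'tag_recommend']

    def open_spider(self, spider):
        stamp = datetime.datetime.now().strftime('%Y%m%d %H%M%S')
        self.f = open(f'huya{stamp}.csv', mode='w', encoding='utf-8-sig', newline='')
        self.writer = csv.DictWriter(self.f, fieldnames=self.FIELDS)
        self.writer.writeheader()

    def process_item(self, item, spider):
        # dict(item) works because scrapy.Item supports the mapping protocol
        self.writer.writerow(dict(item))
        return item

    def close_spider(self, spider):
        self.f.close()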

selenium_request.py

from scrapy import Request


# Marker class: adds no behaviour of its own, but lets the downloader
# middleware recognise "render in the browser" requests via isinstance().
class SeleniumRequest(Request):
    pass
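
The whole trick is in the type, not the body: because SeleniumRequest subclasses Request, Scrapy schedules, deduplicates, and routes it like any other request, while the downloader middleware can single it out with one isinstance check and hand it to the browser.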

settings.py

# Scrapy settings for huya project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'huya'

SPIDER_MODULES = ['huya.spiders']
NEWSPIDER_MODULE = 'huya.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#    'huya.middlewares.HuyaSpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'huya.middlewares.HuyaDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'huya.pipelines.HuyaPipeline': 300,
    # 'huya.pipelines.FigurePipeline': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

IMAGES_STORE = './imgs_huya'
LOG_LEVEL = 'ERROR'
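
With everything configured, the crawl is launched from the project root in the usual Scrapy way:

scrapy crawl huya_novideo

Worth noting: CONCURRENT_REQUESTS = 32 buys little here, because every SeleniumRequest is served by the single shared Chrome instance in the downloader middleware, which effectively serializes those requests.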

2. Results

(Results screenshot: for privacy reasons, part of the information is masked.)


Summary

This post only scratches the surface of using Scrapy with Selenium; Scrapy offers many more customization hooks that are worth exploring further.

Appendix: Scrapy 2.5.0 中文文档 (the Chinese-language Scrapy documentation).
