This article introduces a Python crawler that uses XPath to grab the cover images of Bilibili's recommended videos. Hopefully it offers a useful reference for developers facing the same problem; follow along to see how it works!
Scraping the covers on the recommendation page involves no JavaScript rendering, so you can simply locate the <img> elements with XPath.
Recommendation page URL: https://www.bilibili.com/list/recommend/1.html. Turning to page x just means requesting x.html.
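As a quick sketch of that pagination pattern (the page count of 3 is just an example value):

urls = ['https://www.bilibili.com/list/recommend/{}.html'.format(i)
        for i in range(1, 3 + 1)]  # yields 1.html, 2.html, 3.html
print(urls)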
To grab a cover, locate the src attribute inside the <img> element, then request that src and download the image to a local file.
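A minimal sketch of that download step, assuming the src value is a full http URL (the URL and file name below are hypothetical placeholders):

import requests

src = 'http://example.com/cover.jpg'  # hypothetical src taken from an <img>
res = requests.get(src)
if res.ok:
    # res.content holds the raw image bytes
    with open('cover.jpg', 'wb') as f:
        f.write(res.content)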
The XPath expression for extracting the src paths:
"//div[@class='zr_recomd']/ul/li/div/a/img/@src"
Complete code:
# Grab the video covers from Bilibili's recommendation pages
import requests
from lxml import etree

header = {'User-Agent': 'chrome'}  # request header
pic_save_ad = 'F:\\pyCharm\\spider\\pic_dirs'  # local directory for downloaded covers


def save_to_disk(pic):
    if not pic:
        return None
    for pic_item in pic:
        # Build a flat file name from the URL: drop the scheme, remove dots
        # and slashes, then append the .jpg extension.
        # (str.lstrip/rstrip strip character sets, not prefixes/suffixes,
        # so slicing is used here instead.)
        name = pic_item
        if name.startswith('http://'):
            name = name[len('http://'):]
        name = name.replace('.', '').replace('/', '')
        if name.endswith('jpg'):
            name = name[:-len('jpg')]
        pic_name = name + '.jpg'
        file_path = "{}\\{}".format(pic_save_ad, pic_name)
        print(file_path)
        try:
            # Adding the headers caused 502 responses while omitting them
            # worked; possibly this User-Agent was flagged earlier by the
            # anti-crawling mechanism?
            res = requests.get(pic_item)
            if res.ok:
                img = res.content
                with open(file_path, 'wb') as f1:
                    f1.write(img)
        except Exception:
            print('Failed to load this img!')


def solve(page):
    urls = ['https://www.bilibili.com/list/recommend/{}.html'.format(i)
            for i in range(1, page + 1)]
    for url in urls:
        text = requests.get(url, headers=header).text
        html = etree.HTML(text)
        links = html.xpath("//div[@class='zr_recomd']/ul/li/div/a/img/@src")
        # print(len(links))
        save_to_disk(links)


if __name__ == '__main__':
    k = int(input('Input the number of pages to scrape: '))
    solve(k)
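One caveat worth checking when running this yourself: on some pages, <img> src attributes are protocol-relative (they start with // instead of http://). Whether that applies here is an assumption, not something shown above, but if it does, a small helper normalizes the URL before downloading:

def normalize(src):
    # Hypothetical helper: prepend a scheme to protocol-relative URLs
    # such as '//example.com/cover.jpg'; leave absolute URLs untouched.
    return 'http:' + src if src.startswith('//') else src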
That concludes this article on using XPath in a Python crawler to grab the covers of Bilibili's recommended videos. We hope it proves helpful to fellow developers!