Preface: a recent task from my senior labmate needs Sina Entertainment news text; the idea is to use semi-supervised learning to find relations between people and enrich the training corpus for later experiments. Crawling text is also a basic research skill. I had previously tried Scrapy on a dynamic page and failed; this time, crawling Sina Entertainment, I understood things a little better, so I am writing it down.
Previous post: a failed attempt at crawling a dynamic page.
Environment: Ubuntu 14.04, Python 2.7 under Anaconda, Scrapy.
1. Installation
Installing with pip does the trick:
pip install scrapy
2. Creating the project
scrapy startproject sinaent
Figure 2-1: Creating the project
What the generated files are for:
- scrapy.cfg: the project's configuration file.
- sinaent/: the project's Python module.
- sinaent/items.py: the project's item definitions; they can be imported in the crawling file with from sinaent.items import <class name>.
- sinaent/pipelines.py: the project's pipelines file.
- sinaent/settings.py: the project's settings file.
- sinaent/spiders/: the directory that holds the spider code.
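On disk the generated project should look roughly like this (the standard layout produced by scrapy startproject, not a listing copied from my machine):

sinaent/
    scrapy.cfg
    sinaent/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py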
Running scrapy -h shows Scrapy's seven global commands:
Figure 2-2: scrapy commands
Scrapy provides two kinds of commands: project commands, which must be run inside a Scrapy project, and global commands, which do not. The global commands are startproject, settings, runspider, shell, fetch, view and version, of which startproject and shell are the most commonly used. The project commands are crawl, check, list, edit, parse, genspider, deploy and bench. The command-line tool chapter of the official tutorial describes what each one does.
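As a quick illustration of the global commands (just the invocations; output not shown):

scrapy version                                             # print the installed Scrapy version
scrapy fetch --nolog http://ent.sina.com.cn > page.html    # download a page the way the crawler sees it
scrapy view http://ent.sina.com.cn                         # open the downloaded page in a browser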
The shell command, e.g. scrapy shell http://ent.sina.com.cn, drops you into an interactive environment where a few functions let you freely pull out whatever content you want; it is a good place to learn how to extract exactly what you need. For example:
Figure 2-3: What the scrapy shell command can do
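A minimal session against the front page might go like this (the XPath expressions are only illustrative; what they return depends on the live page):

scrapy shell http://ent.sina.com.cn
>>> response.url                                 # the URL that was fetched
>>> response.xpath("//title/text()").extract()   # the page title
>>> response.xpath("//a/@href").extract()[:10]   # the first ten link targets
>>> response.xpath("//p/text()").extract()       # all paragraph text, as used later in the spider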
3. Defining Items
An Item is the container for the scraped data and is used much like a Python dict: create a subclass of scrapy.Item and declare class attributes of type scrapy.Field to define the item. See the Items section of the Chinese official tutorial for details.
textItem is a class I defined myself; declare whichever Field instances you need inside it, e.g. title, link and text if those are what you want to save. I am not very familiar with this part, so after extracting and parsing I wrote the text straight to files and did not use items at all, which is probably not the safest approach. In the crawling file the class is imported with from sinaent.items import textItem, where items is the name of this file and sinaent the name of the package it sits in; in the spider you instantiate it with item = textItem(), set item["title"] = "the scraped title", and return item.

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy.item import Item, Field

class SinaentItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass

class textItem(Item):
    text = Field()
    title = Field()
    link = Field()
    desc = Field()
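As a small sketch of that usage inside a spider callback (the spider class and name below are made up for illustration; the field names are the ones declared above):

from scrapy.spiders import Spider
from sinaent.items import textItem

class ArticleItemSpider(Spider):          # hypothetical spider, for illustration only
    name = "sinaent_item_demo"
    start_urls = ["http://ent.sina.com.cn"]

    def parse(self, response):
        # fill one textItem from the fetched page
        item = textItem()
        title = response.xpath("//title/text()").extract()
        item["title"] = title[0] if title else ""
        item["link"] = response.url
        item["text"] = "\n".join(response.xpath("//p/text()").extract())
        return item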
4. Crawling with a Spider
Define your own crawling file under the spiders directory and add the code there. The class must inherit from Spider, must define variables such as name and start_urls, and must implement a parse method. name is the name the crawl is launched with: scrapy crawl name, where name is the one defined in the class. start_urls is the list of URLs the crawl starts from; the first pages fetched will come from this list, and subsequent URLs are extracted from the downloaded data.
There is also an optional domain restriction, the allowed_domains variable; allowed_domains = ["example.com"] means only documents under that domain are crawled.
rules defines the crawl rules; together with a link extractor it applies regular-expression filters to the URLs extracted from the whole HTML page.
The parse function handles each response, starting from the initial start_urls and the URLs crawled from them, and returns the processed data; it must return an iterable containing Request and/or Item objects.
See the Spiders chapter of the Chinese official documentation for details; it also contains quite a few examples.
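For comparison, here is a minimal sketch of what a rules-based version could look like (I did not get this working myself; the spider further below follows links by hand instead, and the class and spider names here are made up for illustration):

# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule    # scrapy.contrib.spiders on older Scrapy versions
from scrapy.linkextractors import LinkExtractor
from sinaent.items import textItem

class EntRuleSpider(CrawlSpider):
    name = "sinaent_rules"                      # hypothetical spider name
    allowed_domains = ["ent.sina.com.cn"]
    start_urls = ["http://ent.sina.com.cn"]

    # follow every link whose URL matches ent.sina.com.cn, hand each
    # downloaded page to parse_content, and keep following (follow=True)
    rules = [
        Rule(LinkExtractor(allow=(r"ent\.sina\.com\.cn",)),
             callback="parse_content", follow=True),
    ]

    def parse_content(self, response):
        # one item per paragraph of text on the page
        for text in response.xpath("//p/text()").extract():
            item = textItem()
            item["text"] = text
            item["link"] = response.url
            yield item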
As for parsing the HTML and extracting text, the official documentation and most bloggers' examples rely on XPath, which is a great tool. Scrapy's Selector (from scrapy.selector) exposes an xpath method for extracting text from the HTML; the w3school XPath tutorial is easy to follow, and the scrapy shell is a good sandbox for working out the expressions that pull out exactly the text you want before adding them to the spider.
#!/usr/bin/env python
# coding=utf-8
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.http import Request
from sinaent.items import textItem
from scrapy.contrib.spiders import CrawlSpider, Rule  # only needed for the commented-out rules below
import re

class textSpider(Spider):
    name = "sinaenttext"  # name of the spider, used by scrapy crawl
    allowed_domains = ["ent.sina.com.cn"]
    #start_urls = ["http://ent.sina.com.cn/s/m/2015-10-09/doc-ifxirmqc4955025.shtml"]
    start_urls = ["http://ent.sina.com.cn"]
    #rules = [Rule(LinkExtractor(allow=[])), "parse_content"]

    def parse(self, response):
        # what rules would do is done by hand here, since no Rule with
        # callback and follow=True is configured
        links = LinkExtractor(allow=()).extract_links(response)
        for link in links:
            if "//ent.sina.com.cn" in link.url:
                # only URLs containing "//ent.sina.com.cn" are followed for further crawling
                yield Request(url=link.url, callback=self.parse_page)

    def parse_page(self, response):
        for link in LinkExtractor(allow=()).extract_links(response):
            if "//ent.sina.com.cn" in link.url:
                yield Request(url=link.url, callback=self.parse_content)
                # note: this second request to the same URL is dropped by the
                # duplicate filter unless dont_filter=True is passed
                yield Request(url=link.url, callback=self.parse_page)

    def parse_content(self, response):
        sel = Selector(response)
        url = response.url
        pattern = re.compile(r"(\w+)")
        write_name = pattern.findall(url)[-2]  # use the second-to-last token of the URL as the file name
        texts = sel.xpath("//p/text()").extract()[:-5]  # text between <p></p>, minus the trailing "Copyright" etc. lines
        write_text = open("texts3/" + write_name, "wb")
        for i in texts:
            write_text.write(i.encode("utf-8"))  # write to file (encode, since extract() returns unicode)
        write_text.close()
        items = []
        for i in texts:
            if len(i) < 5:
                continue
            item = textItem()  # a fresh item per paragraph, otherwise every append points at the same object
            item["text"] = i
            items.append(item)
        return items
There is some useless code in there as well, which I have left in. In any case it only crawled a tiny fraction of Sina Entertainment's articles; I am not sure where the problem is, so I am recording it here first and will improve it later.
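To run the spider, the output directory has to exist first (the open() call above does not create it), and the crawl is started with the name defined in the class:

mkdir texts3
scrapy crawl sinaenttext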
5. A few settings
In the settings configuration file, Scrapy lets you control things such as the crawl rate and whether crawl progress information is printed to the console.
Reference: http://chenqx.github.io/2014/11/09/Scrapy-Tutorial-for-BBSSpider/ and the "avoiding getting banned" section of the official documentation.

# -*- coding: utf-8 -*-

# Scrapy settings for sinaent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sinaent'

CONCURRENT_REQUESTS = 200
LOG_LEVEL = "INFO"
COOKIES_ENABLED = True
RETRY_ENABLED = True

SPIDER_MODULES = ['sinaent.spiders']
NEWSPIDER_MODULE = 'sinaent.spiders'

ITEM_PIPELINES = {}

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sinaent (+http://www.yourdomain.com)'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS=32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY=3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN=16
#CONCURRENT_REQUESTS_PER_IP=16

# Disable cookies (enabled by default)
#COOKIES_ENABLED=False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED=False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sinaent.middlewares.MyCustomSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sinaent.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'sinaent.pipelines.SomePipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
# NOTE: AutoThrottle will honour the standard settings for concurrency and delay
#AUTOTHROTTLE_ENABLED=True
# The initial download delay
#AUTOTHROTTLE_START_DELAY=5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY=60
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG=False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED=True
#HTTPCACHE_EXPIRATION_SECS=0
#HTTPCACHE_DIR='httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES=[]
#HTTPCACHE_STORAGE='scrapy.extensions.httpcache.FilesystemCacheStorage'
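The references above mostly recommend the opposite of the aggressive values used here: slow the crawl down and look less like a bot. A minimal sketch of the usual anti-ban settings (the values are illustrative, not tuned for Sina):

DOWNLOAD_DELAY = 2                   # seconds between requests to the same site
RANDOMIZE_DOWNLOAD_DELAY = True      # jitter the delay between 0.5x and 1.5x
CONCURRENT_REQUESTS_PER_DOMAIN = 8   # keep per-domain concurrency modest
COOKIES_ENABLED = False              # avoid looking like one long session
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0 Safari/537.36'  # browser-like UA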
6. Results
Figure 6-1: Crawl results
Figure 6-2: Crawled articles
7. Problems encountered
Only a little over 6 MB of articles were crawled, about 2,000 of them, which is far too few; Sina Entertainment must have at least several GB of articles. After asking colleagues and thinking it over, the reason is that only the first page of each listing was crawled: the later pages are loaded through JS callbacks rather than served statically, so they were never fetched. Handling the JS still needs to be solved. I am not sure whether feeding the discovered URLs back in as new start URLs would help; a whole-site crawler like Scrapy should support this, but none of the examples I have seen cover it.
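One direction I have not tried yet is to render the JS-driven listing pages with a real browser engine and feed the rendered HTML into the same XPath extraction. A rough sketch with Selenium (assuming selenium and PhantomJS are installed; the URL here is just the front page again, not the actual paginated endpoint):

# rough sketch: render a JS-driven page, then reuse the XPath extraction
from selenium import webdriver
from scrapy.selector import Selector

driver = webdriver.PhantomJS()            # or webdriver.Firefox()
driver.get("http://ent.sina.com.cn")      # a page whose content is filled in by JS
html = driver.page_source                 # HTML after the scripts have run
driver.quit()

texts = Selector(text=html).xpath("//p/text()").extract()
print len(texts)                          # Python 2 print statement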
References:
Scrapy tutorial (Chinese): http://scrapy-chs.readthedocs.org/zh_CN/latest/intro/tutorial.html
Scrapy YouTube video: https://www.youtube.com/watch?v=758KrjCgkN8&list=PLiSJ-0KobHCKnku5ZuypaSwoTYjOJLjWL
OSChina example of scraping Douban group data with Scrapy: http://my.oschina.net/chengye/blog/124162
Scrapy spider for the 饮水思源 BBS: http://chenqx.github.io/2014/11/09/Scrapy-Tutorial-for-BBSSpider/
Scraping dynamic web pages with Scrapy: http://chenqx.github.io/2014/12/23/Spider-Advanced-for-Dynamic-Website-Crawling/
XPath tutorial on w3school: http://www.w3school.com.cn/xpath/index.asp
CSDN: crawling python/book resources: http://blog.csdn.net/pleasecallmewhy/article/details/19642329
CSDN: targeted batch collection of job postings: http://blog.csdn.net/HanTangSongMing/article/details/24454453
OSChina: crawling pages and storing them recursively: http://my.oschina.net/HappyRoad/blog/173510
OSChina Scrapy crawling example: http://my.oschina.net/lpe234/blog/304566
cnblogs example of crawling one's own blog posts: http://www.cnblogs.com/huhuuu/p/3706994.html