This post walks through a Python crawler that searches CNKI (知网) by keyword, scrapes the abstracts of the matching papers, and saves them to a database.
Our lab needs some corpus data for research, specifically paper abstracts from CNKI. The current version of the CNKI site is rather troublesome to crawl, so I use CNKI's other search interface (search.cnki.net) instead.
The results it returns are almost identical to those on the main CNKI site.
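For reference, that interface takes the query string in the q parameter and pages through results with p in steps of 15; the full crawler below builds exactly these URLs. A minimal sketch of the URL construction (SEARCH_BASE and build_search_url are names of my own choosing, and I assume the interface still accepts these parameters):

SEARCH_BASE = "http://search.cnki.net/search.aspx"

def build_search_url(keyword, page_index):
    # page_index starts at 0; the interface offsets results with p = page_index * 15
    return "%s?q=%s&rank=relevant&cluster=zyk&val=&p=%d" % (SEARCH_BASE, keyword, page_index * 15)

# first result page for one of the sample keywords used later in the post
print(build_search_url("肉+大肠杆菌", 0))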
Starting from there, I took a quick look at the page structure, and the crawling code practically writes itself (this is the most basic version and far from complete; feel free to extend it with other features).
The page structure is quite clear.
The abstract section is just as easy to locate.
I use pymysql to connect to the database, and the throughput is acceptable.
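The script below writes into a table named cnki with six columns: an auto-increment id followed by the keyword type, title, authors, abstract, and keywords (see the INSERT INTO cnki VALUES (NULL, %s, %s, %s, %s, %s) statement). The original post does not show the schema; one possible layout that matches that column order, with column names of my own choosing, would be:

import pymysql

conn = pymysql.connect(host='', user='', password='', db='', port=3306, charset='utf8')
with conn.cursor() as cur:
    # Hypothetical schema; only the column count and order are fixed by the INSERT statement used in the crawler.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS cnki (
            id INT AUTO_INCREMENT PRIMARY KEY,
            word_type VARCHAR(64),
            title VARCHAR(255),
            author VARCHAR(255),
            abstract TEXT,
            keywords VARCHAR(512)
        ) DEFAULT CHARSET=utf8
    """)
conn.commit()
conn.close()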
The code is below:
# -*- coding: utf-8 -*-
import time
import re
import random
import requests
from bs4 import BeautifulSoup
import pymysql
connection = pymysql.connect(host='',
                             user='',
                             password='',
                             db='',
                             port=3306,
                             charset='utf8')  # note: pymysql expects 'utf8', not 'utf-8'
# get a cursor
cursor = connection.cursor()
# url = 'http://epub.cnki.net/grid2008/brief/detailj.aspx?filename=RLGY201806014&dbname=CJFDLAST2018'
# these headers are required; without them the site redirects the request to another page
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Connection': 'keep-alive',
    'Host': 'www.cnki.net',
    'Referer': 'http://search.cnki.net/search.aspx?q=%E4%BD%9C%E8%80%85%E5%8D%95%E4%BD%8D%3a%E6%AD%A6%E6%B1%89%E5%A4%A7%E5%AD%A6&rank=relevant&cluster=zyk&val=CDFDTOTAL',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}
headers1 = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
}
def get_url_list(start_url):
    depth = 20
    url_list = []
    for i in range(depth):
        try:
            # the search interface pages results in steps of 15
            url = start_url + "&p=" + str(i * 15)
            search = requests.get(url.replace('\n', ''), headers=headers1)
            soup = BeautifulSoup(search.text, 'html.parser')
            for art in soup.find_all('div', class_='wz_tab'):
                print(art.find('a')['href'])
                if art.find('a')['href'] not in url_list:
                    url_list.append(art.find('a')['href'])
            print("crawled result page " + str(i) + " successfully")
            time.sleep(random.randint(1, 3))
        except:
            print("failed to crawl result page " + str(i))
    return url_list
def get_data(url_list, wordType):
    try:
        # visit each result link collected above
        for i, url in enumerate(url_list, 1):
            if not url:
                continue
            soup = None
            try:
                html = requests.get(url.replace('\n', ''), headers=headers)
                soup = BeautifulSoup(html.text, 'html.parser')
            except:
                print("failed to fetch the page")
            if soup is None:
                continue
            title = author = abstract = key = ''
            try:
                print(url)
                # title
                title = soup.find('title').get_text().split('-')[0]
                # authors
                for a in soup.find('div', class_='summary pad10').find('p').find_all('a', class_='KnowledgeNetLink'):
                    author += (a.get_text() + ' ')
                # abstract
                abstract = soup.find('span', id='ChDivSummary').get_text()
            except:
                print("failed to extract some fields")
            # keywords (some papers have none)
            try:
                for k in soup.find('span', id='ChDivKeyWord').find_all('a', class_='KnowledgeNetLink'):
                    key += (k.get_text() + ' ')
            except:
                pass
            print("url #" + str(i))
            print("[Title]: " + title)
            print("[author]: " + author)
            print("[abstract]: " + abstract)
            print("[key]: " + key)
            # run the SQL statement
            cursor.execute('INSERT INTO cnki VALUES (NULL, %s, %s, %s, %s, %s)', (wordType, title, author, abstract, key))
            # commit the transaction
            connection.commit()
            print()
        print("crawling finished")
    finally:
        print()
if __name__ == '__main__':
    try:
        # a few sample keywords; for a large run, read them from a text file instead
        for wordType in {"大肠杆菌", "菌群总落", "胭脂红", "日落黄"}:
            wordType = "肉+" + wordType
            start_url = "http://search.cnki.net/search.aspx?q=%s&rank=relevant&cluster=zyk&val=" % wordType
            url_list = get_url_list(start_url)
            print("start crawling")
            get_data(url_list, wordType)
            print("finished one keyword type")
        print("all done")
    finally:
        connection.close()
Here I only picked a few keywords as a quick experiment. If you want to crawl many keywords, you can put them in a txt file and read them in directly, which is very convenient.
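For example, a minimal sketch of that variant, assuming a file hypothetically named keywords.txt with one search keyword per line; only the keyword loop changes, the rest of the script stays as above:

if __name__ == '__main__':
    try:
        # keywords.txt is a hypothetical file with one keyword per line
        with open('keywords.txt', 'r', encoding='utf-8') as f:
            keywords = [line.strip() for line in f if line.strip()]
        for wordType in keywords:
            wordType = "肉+" + wordType
            start_url = "http://search.cnki.net/search.aspx?q=%s&rank=relevant&cluster=zyk&val=" % wordType
            url_list = get_url_list(start_url)
            get_data(url_list, wordType)
    finally:
        connection.close()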
That wraps up this post on crawling CNKI paper abstracts by keyword with Python and saving them to a database. I hope it is useful as a reference.