Nutch 1.8 + Hadoop 1.2 + Solr 4.3 Distributed Cluster Configuration


[b][color=green][size=large]Nutch is an open-source search engine implemented in Java. It provides all the tools needed to run your own search engine, including full-text search and a web crawler. Strictly speaking, that classic description no longer fits Nutch after version 1.2: since then, Nutch has focused purely on crawling, handing full-text retrieval entirely over to Lucene, Solr, and Elasticsearch. And because these projects are all close relatives, the data Nutch crawls can be turned into a full-text index very easily.
[/size][/color][/b]
[b][color=olive][size=large]Now to the point. The latest Nutch release is 2.2.1; the 2.x line uses Gora to support multiple storage backends, while 1.8, the latest of the 1.x line, supports only HDFS storage. I am using Nutch 1.8 here. Why the 1.x line? It comes down to my Hadoop environment: Nutch 2.x builds against Hadoop 2.x (although, if you don't mind the hassle, you can adjust the jar dependencies and make Nutch 2.x run on a Hadoop 1.x cluster), whereas Nutch 1.x runs on a Hadoop 1.x cluster effortlessly. The configuration used for this Nutch + Hadoop + Solr cluster test:

[table]
|No.|Component|Role|
|1|Nutch 1.8|Crawls the data; supports distributed operation|
|2|Hadoop 1.2.0|Runs the crawl in parallel via MapReduce and stores the data on HDFS; Nutch submits its jobs to the Hadoop cluster; distributed|
|3|Solr 4.3.1|Handles retrieval: searching and querying the crawled data; scales to large data volumes when distributed|
|4|IK 4.3|Tokenizes page content and titles (Chinese word segmentation) for full-text search|
|5|CentOS 6.5|The Linux system on which Nutch, Hadoop, and the other services run|
|6|Tomcat 7.0|Application server; the servlet container hosting Solr|
|7|JDK 1.7|Provides the Java runtime|
|8|Ant 1.9|Builds Nutch and friends from source|
|9|One humble software engineer|The protagonist|
[/table]
[/size][/color][/b]

[b][color=green][size=large]Now, let's begin.
1. First make sure your Ant environment works. Do everything on Linux if you can; the odds of hitting problems on Windows are much higher. Download the Nutch source, cd into the Nutch root directory, run ant, and wait for the build to finish. When it completes there is a runtime directory containing the Nutch launch scripts, for both local mode and deploy (distributed cluster) mode; see the build sketch after the screenshots below.[/size][/color][/b]


[img]http://dl2.iteye.com/upload/attachment/0097/1142/8d7bd2cd-648e-3888-b281-5a82904e0c71.jpg[/img]

[img]http://dl2.iteye.com/upload/attachment/0097/1144/cea57152-03cb-389d-9013-db5e299e887d.jpg[/img]
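A minimal sketch of the build, assuming the source tree was unpacked to /root/apache-nutch-1.8 (that path is an assumption):

[code="java"]
# Ant and the JDK must already be on the PATH
cd /root/apache-nutch-1.8
ant                     # the default target builds the runtime/ directory
ls runtime              # should show: deploy  local
ls runtime/deploy/bin   # contains the crawl and nutch launch scripts
[/code]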
[b][color=green][size=large]2. Add the following to nutch-site.xml:[/size][/color][/b]


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>http.agent.name</name>
<value>mynutch</value>
</property>


<property>
<name>http.robots.agents</name>
<value>mynutch,*</value>
<description>The agent strings we'll look for in robots.txt files,
comma-separated, in decreasing order of precedence. You should
put the value of http.agent.name as the first agent name, and keep the
default * at the end of the list. E.g.: BlurflDev,Blurfl,*
</description>
</property>

<property>
<name>plugin.folders</name>
<!-- local模式下使用下面的配置 -->
<value>./src/plugin</value>
<!-- 集群模式下,使用下面的配置 -->
<value>plugins</value>

<!-- <value>D:\nutch编译好的1.8\Nutch1.8\src\plugin</value> -->
<description>Directories where nutch plugins are located. Each
element may be a relative or absolute path. If absolute, it is used
as is. If relative, it is searched for on the classpath.</description>
</property>

</configuration>
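Note that in deploy mode the job jar bundles the conf directory, so after editing conf/nutch-site.xml you have to rebuild before the change reaches the cluster. A sketch, using the same assumed source path as above:

[code="java"]
cd /root/apache-nutch-1.8
ant   # repackages runtime/deploy/apache-nutch-1.8.job with the updated conf
[/code]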

[b][color=green][size=large]3. On the Hadoop cluster, create a urls directory and a mydir directory: the former stores the seed-URL file, the latter receives the crawled data.
hadoop fs -mkdir urls                    # create the directory
hadoop fs -put <local path> <HDFS path>  # upload the seed file to HDFS (local source first, HDFS destination second)
hadoop fs -ls /                          # list the contents of a path
[/size][/color][/b]
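A worked example, with a placeholder seed URL and file name:

[code="java"]
echo "http://www.apache.org/" > seed.txt   # one seed URL per line
hadoop fs -mkdir urls
hadoop fs -put seed.txt urls/              # local source first, HDFS destination second
hadoop fs -ls urls
[/code]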
[img]http://dl2.iteye.com/upload/attachment/0097/1149/1e610d34-f427-381e-9c62-8bf3a515d07a.jpg[/img]
[b][color=green][size=large]4. Set up the Hadoop cluster, and in particular the HADOOP_HOME environment variable. This is important: at run time, Nutch locates Hadoop through this variable in order to submit its jobs.
[/size][/color][/b]

# JDK, Ant, and Hadoop locations; adjust the paths to your installation
export JAVA_HOME=/root/jdk1.7
export ANT_HOME=/root/apache-ant-1.9.2
export HADOOP_HOME=/root/hadoop1.2
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# put all three bin directories on the PATH
export PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$HADOOP_HOME/bin:$PATH

[b][color=olive][size=large]Once this is in place, verify it with the which hadoop command:[/size][/color][/b]
[code="java"]# which hadoop
/root/hadoop1.2/bin/hadoop
# [/code]
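hadoop version is another quick sanity check that the intended installation is the one on the PATH:

[code="java"]
# should report a Hadoop 1.2.x build
hadoop version
[/code]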
[b][color=olive]5. To set up the Solr service, copy the schema.xml from Nutch's conf directory into Solr, replacing Solr's own schema.xml, and add the IK analyzer. The resulting file is shown below (a deployment sketch follows it):[/color][/b]

<?xml version="1.0" encoding="UTF-8" ?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or
more contributor license agreements. See the NOTICE file
distributed with this work for additional information regarding
copyright ownership. The ASF licenses this file to You under the
Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions
and limitations under the License.
-->
<!--
Description: This document contains Solr 3.1 schema definition to
be used with Solr integration currently build into Nutch. See
https://issues.apache.org/jira/browse/NUTCH-442
https://issues.apache.org/jira/browse/NUTCH-699
https://issues.apache.org/jira/browse/NUTCH-994
https://issues.apache.org/jira/browse/NUTCH-997
https://issues.apache.org/jira/browse/NUTCH-1058
https://issues.apache.org/jira/browse/NUTCH-1232
and
http://svn.apache.org/viewvc/lucene/dev/branches/branch_3x/solr/
example/solr/conf/schema.xml?view=markup
for more info.
-->
<schema name="nutch" version="1.5">

<types>
<fieldType name="string" class="solr.StrField" sortMissingLast="true"
omitNorms="true"/>
<fieldType name="long" class="solr.TrieLongField" precisionStep="0"
omitNorms="true" positionIncrementGap="0"/>
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
omitNorms="true" positionIncrementGap="0"/>
<fieldType name="date" class="solr.TrieDateField" precisionStep="0"
omitNorms="true" positionIncrementGap="0"/>


<!-- IK analyzer configuration -->

<fieldType name="ik" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false" />

<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- <filter class="solr.LowerCaseFilterFactory"/> -->
</analyzer>
<analyzer type="query">
<tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false" />
<!--<filter class="org.wltea.analyzer.lucene.IKSynonymFilterFactory" autoupdate="false" synonyms="synonyms.txt" flushtime="20" /> -->
<!--<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> -->
<!--<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> -->
<!-- <filter class="solr.LowerCaseFilterFactory"/> -->
</analyzer>
</fieldType>





<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>


<fieldType name="url" class="solr.TextField"
positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.WordDelimiterFilterFactory"
generateWordParts="1" generateNumberParts="1"/>
</analyzer>
</fieldType>
</types>
<fields>

<field name="_version_" type="long" indexed="true" stored="true"/>

<field name="id" type="string" stored="true" indexed="true"/>

<!-- core fields -->
<field name="segment" type="string" stored="true" indexed="false"/>
<field name="text" type="string" stored="true" indexed="false"/>
<field name="digest" type="string" stored="true" indexed="false"/>
<field name="boost" type="float" stored="true" indexed="false"/>

<!-- fields for index-basic plugin -->
<field name="host" type="string" stored="false" indexed="true"/>
<field name="url" type="url" stored="true" indexed="true"
required="true"/>
<field name="content" type="ik" stored="true" indexed="true"/>
<field name="title" type="ik" stored="true" indexed="true"/>
<field name="cache" type="string" stored="true" indexed="false"/>
<field name="tstamp" type="date" stored="true" indexed="false"/>

<!-- fields for index-anchor plugin -->
<field name="anchor" type="string" stored="true" indexed="true"
multiValued="true"/>

<!-- fields for index-more plugin -->
<field name="type" type="string" stored="true" indexed="true"
multiValued="true"/>
<field name="contentLength" type="long" stored="true"
indexed="false"/>
<field name="lastModified" type="date" stored="true"
indexed="false"/>
<field name="date" type="date" stored="true" indexed="true"/>

<!-- fields for languageidentifier plugin -->
<field name="lang" type="string" stored="true" indexed="true"/>

<!-- fields for subcollection plugin -->
<field name="subcollection" type="string" stored="true"
indexed="true" multiValued="true"/>

<!-- fields for feed plugin (tag is also used by microformats-reltag)-->
<field name="author" type="string" stored="true" indexed="true"/>
<field name="tag" type="string" stored="true" indexed="true" multiValued="true"/>
<field name="feed" type="string" stored="true" indexed="true"/>
<field name="publishedDate" type="date" stored="true"
indexed="true"/>
<field name="updatedDate" type="date" stored="true"
indexed="true"/>

<!-- fields for creativecommons plugin -->
<field name="cc" type="string" stored="true" indexed="true"
multiValued="true"/>

<!-- fields for tld plugin -->
<field name="tld" type="string" stored="false" indexed="false"/>
</fields>
<uniqueKey>id</uniqueKey>
<defaultSearchField>content</defaultSearchField>
<solrQueryParser defaultOperator="OR"/>
</schema>
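A sketch of deploying the schema and the IK analyzer jar, assuming the Solr core lives at /root/solr/collection1 and Solr runs inside the Tomcat at /root/tomcat7 (all of these paths, and the jar name, are assumptions):

[code="java"]
# replace the core's schema with the IK-enabled Nutch schema
cp /root/apache-nutch-1.8/conf/schema.xml /root/solr/collection1/conf/schema.xml
# make the IK analyzer classes visible to Solr
cp IKAnalyzer2012*.jar /root/tomcat7/webapps/solr/WEB-INF/lib/
# restart Tomcat so Solr reloads the schema
/root/tomcat7/bin/shutdown.sh
/root/tomcat7/bin/startup.sh
[/code]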

[color=green][size=large]6. With everything configured, cd into Nutch's /root/apache-nutch-1.8/runtime/deploy/bin directory and run the following command to launch the crawl job on the cluster:

./crawl urls mydir http://192.168.211.36:9001/solr/ 2
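The arguments, briefly (this usage comes from the bin/crawl script shipped with Nutch 1.8):

[code="java"]
# crawl <seedDir> <crawlDir> <solrURL> <numberOfRounds>
#   urls    : HDFS directory holding the seed file
#   mydir   : HDFS directory that receives the crawldb, linkdb and segments
#   solrURL : the Solr instance that receives the index
#   2       : number of generate/fetch/parse/update rounds
./crawl urls mydir http://192.168.211.36:9001/solr/ 2
[/code]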
A screenshot of the MapReduce jobs during the crawl:
[/size][/color]
[img]http://dl2.iteye.com/upload/attachment/0097/1154/d437f874-f0f5-3b26-b9e4-a960cb41a05e.jpg[/img]
[b][color=olive][size=large]Once the crawl finishes, we can view the crawled content in Solr; a screenshot:[/size][/color][/b]

[img]http://dl2.iteye.com/upload/attachment/0097/1162/99ea1193-b72c-387f-b83a-e5424f30aeba.jpg[/img]
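The index can also be queried from the command line; a sketch against this test cluster's Solr URL, using standard Solr query parameters:

[code="java"]
# search the crawled content for a keyword, returning title and url as JSON
curl "http://192.168.211.36:9001/solr/select?q=content:hadoop&fl=title,url&wt=json"
[/code]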
[b][color=olive][size=large]And with that, a simple crawl-and-search system is up and running, quite painlessly, built entirely from open-source projects in the Lucene family.[/size][/color][/b]

[b][color=olive][size=large]Summary: a few typical errors came up during configuration; they are recorded below.[/size][/color][/b]
java.lang.Exception: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
... 11 more
Caused by: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
... 16 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
... 19 more
Caused by: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
at org.apache.nutch.net.URLNormalizers.<init>(URLNormalizers.java:123)
at org.apache.nutch.crawl.Injector$InjectMapper.configure(Injector.java:74)
... 24 more
2013-09-05 20:40:49,329 INFO mapred.JobClient (JobClient.java:monitorAndPrintJob(1393)) - map 0% reduce 0%
2013-09-05 20:40:49,332 INFO mapred.JobClient (JobClient.java:monitorAndPrintJob(1448)) - Job complete: job_local1315110785_0001
2013-09-05 20:40:49,332 INFO mapred.JobClient (Counters.java:log(585)) - Counters: 0
2013-09-05 20:40:49,333 INFO mapred.JobClient (JobClient.java:runJob(1356)) - Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.crawl.Injector.inject(Injector.java:281)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:132)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
==========================================================================
Solution: add the following property to nutch-site.xml.
<property>
  <name>plugin.folders</name>
  <value>./src/plugin</value>
  <description>Directories where nutch plugins are located. Each
  element may be a relative or absolute path. If absolute, it is used
  as is. If relative, it is searched for on the classpath.</description>
</property>
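As noted in step 2, the right value depends on the run mode: ./src/plugin when running locally out of the source tree (as in the failed local run above), plugins when submitting the job jar to the cluster, because the plugin directories are packed inside that jar. A quick way to check the jar (a sketch; the job file name matches the default build output):

[code="java"]
# the deploy job jar should list the bundled plugin directories
unzip -l /root/apache-nutch-1.8/runtime/deploy/apache-nutch-1.8.job | grep plugins | head
[/code]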



[b][color=green]While running the crawl shell command, I found that invoking it as
bin/crawl urls mydir http://192.168.211.36:9001/solr/ 2 sometimes fails to access certain directories on HDFS correctly, so I recommend cd-ing into the bin directory and running it directly:
./crawl urls mydir http://192.168.211.36:9001/solr/ 2
[/color][/b]
