This article introduces How To Configure Elasticsearch on Hadoop with HDP, in the hope that it offers a useful reference for developers solving this kind of problem. Interested developers, read on and follow along!
Original article: http://www.tuicool.com/articles/Jryyme
Elasticsearch’s engine integrates with Hortonworks Data Platform 2.0 and YARN to provide real-time search and access to information in Hadoop.
See it in action: register for the Hortonworks and Elasticsearch webinar on March 5th, 2014 at 10am PST / 1pm EST to see the demo and an outline of best practices for integrating Elasticsearch and HDP 2.0 to extract maximum insight from your data.
Try it yourself: Get started with this tutorial using Elasticsearch and Hortonworks Data Platform, or Hortonworks Sandbox to access server logs in Kibana using Apache Flume for ingestion.
Architecture
The following diagram depicts the proposed architecture for indexing logs into Elasticsearch in near real-time while also saving them to Hadoop for long-term batch analytics.
Components
Elasticsearch
Elasticsearch is a search engine that can index new documents in near real-time and make them immediately available for querying. Elasticsearch is based on Apache Lucene and allows for setting up clusters of nodes that store any number of indices in a distributed, fault-tolerant way. If a node disappears, the cluster will rebalance the (shards of) indices over the remaining nodes. You can configure how many shards make up each index and how many replicas of these shards there should be. If a master shard goes offline, one of the replicas is promoted to master and used to repopulate another node.
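For example, shard and replica counts can be set per index at creation time. A minimal sketch, assuming a local node listening on the default HTTP port 9200 and an illustrative index name:
curl -XPUT 'http://localhost:9200/logs/' -d '{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  }
}'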
Flume
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of streaming data into storage destinations such as the Hadoop Distributed File System. It has a simple and flexible architecture based on streaming data flows, and is robust and fault-tolerant with tunable reliability mechanisms for failover and recovery.
Kibana
Kibana is an open source (Apache-licensed), browser-based analytics and search interface for Logstash and other timestamped data sets stored in Elasticsearch. Kibana strives to be easy to get started with, while also being flexible and powerful.
System Requirements
- Hadoop: Hortonworks Data Platform 2.0(HDP 2.0) or HDP Sandbox for HDP 2.0
- OS: 64-bit RHEL (Red Hat Enterprise Linux) 6, CentOS, or Oracle Linux 6
- Software: yum, rpm, unzip, tar, wget, java
- JDK: Oracle JDK 1.7 (64-bit), Oracle JDK 1.6 update 31, or OpenJDK 7
Java Installation
Note: Define the JAVA_HOME environment variable and add the Java Virtual Machine and the Java binaries to your PATH environment variable.
Execute the following commands to verify that Java is on the PATH:
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
java -version
Flume Installation
Execute the following command to install the Flume binaries and agent scripts:
yum install flume-agent flume
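Assuming the HDP packaging places the flume-ng launcher on the PATH, the installation can be verified with:
flume-ng version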
Elasticsearch Installation
The latest Elasticsearch release can be downloaded from http://www.elasticsearch.org/download/. The RPM package used in this tutorial is available at https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.noarch.rpm
To install Elasticsearch on data nodes:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.noarch.rpm
rpm -ivh elasticsearch-0.90.7.noarch.rpm
Set up and configure Elasticsearch
Update the following properties in /etc/elasticsearch/elasticsearch.yml
- Set the cluster name
cluster.name: "logsearch"
- Set node name
node.name: "node1"
- By default, every node is eligible to be master and stores data. This behavior can be adjusted with:
node.master: true
node.data: true
- The number of shards can be adjusted with the following property:
index.number_of_shards: 5
- The number of replicas (additional copies) can be set with:
index.number_of_replicas: 1
- Adjust the data storage paths with:
path.data: /data1,/data2,/data3,/data4
- Set how many other master-eligible nodes a node must see for the cluster to be considered operational. This property should be set based on the number of nodes in the cluster:
discovery.zen.minimum_master_nodes: 1
- Set the time to wait for ping responses from other nodes during discovery. The value needs to be higher for slow or congested networks:
discovery.zen.ping.timeout: 3s
- Disable multicast discovery, but only if multicast is not supported on the network:
discovery.zen.ping.multicast.enabled: false
Note: If multicast is disabled, configure an initial list of master nodes in the cluster:
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
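Taken together, a minimal /etc/elasticsearch/elasticsearch.yml for this setup might look like the following sketch (node names, hosts, and data paths are placeholders):
cluster.name: "logsearch"
node.name: "node1"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /data1,/data2,/data3,/data4
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 3s
# Only when multicast is unavailable on the network:
# discovery.zen.ping.multicast.enabled: false
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]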
Logging properties can be adjusted in /etc/elasticsearch/logging.yml. The default log location is /var/log/elasticsearch.
Starting and Stopping Elasticsearch
- To start Elasticsearch
/etc/init.d/elasticsearch start
- To stop Elasticsearch
/etc/init.d/elasticsearch stop
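Once started, the node can be checked over the HTTP API, assuming the default port 9200:
curl 'http://localhost:9200/_cluster/health?pretty'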
Kibana Installation
Download the Kibana binaries from the following URL: https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0milestone4.tar.gz
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0milestone4.tar.gz
Extract the archive with:
tar -zxvf kibana-3.0.0milestone4.tar.gz
Set up and configure Kibana
- Open the config.js file under the extracted directory.
- Set the elasticsearch parameter to the fully qualified hostname or IP of your Elasticsearch server, e.g. elasticsearch: "http://<elasticsearch_host>:9200"
- Open index.html in your browser to access the Kibana UI.
- Update the logstash index pattern to the Flume-supported index pattern: edit app/dashboards/logstash.json and replace all occurrences of [logstash-]YYYY.MM.DD with [logstash-]YYYY-MM-DD (a one-liner for this is shown below).
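A sketch of that replacement, assuming GNU sed and that the command is run from the extracted Kibana directory:
sed -i 's/\[logstash-\]YYYY\.MM\.DD/[logstash-]YYYY-MM-DD/g' app/dashboards/logstash.json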
Set up and configure Flume
For demonstration purposes, let's set up and configure a Flume agent on the host where the log file needs to be consumed, using the following Flume configuration.
Create the plugins.d directory and copy the Elasticsearch dependencies:
mkdir /usr/lib/flume/plugins.d
cp $elasticsearch_home/lib/elasticsearch-0.90*.jar /usr/lib/flume/plugins.d
cp $elasticsearch_home/lib/lucene-core-*.jar /usr/lib/flume/plugins.d
Update the Flume configuration to consume a local file and index it into Elasticsearch in logstash format. Note: in real-world use cases, the Flume Log4j Appender, Syslog TCP Source, Flume Client SDK, or Spool Directory Source are preferred over tailing logs.
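A minimal sketch of one possible agent definition (e.g. in /etc/flume/conf/flume.conf), assuming Flume's bundled ElasticSearchSink, a local Elasticsearch node, and the logsearch cluster name configured earlier; the agent and component names are placeholders:
# Hypothetical agent "a1": tail a local file and index events into Elasticsearch
a1.sources = tail_src
a1.channels = mem_ch
a1.sinks = es_sink

# Exec source tails the sample log file
a1.sources.tail_src.type = exec
a1.sources.tail_src.command = tail -F /tmp/es_log.log
a1.sources.tail_src.channels = mem_ch

# In-memory channel buffers events between source and sink
a1.channels.mem_ch.type = memory
a1.channels.mem_ch.capacity = 10000
a1.channels.mem_ch.transactionCapacity = 1000

# ElasticSearchSink writes logstash-formatted documents into [logstash-]YYYY-MM-DD indices
a1.sinks.es_sink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
a1.sinks.es_sink.channel = mem_ch
a1.sinks.es_sink.hostNames = localhost:9300
a1.sinks.es_sink.clusterName = logsearch
a1.sinks.es_sink.indexName = logstash
a1.sinks.es_sink.indexType = logs
a1.sinks.es_sink.batchSize = 1000
a1.sinks.es_sink.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer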
Prepare sample data for a simple test
Create a file /tmp/es_log.log and populate it with a few lines of sample data.
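Hypothetical entries; any timestamped log lines will do:
echo "2014-02-20 12:00:00 INFO  sample event one" >> /tmp/es_log.log
echo "2014-02-20 12:00:05 ERROR sample event two" >> /tmp/es_log.log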
Restart Flume
/etc/init.d/flume-agent restart
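After the restart, indexed events can be spot-checked against the local Elasticsearch node (assuming the default HTTP port):
curl 'http://localhost:9200/_search?q=*:*&pretty'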
Searching and Dashboarding with Kibana
Open $KIBANA_HOME/index.html in a browser. By default, the welcome page is shown.
Click on “Logstash Dashboard” and select an appropriate time range to view the charts based on the timestamped fields.
Various charts are available on the search fields, e.g. pie, bar, and table charts.
Content can be searched with custom filters, and graphs can be plotted based on the search results.
Batch Indexing using MapReduce/Hive/Pig
Elasticsearch’s real-time search and analytics capabilities are natively integrated with Hadoop, with support for MapReduce, Cascading, Hive, and Pig.
| Component | Implementation | Notes |
| --- | --- | --- |
| MR2/YARN | ESInputFormat / ESOutputFormat | MapReduce input and output formats provided by the library |
| Hive | org.elasticsearch.hadoop.hive.ESStorageHandler | Hive SerDe implementation |
| Pig | org.elasticsearch.hadoop.pig.ESStorage | Pig storage handler |
Detailed documentation and examples for the Elasticsearch Hadoop integration can be found at https://github.com/elasticsearch/elasticsearch-hadoop
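As an illustration of the Hive integration listed above, here is a minimal sketch; the jar path, table columns, index name, and source table are assumptions:
-- Make the connector available to Hive (jar path is an assumption)
ADD JAR /usr/lib/hive/lib/elasticsearch-hadoop.jar;

-- External table backed by an Elasticsearch index; es.resource is index/type
CREATE EXTERNAL TABLE es_logs (
    ts       STRING,
    loglevel STRING,
    msg      STRING)
STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
TBLPROPERTIES('es.resource' = 'logstash-2014-02-20/logs');

-- Writing to the table indexes the rows into Elasticsearch
INSERT OVERWRITE TABLE es_logs
SELECT ts, loglevel, msg FROM raw_logs;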
Thoughts on Best Practices
- To avoid split brain, discovery.zen.minimum_master_nodes should be set to something like N/2 + 1, where N is the number of master-eligible nodes.
- Set action.disable_delete_all_indices to prevent accidental deletion of all indices.
- Set gateway.recover_after_nodes to the number of nodes that must be up before the recovery process starts replicating data around the cluster.
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the memory allocated to the Elasticsearch node; by default it is 1g.
- Use Java 7 if possible for better performance with Elasticsearch.
- Set index.fielddata.cache: soft to avoid OutOfMemory errors.
- Use higher batch sizes in the Flume sink for higher throughput, e.g. 1000.
- Increase the open file limits for Elasticsearch, as sketched below.
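One way to raise the open file limit on RHEL/CentOS; a sketch assuming Elasticsearch runs as the elasticsearch user, with an illustrative value:
# /etc/security/limits.conf
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535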
This concludes the article on How To Configure Elasticsearch on Hadoop with HDP. We hope the articles we recommend are helpful to our fellow developers!