This article walks through building a distributed Hadoop environment from hadoop-3.0.0.tar.gz; I hope it offers a useful reference for developers solving the same problem. Follow along!
1. Use three CentOS 7 virtual machines
Disable the firewall on every host:
# systemctl stop firewalld
# systemctl disable firewalld
Hostnames and IP addresses:
node4111:172.16.18.59
node4113:172.16.18.62
node4114:172.16.18.63
Edit the hosts file on every host:
vim /etc/hosts
172.16.18.59 node4111
172.16.18.62 node4113
172.16.18.63 node4114
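Optionally verify that name resolution works from each node; a minimal check, using the hostnames configured above:
for h in node4111 node4113 node4114; do ping -c 1 $h; done
Each ping should resolve to the matching 172.16.18.x address.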
1.1 Hadoop role distribution
node4111: NameNode/DataNode, ResourceManager/NodeManager
node4113: DataNode, NodeManager
node4114: DataNode, NodeManager
2. Set up passwordless SSH
Run the following on every host, pressing Enter at each prompt; a hidden .ssh directory will be generated under /root/:
ssh-keygen -t rsa
Then, on node4111, copy the public key to every node:
ssh-copy-id -i /root/.ssh/id_rsa.pub node4111
ssh-copy-id -i /root/.ssh/id_rsa.pub node4113
ssh-copy-id -i /root/.ssh/id_rsa.pub node4114
Check the authorized keys file on each host:
cat /root/.ssh/authorized_keys
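To confirm passwordless login from node4111 to all three nodes, a quick check (run on node4111):
for h in node4111 node4113 node4114; do ssh $h hostname; done
Each ssh invocation should print the remote hostname without prompting for a password.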
3. Download the JDK and Hadoop
JDK download page
Hadoop download page
The software versions installed here are:
hadoop-3.0.0.tar.gz
jdk-10_linux-x64_bin.tar.gz
Create a directory named had under /root/ and extract both archives into it:
mkdir /root/had
tar -zxvf hadoop-3.0.0.tar.gz -C had
tar -zxvf jdk-10_linux-x64_bin.tar.gz -C had
Configure environment variables in /root/.bash_profile:
export JAVA_HOME=/root/had/jdk-10
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/root/had/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HDFS_HOME=/root/had/hadoop-3.0.0
export HADOOP_CONF_DIR=/root/had/hadoop-3.0.0/etc/hadoop
source .bash_profile
Test:
# java -version
The command should print the installed JDK version.
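Since the Hadoop bin directory is now on the PATH as well, a second sanity check worth running (assuming the exports above were sourced):
# hadoop version
The first line of output should read Hadoop 3.0.0.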
4. Hadoop configuration and distribution
All of the following files are edited under this path:
/root/had/hadoop-3.0.0/etc/hadoop
1) vim hadoop-env.sh
Set:
export JAVA_HOME=/root/had/jdk-10
2) vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node4111:9000</value>
  </property>
</configuration>
3) vim hdfs-site.xml
Create a tmp directory under /root/had, then configure:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/had/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/had/tmp/dfs/data</value>
  </property>
</configuration>
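Before formatting, the parent storage directory must exist on every node; a minimal preparation step, with paths matching the configuration above:
mkdir -p /root/had/tmp
The dfs/name and dfs/data subdirectories are created automatically when the NameNode is formatted and the DataNodes first start.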
4) vim yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node4111</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>49152</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>49152</value>
  </property>
</configuration>
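The two memory values above allocate 49152 MB (48 GB) per NodeManager, which assumes generously sized VMs; check each node's physical memory first, for example with:
free -m
If your VMs have less RAM, lower both values; there is no point setting yarn.scheduler.maximum-allocation-mb higher than what a single NodeManager can offer.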
5) vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
6) vim workers (in Hadoop 3 this file replaces the old slaves file)
node4111
node4113
node4114
5. Distribute the installation to the other two nodes
scp -r /root/had root@node4113:/root/
scp -r /root/had root@node4114:/root/
scp /root/.bash_profile root@node4113:/root/
scp /root/.bash_profile root@node4114:/root/
Then run on both nodes:
source .bash_profile
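A quick check that the distribution worked, run from node4111 (non-interactive ssh does not read .bash_profile, hence the explicit source):
ssh node4113 'source /root/.bash_profile && hadoop version'
ssh node4114 'source /root/.bash_profile && hadoop version'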
6. Format the HDFS NameNode
Run this only on node4111. Make sure the /root/had/tmp directory is empty, then execute from /root/had/hadoop-3.0.0/bin:
./hdfs namenode -format
The output should include:
Storage directory /root/had/tmp/dfs/name has been successfully formatted
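One way to confirm the format succeeded is to inspect the metadata directory configured in hdfs-site.xml:
ls /root/had/tmp/dfs/name/current
It should contain a VERSION file, which records the newly generated clusterID.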
7. Start Hadoop
Because the cluster runs as root, edit hadoop-env.sh under /root/had/hadoop-3.0.0/etc/hadoop and add the daemon user variables:
vim hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Then start everything from:
/root/had/hadoop-3.0.0/sbin
./start-all.sh
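Note that start-all.sh is deprecated in Hadoop 3 in favor of starting the layers separately; an equivalent sequence from the same sbin directory:
./start-dfs.sh
./start-yarn.sh
Stopping mirrors this with stop-yarn.sh and stop-dfs.sh (or stop-all.sh).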
8. Verify
Run jps on each node to check which daemons are running:
# jps
8065 DataNode
7908 NameNode
17101 Jps
16159 SecondaryNameNode
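On node4111 you should also see ResourceManager and NodeManager once YARN is up. Two cluster-wide checks (run on any node with the environment sourced):
hdfs dfsadmin -report
yarn node -list
The first should report three live DataNodes; the second should list three running NodeManagers.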
9. Hadoop web UI
Browse the NameNode's web interface; by default it listens on port 9870 on the NameNode host (node4111):
http://172.16.18.59:9870
# pwd
/root/had/hadoop-3.0.0/bin
# ./hdfs dfs -mkdir /user
# ./hdfs dfs -mkdir /input
# ./hdfs dfs -put /root/had/hadoop-3.0.0/etc/hadoop/*.xml /input
# ./hdfs dfs -ls /input
View the uploaded files with:
hdfs dfs -cat
hdfs dfs -text
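With the XML files in /input, you can exercise the full stack by running the bundled wordcount example; a sketch, where the jar path matches the 3.0.0 layout and /output is an arbitrary directory that must not yet exist:
cd /root/had/hadoop-3.0.0
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar wordcount /input /output
bin/hdfs dfs -cat /output/part-r-00000
The job counts words across the uploaded XML files and writes the results to /output.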
That concludes this article on building a distributed Hadoop environment from hadoop-3.0.0.tar.gz; I hope it proves helpful!