This article walks through setting up HiDataPlus 3.3.2-005 on an x86 platform, with some personal notes from the process. Hopefully it is a useful reference for anyone building the same stack.
HDP Cluster Setup
Prerequisite packages
yum -y install createrepo
yum install -y lrzsz
yum install -y wget
yum install -y vim
Set the hostname on each machine in the cluster
hostnamectl set-hostname XXX
Here XXX is the hostname to assign to the current machine. Hostnames must be unique within the cluster - never reuse one!
Install the base environment (JDK)
rpm -qa | grep java
rpm -e --nodeps <old java package>
mkdir /opt/download /opt/software
echo 'export JAVA_HOME=/opt/software/jdk1.8.0_311' >> /etc/profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile
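The JDK itself still needs to be uploaded and unpacked into /opt/software before these variables point anywhere useful. A minimal sketch, assuming the Oracle JDK 8u311 tarball is named jdk-8u311-linux-x64.tar.gz and was uploaded to /opt/download (the tarball name is an assumption; adjust it to your actual file):
# Unpack the JDK to /opt/software (tarball name assumed)
cd /opt/download
tar -zxvf jdk-8u311-linux-x64.tar.gz -C /opt/software
# Reload the profile and verify
source /etc/profile
java -version   # should report 1.8.0_311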
Configure hostname mappings between the hosts - all 3 machines
echo '192.168.3.126 hdp3.node1' >> /etc/hosts
echo '192.168.3.127 hdp3.node2' >> /etc/hosts
echo '192.168.3.128 hdp3.node3' >> /etc/hosts
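The same three mappings must exist in /etc/hosts on every node. A quick sanity check from each machine (hostnames as defined above):
ping -c 1 hdp3.node1
ping -c 1 hdp3.node2
ping -c 1 hdp3.node3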
Disable the firewall and SELinux - all 3 machines
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
Reboot all 3 machines and verify that the change took effect:
# sestatus -v
SELinux status: disabled
Configure passwordless SSH between the nodes - all 3 machines
ssh-keygen -t rsa
ssh-copy-id hdp3.node1
ssh-copy-id hdp3.node2
ssh-copy-id hdp3.node3
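ssh-keygen and the three ssh-copy-id commands should be run on each of the three nodes so that every node can reach every other node without a password. A quick verification, assuming the keys were copied as above:
# Should print the remote hostname without asking for a password
ssh hdp3.node2 hostname
ssh hdp3.node3 hostname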
Configure NTP time synchronization - all 3 machines
Remove the preinstalled chrony
yum -y remove chrony
Install the NTP service on all nodes
yum -y install ntp
systemctl restart ntpd
systemctl enable ntpd.service
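The commands above only install and enable ntpd. If you want hdp3.node2 and hdp3.node3 to sync against the master node instead of the default public servers (an optional layout, not part of the steps above), something like the following on the two worker nodes would do it:
# On hdp3.node2 / hdp3.node3: point ntpd at the master node (optional)
sed -i 's/^server /#server /' /etc/ntp.conf
echo "server hdp3.node1 iburst" >> /etc/ntp.conf
systemctl restart ntpd
ntpq -p   # hdp3.node1 should appear in the peer list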
Disable RPM GPG checks
sed -i 's/gpgcheck=1/gpgcheck=0/' /etc/yum.conf
Install the HTTP service - only on the machine where the installation packages are extracted
yum -y install httpd
systemctl start httpd
systemctl enable httpd.service
Install the Ambari & HDP repositories - master node
cd /opt/download/HDP3.3.2.0-005/
mkdir /var/www/html/ambari
mkdir /var/www/html/HDP
mkdir /var/www/html/HDP-UTILS
mkdir /var/www/html/HDP-GPL
tar -zxvf ambari-2.7.6.0-25-redhat7-x86_64.tar.gz -C /var/www/html/ambari
tar -zxvf HDP-3.3.2.0-005-redhat789-x86_64-2.tar.gz -C /var/www/html/HDP
tar -zxvf HDP-UTILS-1.1.0.22-centos7_8-x86_64.tar.gz -C /var/www/html/HDP-UTILS/
tar -zxvf HDP-GPL-3.3.2.0-005-redhat789-x86_64.tar.gz -C /var/www/html/HDP-GPL/
cd /var/www/html/
chown -R root:root HDP
chown -R root:root HDP-GPL
chown -R root:root HDP-UTILS
chmod -R 755 HDP
chmod -R 755 HDP-GPL
chmod -R 755 HDP-UTILS
createrepo /var/www/html/ambari/2.7.6.0-25/
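Once createrepo has run and httpd is up, it is worth confirming that the repositories are actually reachable over HTTP (hdp3.node1 is the machine running httpd in this setup):
# Each request should print 200
curl -s -o /dev/null -w "%{http_code}\n" http://hdp3.node1/ambari/2.7.6.0-25/
curl -s -o /dev/null -w "%{http_code}\n" http://hdp3.node1/HDP/
curl -s -o /dev/null -w "%{http_code}\n" http://hdp3.node1/HDP-GPL/
curl -s -o /dev/null -w "%{http_code}\n" http://hdp3.node1/HDP-UTILS/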
Install MariaDB - master node
rpm -qa |grep -i mysql
rpm -qa |grep -i mariadb
rpm -e --nodeps <old package>
yum install -y mariadb-server
systemctl enable mariadb
systemctl start mariadb
# Initialize MariaDB
/usr/bin/mysql_secure_installation
Follow the prompts below to initialize MariaDB:
[...]
Enter current password for root (enter for none):
OK, successfully used password, moving on...
[...]
Set root password? [Y/n] Y
New password:123456
Re-enter new password:123456
[...]
Remove anonymous users? [Y/n] Y
[...]
Disallow root login remotely? [Y/n] N
[...]
Remove test database and access to it [Y/n] Y
[...]
Reload privilege tables now? [Y/n] Y
[...]
All done! If you've completed all of the above steps, your MariaDB installation should now be secure.
Thanks for using MariaDB!
Once initialization is done, prepare the MariaDB/MySQL connector
mkdir /usr/share/java/
# Upload the MySQL connector JAR
cp mysql-connector-java-5.1.40-bin.jar /usr/share/java/mysql-connector-java.jar
# Log in to MariaDB (mysql -uroot -p123456) and grant remote access for root:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
FLUSH PRIVILEGES;
# If you need Ranger, add the following line to /etc/my.cnf:
echo "log_bin_trust_function_creators = 1" >> /etc/my.cnf
# Restart and log in
systemctl restart mariadb
mysql -uroot -p123456
Create the local yum repository - all 3 machines
Note that the host in the baseurl must be the machine on which the HTTP service was installed. The path is obtained by replacing /var/www/html/ with http://hostname/; the rest of the path stays the same.
touch /etc/yum.repos.d/ambari.repo
echo "[Ambari]" >> /etc/yum.repos.d/ambari.repo
echo "name=ambari" >> /etc/yum.repos.d/ambari.repo
echo "baseurl=http://hdp3.node1/ambari/2.7.6.0-25/" >> /etc/yum.repos.d/ambari.repo
# Rebuild the yum cache
yum clean all
yum makecache
yum repolist
Install and configure ambari-server - master node
mkdir -p /var/lib/ambari-server/resources/
yum install -y ambari-server --nogpgcheck
cp /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/
# --------------------------------- Add the following property
# vim /etc/ambari-server/conf/ambari.properties
echo "server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar" >> /etc/ambari-server/conf/ambari.properties
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Initialize ambari-server - master node
Log in to MariaDB and create the databases and accounts that Ambari (and the other services) will need.
mysql -uroot -p123456
CREATE DATABASE ambari;
use ambari;
set global validate_password_policy=0;
set global validate_password_length=1;
CREATE USER 'ambari'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hdp3.node1';
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
show tables;
use mysql;
select host,user from user where user='ambari';
CREATE DATABASE hive;
use hive;
CREATE USER 'hive'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp3.node1';
CREATE DATABASE oozie;
use oozie;
CREATE USER 'oozie'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'hdp3.node1';
CREATE DATABASE hue;
use hue;
CREATE USER 'hue'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hue'@'%';
CREATE USER 'hue'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hue'@'localhost';
CREATE USER 'hue'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hue'@'hdp3.node1';
CREATE DATABASE dolphinscheduler;
use dolphinscheduler;
CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'dolphinscheduler'@'%';
CREATE USER 'dolphinscheduler'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'dolphinscheduler'@'localhost';
CREATE USER 'dolphinscheduler'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'dolphinscheduler'@'hdp3.node1';
CREATE DATABASE druid;
use druid;
CREATE USER 'druid'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'druid'@'%';
CREATE USER 'druid'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'druid'@'localhost';
CREATE USER 'druid'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'druid'@'hdp3.node1';
CREATE DATABASE superset;
use superset;
CREATE USER 'superset'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'%';
CREATE USER 'superset'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'localhost';
CREATE USER 'superset'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'hdp3.node1';
CREATE DATABASE ranger;
use ranger;
CREATE USER 'rangeradmin'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%';
CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost';
CREATE USER 'rangeradmin'@'hdp3.node1' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'hdp3.node1';
FLUSH PRIVILEGES;
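A quick way to confirm that all of the databases and accounts above exist (run from the shell on the master node, using the root password set earlier):
mysql -uroot -p123456 -e "SHOW DATABASES;"
mysql -uroot -p123456 -e "SELECT host, user FROM mysql.user WHERE user IN ('ambari','hive','oozie','hue','dolphinscheduler','druid','superset','rangeradmin') ORDER BY user, host;"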
systemctl restart mariadb
ambari-server setup
# (1) Prompt asking whether to customize the ambari-server user account. Enter: y
Customize user account for ambari-server daemon [y/n] (n)? y
# (2) User account for the ambari-server daemon. Pressing Enter accepts the default (root).
Enter user account for ambari-server daemon (root):
Adjusting ambari-server permissions and ownership...
# (3) Firewall check
Checking firewall...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
# Press Enter
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
#==============================================================================
# (4) Choose the JDK. Enter: 2
Enter choice (1): 2
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
# If you chose 2 (Custom JDK) above, you must provide JAVA_HOME. Enter: /opt/software/jdk1.8.0_311
Path to JAVA_HOME: /opt/software/jdk1.8.0_311
Validating JDK on Ambari Server...done.
GPL License for LZO: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
# Whether to enable the GPL-licensed LZO packages
Enable Ambari Server to download and install GPL Licensed LZO packages [y/n] (n)? y
Completing setup...
Configuring database...
# (5) Database configuration. Enter: y
Enter advanced database configuration [y/n] (n)? y
Configuring database...
#==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL/ MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
#==============================================================================
# (6) Choose the database type. Enter: 3
Enter choice (3): 3
# (7) Enter the database connection details. If a value matches the default shown in parentheses, just press Enter; otherwise type the actual value.
Hostname (localhost): hdp3.node1
Port (3306): 3306
Database name (ambari): ambari
Username (ambari): ambari
Enter Database Password (bigdata):123456
Re-Enter password: 123456
# (8) Import the Ambari database schema into the database
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql (this is the script that was already imported with source earlier)
Proceed with configuring remote database connection properties [y/n] (y)? y
Start the Ambari service - master node
systemctl start ambari-server
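Before moving on, confirm the server actually came up; ambari-server ships a status subcommand, and the default server log lives under /var/log/ambari-server:
ambari-server status
# If startup failed, check the log:
tail -n 100 /var/log/ambari-server/ambari-server.log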
Install ambari-agent - all 3 machines
yum -y install ambari-agent --nogpgcheck
systemctl start ambari-agent
sed -i 's/hostname=localhost/hostname=hdp3.node1/' /etc/ambari-agent/conf/ambari-agent.ini
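Because the agent was started before its configuration was edited, restart it on every node so that it registers against hdp3.node1 (the ambari-server host); the registration can be checked in the default agent log:
systemctl restart ambari-agent
tail -n 50 /var/log/ambari-agent/ambari-agent.log   # should show a successful registration with the server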
Install libtirpc-devel - all 3 machines
yum -y install libtirpc-devel
Cluster deployment via the web UI
Login page: http://hdp3.node1:8080
Log in with the default administrator account - username: admin, password: admin
Create the cluster
Enter the name of the cluster to create; any name will do.
After entering it, click NEXT to continue with the installation.
1 - Select the version and configure the yum repositories
2 - Select the version and switch to the local repository addresses
3 - Choose HDP-3.1 (Default Version Definition)
4 - Choose Use Local Repository
5 - Choose redhat7 (the OS here is CentOS 7, so Redhat7 is the matching entry; on a different OS, choose the corresponding one):
HDP-3.1: http://hdp3.node1/HDP/3.3.2.0-005/
HDP-3.1-GPL: http://hdp3.node1/HDP-GPL/gpl-3.3.2.0-005/
HDP-UTILS-1.1.0.22: http://hdp3.node1/HDP-UTILS/HDP-UTILS/centos7/1.1.0.22/
Configure nodes and the SSH key
Run the following command on the master node:
cat /root/.ssh/id_rsa
Paste the output into the designated field on the web page.
Then click CONTINUE.
You will then see the cluster being installed; wait for the installation to succeed and click NEXT.
Once installation has finished, click Next to complete the wizard.
Sqoop
Sqoop fails when accessing Hive:
Error:
INFO hive.HiveImport: Connecting to jdbc:hive2://hdp3.node1:2181,hdp3.node2:2181,hdp3.node3:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
ERROR tool.ImportTool: Import failed: java.io.IOException: Hive exited with status 2
        at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:253)
Solution:
1 - Create a beeline-hs2-connection.xml file in Hive's conf directory:
The Hive conf directory under HDP: /usr/hdp/3.3.2.0-005/hive/conf
vim /usr/hdp/3.3.2.0-005/hive/conf/beeline-hs2-connection.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>beeline.hs2.connection.user</name>
    <value>hive</value>
  </property>
  <property>
    <name>beeline.hs2.connection.password</name>
    <value>hive</value>
  </property>
</configuration>
2 - Create a temporary table stored as textfile
create table hive_db.hive_01( id string comment 'Id')
row format delimited fields terminated by '\001'
stored as textFile;
3 - Import the data into the temporary table (point the Sqoop import at the temporary table; see the sketch after step 4)
4 - Load the data from the temporary table into the target table with an insert-select
insert overwrite table hive_db.news_detail_hive select * from hive_db.news_detail_hive_01;
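For step 3, a sketch of what the Sqoop import pointed at the temporary table could look like; the JDBC URL, source database, table, and credentials below are placeholders for illustration, not values from this cluster:
sqoop import \
  --connect jdbc:mysql://hdp3.node1:3306/source_db \
  --username root --password 123456 \
  --table news_detail \
  --hive-import \
  --hive-table hive_db.hive_01 \
  --fields-terminated-by '\001' \
  -m 1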
HDFS permission checking is controlled by the following property (set it to false in the HDFS configuration if you need to disable permission checks):
dfs.permissions.enabled
YARN can only use 50% of the cluster's resources
Increase the value of the following property (default 0.2; adjust to 0.3-0.5):
yarn.scheduler.capacity.maximum-am-resource-percent
Hue
Hue uses a PostgreSQL database by default; switch it to MySQL and fill in the corresponding parameters during installation.
SeaTunnel
Hue must be installed beforehand.
Dolphin Scheduler
Default login username and password:
admin
dolphinscheduler123
Connecting to HBase with Phoenix
/usr/hdp/3.3.2.0-005/phoenix/bin/sqlline.py hdp3.node1:2181
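Once sqlline connects, a couple of standard sqlline/Phoenix commands make a quick smoke test (run inside the sqlline prompt):
!tables
SELECT * FROM SYSTEM.CATALOG LIMIT 5;
!quit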
Deleting a service via the Ambari API
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo": {"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://hdp3.node1:8080/api/v1/clusters/cluster/services/RANGER
# hdp3.node1 is the host where ambari-server is installed, "cluster" is the cluster name, and RANGER is the service to remove.
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://hdp3.node1:8080/api/v1/clusters/cluster/services/RANGER
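Before stopping and deleting, the current state of the service can be checked with a GET against the same endpoint (same host, cluster name, and service name as above):
curl -u admin:admin -H "X-Requested-By: ambari" http://hdp3.node1:8080/api/v1/clusters/cluster/services/RANGER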
Killing a YARN application
yarn application -kill application_1704037136405_0001
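To find the application id to kill, list the applications first:
yarn application -list
yarn application -list -appStates RUNNING   # only running applications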
That concludes these notes on setting up HiDataPlus 3.3.2-005 on x86; hopefully they are useful to anyone building the same cluster.