This article introduces 1.7.1 Big Data - HUE Visualization Software Installation, and hopefully provides some useful reference for developers working on this problem. Interested developers, follow along and learn together!
Version
hue-3.9.0-cdh5.5.0
Download and extract
http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0.tar.gz
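For example, fetch the tarball with wget (any download tool works; direct network access from the node is assumed):
wget http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0.tar.gz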
tar -zxf hue-3.9.0-cdh5.5.0.tar.gz -C /opt/modules
Build
- Set the networked VM to connect automatically
- Switch to the root user
- Install the required dependency packages
yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel
Build from the Hue root directory
make apps
Switch back to the kfk user and grant permissions
sudo chmod -R 777 hue-3.9.0-cdh5.5.0/
Configuration
Reference: http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0/manual.html
/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

# Webserver listens on this address and port
http_host=bigdata-pro03.kfk.com
http_port=8888

# Time zone name
time_zone=Asia/Shanghai
Start the service
[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor
Log in
http://bigdata-pro03.kfk.com:8888/
Username / password: kfk / kfk (the first account to log in becomes the Hue admin).
HDFS integration
/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini
fs_defaultfs=hdfs://ns
webhdfs_url=http://bigdata-pro01.kfk.com:50070/webhdfs/v1
hadoop_conf_dir=/opt/modules/hadoop-2.5.0/etc/hadoop
hadoop_bin=/opt/modules/hadoop-2.5.0/bin
hadoop_hdfs_home=/opt/modules/hadoop-2.5.0
Configure on all three nodes:
hadoop-2.5.0/etc/hadoop/core-site.xml
Without this configuration you will see a permissions warning:
Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
In hue.ini, set the default HDFS superuser:
default_hdfs_superuser=kfk
<!-- hue -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
Restart the services
[kfk@bigdata-pro01 hadoop-2.5.0]$ sbin/stop-all.sh
[kfk@bigdata-pro01 hadoop-2.5.0]$ sbin/start-all.sh
[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor
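To verify the WebHDFS endpoint and proxy-user settings, a quick check from the Hue node (LISTSTATUS on the root path; op and user.name are standard WebHDFS parameters):
curl "http://bigdata-pro01.kfk.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=kfk"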
Integration error: Address already in use
Fix
[kfk@bigdata-pro03 lib]$ ps -a
  PID TTY          TIME CMD
12991 pts/0 00:00:00 vim
18707 pts/0 00:03:00 java
18851 pts/0 00:00:00 bash
18864 pts/0 00:00:04 java
22839 pts/2 00:00:00 su
22844 pts/2 00:00:00 bash
27001 pts/0 00:00:00 supervisor
27007 pts/0 00:00:10 hue
27864 pts/1 00:00:00 vim
27964 pts/3 00:00:05 java
28058 pts/1 00:00:00 ps
Kill the process: kill -9 27001
Option 2: if repeated restarts left processes that were not killed cleanly, use this to find the Hue supervisor:
[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ lsof -i
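To look up the port directly instead (Hue's default web port 8888 is assumed):
[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ lsof -i :8888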
Problem: StandbyException: Operation category READ is not supported in state standby
The restart triggered a NameNode failover, so the active NameNode changed; update the WebHDFS URL accordingly:
/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini
webhdfs_url=http://bigdata-pro02.kfk.com:50070/webhdfs/v1
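To confirm which NameNode is currently active before editing hue.ini (nn1/nn2 are placeholder NameNode IDs; use the values of dfs.ha.namenodes.ns from hdfs-site.xml):
[kfk@bigdata-pro01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn1
[kfk@bigdata-pro01 hadoop-2.5.0]$ bin/hdfs haadmin -getServiceState nn2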
YARN integration
resourcemanager_host=rs

# The port where the ResourceManager IPC listens on
resourcemanager_port=8032

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://bigdata-pro02.kfk.com:8088

# URL of the ProxyServer API
proxy_api_url=http://bigdata-pro02.kfk.com:8088

# URL of the HistoryServer API
history_server_api_url=http://bigdata-pro02.kfk.com:19888
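A quick way to confirm the ResourceManager REST API is reachable from the Hue node (/ws/v1/cluster/info is the standard YARN endpoint):
curl http://bigdata-pro02.kfk.com:8088/ws/v1/cluster/info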
Hive integration
[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
## hive_server_host=localhost
hive_server_host=bigdata-pro03.kfk.com

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/modules/hive-0.13.1-bin/conf
Start HiveServer2: nohup bin/hiveserver2 &
HiveServer2 (HS2) is a server interface that lets remote clients run queries against Hive and get the results back. The current implementation, based on Thrift RPC, is an improved version of HiveServer and supports multi-client concurrency and authentication.
hive-site.xml:
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>bigdata-pro03.kfk.com</value>
</property>
hadoop core-site.xml
<property>
  <name>hadoop.proxyuser.kfk.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.kfk.groups</name>
  <value>*</value>
</property>
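To check that HiveServer2 accepts connections before wiring it into Hue, a Beeline session can be opened (beeline ships with Hive; kfk is simply the cluster account used throughout this setup):
bin/beeline -u jdbc:hive2://bigdata-pro03.kfk.com:10000 -n kfk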
MySQL integration
[[[mysql]]]

# Name to show in the UI.
nice_name="MySQL-Sky"

# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=metastore

# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql

# IP or hostname of the database to connect to.
host=bigdata-pro01.kfk.com

# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
## port=3306

# Username to authenticate with when connecting to the database.
user=root

# Password matching the username to authenticate with when
# connecting to the database.
password=123456
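Before pointing Hue at the database, you can confirm it is reachable from the Hue node with the plain mysql client (host, database, and credentials as configured above):
mysql -h bigdata-pro01.kfk.com -P 3306 -u root -p metastore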
HBase integration
Start the Thrift service
bin/hbase-daemon.sh start thrift
[hbase]

# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|bigdata-pro01.kfk.com:9090)

# HBase configuration directory, where hbase-site.xml is located.
hbase_conf_dir=/opt/modules/hbase-0.98.6-cdh5.3.0/conf
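A quick check that the Thrift server is listening on its default port 9090 (netstat is just one option; any port-inspection tool works):
netstat -tlnp | grep 9090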
Other notes
The version below corresponds to Hue 4.2, an alternative build: Hive queries get auto-completion and a progress bar.
tar -zxf hue-3.9.0-cdh5.12.1.tar.gz
That concludes this article on 1.7.1 Big Data - HUE Visualization Software Installation. We hope the articles we recommend are helpful to developers!