1. Introduction
Atlas provides open metadata management and governance: it builds table-to-table lineage and supports classifying both tables and the processes that create them. As a platform's data volume grows, metadata management becomes critical; metadata captures where data comes from, where it flows, and what it depends on. Atlas effectively solves this metadata management problem.
2. Download the Source
mkdir /opt/software
cd /opt/software
wget https://archive.apache.org/dist/atlas/2.0.0/apache-atlas-2.0.0-sources.tar.gz
tar xzvf apache-atlas-2.0.0-sources.tar.gz
3. Build
1. Build Notes
My cluster runs CDH 6.2.1, so the component versions in Atlas should be aligned with the CDH 6.2.1 components. Edit the following properties in pom.xml in the source root (building without these changes should also work, since Atlas 2.0.0's defaults are close to the CDH6 versions; I have done it that way before):
<hadoop.version>3.0.0</hadoop.version>
<hbase.version>2.1.0</hbase.version>
<kafka.version>2.1.0</kafka.version>
<zookeeper.version>3.4.5</zookeeper.version>
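A quick way to confirm the edits took effect is to grep the top-level pom.xml for the four properties (path taken from the download step above):
grep -E '<(hadoop|hbase|kafka|zookeeper)\.version>' /opt/software/apache-atlas-sources-2.0.0/pom.xml
# expect one line per component showing the CDH-aligned version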
2. Build Environment
Building Atlas 2.0.0 requires:
- JDK 8u151 or later
- Maven 3.5.0 or later
- Python 2.7 (ships with CentOS 7.5, no installation needed)
If your environment does not meet these requirements, relax the version checks in pom.xml in the source root:
<requireMavenVersion>
  <version>3.5.0</version>
  <message>** MAVEN VERSION ERROR ** Maven 3.5.0 or above is required. See https://maven.apache.org/install.html </message>
</requireMavenVersion>
<requireJavaVersion>
  <level>ERROR</level>
  <version>1.8.0-151</version>
  <message>** JAVA VERSION ERROR ** Java 8 (Update 151) or above is required.</message>
</requireJavaVersion>
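Before touching the version checks, it is worth confirming what the build host actually has; if both commands report versions at or above the requirements, pom.xml can stay as-is:
java -version
mvn -v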
3. Maven Setup
1) Download Maven: https://maven.apache.org/download.cgi
2) Upload apache-maven-3.6.1-bin.tar.gz to /opt/software on the Linux host
3) Extract apache-maven-3.6.1-bin.tar.gz into /opt/module/
[root@hadoop102 software]# tar -zxvf apache-maven-3.6.1-bin.tar.gz -C /opt/module/
4) Rename apache-maven-3.6.1 to maven
[root@hadoop102 module]# mv apache-maven-3.6.1/ maven
5) Add the environment variables to /etc/profile
[root@hadoop102 module]# vim /etc/profile
#MAVEN_HOME
export MAVEN_HOME=/opt/module/maven
export PATH=$PATH:$MAVEN_HOME/bin
6) Verify the installation
[root@hadoop102 module]# source /etc/profile
[root@hadoop102 module]# mvn -v
7) Edit settings.xml to point at the Aliyun mirror
[root@hadoop102 module]# cd /opt/module/maven/conf/
[root@hadoop102 maven]# vim settings.xml
Add the following inside the <mirrors> tag:
<!-- Aliyun mirror -->
<mirror>
  <id>nexus-aliyun</id>
  <mirrorOf>central</mirrorOf>
  <name>Nexus aliyun</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
<mirror>
  <id>UK</id>
  <name>UK Central</name>
  <url>http://uk.maven.org/maven2</url>
  <mirrorOf>central</mirrorOf>
</mirror>
<mirror>
  <id>repo1</id>
  <mirrorOf>central</mirrorOf>
  <name>Human Readable Name for this Mirror.</name>
  <url>http://repo1.maven.org/maven2/</url>
</mirror>
<mirror>
  <id>repo2</id>
  <mirrorOf>central</mirrorOf>
  <name>Human Readable Name for this Mirror.</name>
  <url>http://repo2.maven.org/maven2/</url>
</mirror>
8) Create a .m2 directory under /root (I build as root, so downloaded jars land in /root/.m2)
[root@hadoop102 ~]# mkdir .m2
4. Build and Install
1. Build Tweaks
Atlas can run with embedded HBase and Solr, or use external ones. My cluster already provides both, so to save build time, edit apache-atlas-sources-2.0.0/distro/src/conf/atlas-env.sh before building:
# indicates whether or not a local instance of HBase should be started for Atlas
export MANAGE_LOCAL_HBASE=false
# indicates whether or not a local instance of Solr should be started for Atlas
export MANAGE_LOCAL_SOLR=false
# indicates whether or not cassandra is the embedded backend for Atlas
export MANAGE_EMBEDDED_CASSANDRA=false
# indicates whether or not a local instance of Elasticsearch should be started for Atlas
export MANAGE_LOCAL_ELASTICSEARCH=false
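To double-check the edit before building, grep for the four flags; all of them should print false:
grep -E '^export MANAGE_' /opt/software/apache-atlas-sources-2.0.0/distro/src/conf/atlas-env.sh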
2. Build Commands
Note: the build commands must be run from the source root (cd into apache-atlas-sources-2.0.0):
export MAVEN_OPTS="-Xms4g -Xmx4g"
mvn clean -DskipTests -Drat.skip=true install
mvn clean -DskipTests package -Drat.skip=true -Pdist
The build output is in /opt/software/apache-atlas-sources-2.0.0/distro/target.
Extract apache-atlas-2.0.0-bin.tar.gz into /opt/module and rename the directory to atlas.
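A minimal sketch of that unpack-and-rename step (the extracted directory name apache-atlas-2.0.0 is an assumption; check what the tarball actually produces):
cd /opt/software/apache-atlas-sources-2.0.0/distro/target
tar -xzvf apache-atlas-2.0.0-bin.tar.gz -C /opt/module/
mv /opt/module/apache-atlas-2.0.0 /opt/module/atlas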
If you need the jars I built, contact me and I will send you the tarball for free (link below):
链接:https://pan.baidu.com/s/1oDy6w9DaltwOta6dbR-ETw
提取码:389m
5. Integrating CDH Components
1. Edit atlas-log4j.xml
vim /opt/module/atlas/conf/atlas-log4j.xml
# Uncomment the following block
<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
  <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
  <param name="datePattern" value="'.'yyyy-MM-dd" />
  <param name="append" value="true" />
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d|%t|%m%n" />
  </layout>
</appender>
<logger name="org.apache.atlas.perf" additivity="false">
  <level value="debug" />
  <appender-ref ref="perf_appender" />
</logger>
2. Change the HTTP Port
vim /opt/module/atlas/conf/atlas-application.properties
# Change the following settings
atlas.server.http.port=21001
atlas.rest.address=http://node01:21001
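Once Atlas is running later on, a quick way to confirm the new port took effect is the admin version endpoint (admin/admin are the default credentials used throughout this article):
curl -u admin:admin http://node01:21001/api/atlas/admin/version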
3. Integrating HBase
vim /opt/module/atlas/conf/atlas-application.properties
# Settings (point the Atlas data store at the right hosts)
atlas.graph.storage.hostname=node01:2181,node02:2181,node03:2181
# Symlink HBase's config files into Atlas's conf/hbase directory
ln -s /etc/hbase/conf/ /opt/module/atlas/conf/hbase/
vim /opt/module/atlas/conf/atlas-env.sh
# Add the HBase config path (so Atlas can locate the HBase configuration)
export HBASE_CONF_DIR=/opt/module/atlas/conf/hbase/conf
# Back in atlas-application.properties: set the address so Atlas is reachable externally
# (21001 to match the port configured above)
atlas.rest.address=http://node01:21001
# HBase access for the audit store
atlas.audit.hbase.zookeeper.quorum=node01:2181,node02:2181,node03:2181
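After the first successful start, you can confirm that Atlas created its storage tables in HBase. The table names below are my assumption of the Atlas 2.0.0 defaults; verify them against your atlas-application.properties:
echo "list" | hbase shell
# expect tables like apache_atlas_janus and apache_atlas_entity_audit in the output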
4. Integrating Solr
vim /opt/module/atlas/conf/atlas-application.properties
# Change the following (so Atlas can find the Solr cluster through ZooKeeper)
atlas.graph.index.search.solr.zookeeper-url=node01:2181/solr
# Copy the solr folder from Atlas's conf directory into the Solr directory, rename it,
# then push it to every node (so the Solr cluster can read Atlas's Solr configuration)
cp -r /opt/module/atlas/conf/solr /opt/cloudera/parcels/CDH/lib/solr/
cd /opt/cloudera/parcels/CDH/lib/solr/
mv solr atlas_solr
scp -r /opt/cloudera/parcels/CDH/lib/solr/atlas_solr node02:/opt/cloudera/parcels/CDH/lib/solr/
scp -r /opt/cloudera/parcels/CDH/lib/solr/atlas_solr node03:/opt/cloudera/parcels/CDH/lib/solr/
# Create the Solr collections that will hold the Atlas indexes
/opt/cloudera/parcels/CDH/lib/solr/bin/solr create -c vertex_index -d /opt/cloudera/parcels/CDH/lib/solr/atlas_solr -force -shards 3 -replicationFactor 2
/opt/cloudera/parcels/CDH/lib/solr/bin/solr create -c edge_index -d /opt/cloudera/parcels/CDH/lib/solr/atlas_solr -force -shards 3 -replicationFactor 2
/opt/cloudera/parcels/CDH/lib/solr/bin/solr create -c fulltext_index -d /opt/cloudera/parcels/CDH/lib/solr/atlas_solr -force -shards 3 -replicationFactor 2
# If you need to delete a collection, use:
/opt/cloudera/parcels/CDH/lib/solr/bin/solr delete -c vertex_index
/opt/cloudera/parcels/CDH/lib/solr/bin/solr delete -c edge_index
/opt/cloudera/parcels/CDH/lib/solr/bin/solr delete -c fulltext_index
Log in to the Solr web console at http://node01:8983/solr/#/~cloud and you should see the three collections in the cloud view (screenshot omitted).
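If you prefer the command line to the web console, the SolrCloud Collections API gives the same answer; the three index collections created above should be listed:
curl "http://node01:8983/solr/admin/collections?action=LIST&wt=json"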
5. Integrating Kafka
vim /opt/module/atlas/conf/atlas-application.properties
# Settings
# Notification settings (use the cluster's Kafka rather than the embedded one)
atlas.notification.embedded=false
# ZooKeeper connection
atlas.kafka.zookeeper.connect=node01:2181,node02:2181,node03:2181
atlas.kafka.bootstrap.servers=node01:9092,node02:9092,node03:9092
# Timeouts
atlas.kafka.zookeeper.session.timeout.ms=4000
atlas.kafka.zookeeper.connection.timeout.ms=2000
# Auto-commit offsets
atlas.kafka.enable.auto.commit=true
# Create the topics (you can skip this and let Kafka auto-create them, but auto-created
# topics get only 1 partition, so in production create them explicitly)
kafka-topics --zookeeper node02:2181 --create --replication-factor 2 --partitions 3 --topic ATLAS_HOOK
kafka-topics --zookeeper node02:2181 --create --replication-factor 2 --partitions 3 --topic ATLAS_ENTITIES
kafka-topics --zookeeper node02:2181 --create --replication-factor 3 --partitions 3 --topic _HOATLASOK
# Some guides do not create _HOATLASOK; I created it but never saw data on it while
# monitoring, so it is probably unnecessary
# List the topics to verify
kafka-topics --zookeeper node01:2181 --list
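To watch the hook traffic later (after the Hive integration below), a console consumer on ATLAS_ENTITIES is handy; messages should appear whenever Hive DDL/DML runs:
kafka-console-consumer --bootstrap-server node01:9092 --topic ATLAS_ENTITIES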
At this point, try starting Atlas to see whether it comes up. If it starts and http://node01:21001 is reachable, everything so far is fine; stop Atlas and move on to the Hive integration. If it does not start, fix the problem before integrating Hive.
# Start Atlas
cd /opt/module/atlas/bin
./atlas_start.py
# Watch the startup log
cd /opt/module/atlas/logs
tail -100f application.log
# Stop Atlas
cd /opt/module/atlas/bin
./atlas_stop.py
6. Integrating Hive
- Add atlas-application.properties into atlas-plugin-classloader-2.0.0.jar under atlas-2.0.0/hook/hive (before the change the jar is about 17500 bytes, afterwards about 21000; if in doubt, back up the jar first)
## You must run zip from this directory so the file lands at the jar's top level
cd /opt/module/atlas/conf/
zip -u /opt/module/atlas/hook/hive/atlas-plugin-classloader-2.0.0.jar atlas-application.properties
# Copy the config file into Hive's config directory
cp atlas-application.properties /etc/hive/conf
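To verify the properties file really landed at the top level of the jar (and not under a directory prefix), list the archive:
unzip -l /opt/module/atlas/hook/hive/atlas-plugin-classloader-2.0.0.jar | grep atlas-application.properties
# the entry should appear as atlas-application.properties with no leading path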
- In Cloudera Manager, search for hive-site and edit the related settings
- Hive Service Environment Advanced Configuration Snippet (Safety Valve)
- Auxiliary JARs
# hive.exec.post.hooks (statement-level hooks, run after each complete HQL statement finishes;
# this is the critical piece: the hook parses the SQL and publishes it to Kafka, which is how
# Atlas builds table-level and column-level lineage)
Name: hive.exec.post.hooks
Value: org.apache.atlas.hive.hook.HiveHook,org.apache.hadoop.hive.ql.hooks.LineageLogger
# hive.reloadable.aux.jars.path
Name: hive.reloadable.aux.jars.path
Value: /opt/module/atlas/hook/hive
# HIVE_AUX_JARS_PATH
HIVE_AUX_JARS_PATH=/opt/module/atlas/hook/hive
After changing the configuration, remember to restart Hive; once it is back up, do the following:
# Distribute the configured Atlas to every node
scp -r /opt/module/atlas node02:/opt/module/
scp -r /opt/module/atlas node03:/opt/module/
# Copy the Atlas config file to /etc/hive/conf (on every node of the cluster)
cp /opt/module/atlas/conf/atlas-application.properties /etc/hive/conf
- Add the environment variables
vi /etc/profile
# Add the following
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
export HIVE_CONF_DIR=/etc/hive/conf
export PATH=$HIVE_HOME/bin:$PATH
# Apply the changes
source /etc/profile
- Import the Hive metadata
cd /opt/module/atlas/bin
./import-hive.sh
The username and password are both admin.
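As a smoke test of the whole hook chain, run a small CTAS; the table names here are made up for illustration. If everything is wired up, the statement flows through HiveHook into the ATLAS_HOOK topic and the Atlas UI shows lineage from the source table to the new one:
hive -e "CREATE TABLE lineage_src (id INT); CREATE TABLE lineage_dst AS SELECT * FROM lineage_src;"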
7. Start Atlas
# Start Atlas
cd /opt/module/atlas/bin
./atlas_start.py
# Watch the startup log
cd /opt/module/atlas/logs
tail -100f application.log
- Lineage graph after the Azkaban-scheduled jobs finished (screenshot omitted)
6. Problems Encountered
1. Build-time Error
- The build failed trying to reach http://repo.typesafe.com/typesafe/releases, which indeed no longer opens (I tried it, and found no reports of a similar build error). The cause: pom.xml in the source root declares a preferred repository pointing at that address. Delete the tag from pom.xml and rebuild.
# Tag to delete
<repository>
  <id>typesafe</id>
  <name>Typesafe Repository</name>
  <url>http://repo.typesafe.com/typesafe/releases/</url>
</repository>
2. import-hive.sh Error
09:40:16.944 [main] ERROR org.apache.atlas.hive.bridge.HiveMetaStoreBridge - Import failed
org.apache.atlas.AtlasException: Failed to load application properties
	at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:134) ~[atlas-intg-2.0.0.jar:2.0.0]
	at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:86) ~[atlas-intg-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:120) [hive-bridge-2.0.0.jar:2.0.0]
Caused by: org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source null
	at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:259) ~[commons-configuration-1.10.jar:1.10]
	at org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:238) ~[commons-configuration-1.10.jar:1.10]
	at org.apache.commons.configuration.AbstractFileConfiguration.<init>(AbstractFileConfiguration.java:197) ~[commons-configuration-1.10.jar:1.10]
	at org.apache.commons.configuration.PropertiesConfiguration.<init>(PropertiesConfiguration.java:284) ~[commons-configuration-1.10.jar:1.10]
	at org.apache.atlas.ApplicationProperties.<init>(ApplicationProperties.java:69) ~[atlas-intg-2.0.0.jar:2.0.0]
	at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:125) ~[atlas-intg-2.0.0.jar:2.0.0]
	... 2 more
Failed to import Hive Meta Data!!!
Solution
The atlas-application.properties copied to /etc/hive/conf had been deleted when Hive was restarted; copy it back:
cp /opt/module/atlas/conf/atlas-application.properties /etc/hive/conf
3. import-hive.sh Error
Enter username for atlas :- admin
Enter password for atlas :-
09:57:11.476 [main] ERROR org.apache.atlas.hive.bridge.HiveMetaStoreBridge - Import failed
org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasClientV2$API_V2@57540fd0 failed with status 500 (Internal Server Error) Response Body (There was an error processing your request. It has been logged (ID 84e9bade4a7a315c).)
	at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:427) ~[atlas-client-common-2.0.0.jar:2.0.0]
	at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:353) ~[atlas-client-common-2.0.0.jar:2.0.0]
	at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:246) ~[atlas-client-common-2.0.0.jar:2.0.0]
	at org.apache.atlas.AtlasClientV2.getEntityByAttribute(AtlasClientV2.java:285) ~[atlas-client-v2-2.0.0.jar:2.0.0]
	at org.apache.atlas.AtlasClientV2.getEntityByAttribute(AtlasClientV2.java:276) ~[atlas-client-v2-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.findEntity(HiveMetaStoreBridge.java:780) ~[hive-bridge-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.findDatabase(HiveMetaStoreBridge.java:745) ~[hive-bridge-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerDatabase(HiveMetaStoreBridge.java:399) ~[hive-bridge-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:281) ~[hive-bridge-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:251) ~[hive-bridge-2.0.0.jar:2.0.0]
	at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:168) [hive-bridge-2.0.0.jar:2.0.0]
Failed to import Hive Meta Data!!!
Solution: the password was wrong; re-enter it (username/password: admin/admin).
4. Hive CLI Query Error
hive> show databases;
hive.exec.post.hooks Class not found: org.apache.atlas.hive.hook.HiveHook
FAILED: Hive Internal Error: java.lang.ClassNotFoundException(org.apache.Atlas-2.0.0.hive.hook.HiveHook)
java.lang.ClassNotFoundException: org.apache.Atlas-2.0.0.hive.hook.HiveHook
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.hive.ql.hooks.HooksLoader.getHooks(HooksLoader.java:103)
	at org.apache.hadoop.hive.ql.hooks.HooksLoader.getHooks(HooksLoader.java:64)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1956)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1328)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:836)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:772)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
- Solution
The atlas-application.properties in /etc/hive/conf is deleted whenever Hive restarts, so every time you hit "hive.exec.post.hooks Class not found: org.apache.atlas.hive.hook.HiveHook", check whether /etc/hive/conf/atlas-application.properties is still there:
cp /opt/module/atlas/conf/atlas-application.properties /etc/hive/conf
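Since Cloudera Manager regenerates /etc/hive/conf on restarts, a tiny guard script (hypothetical, adapt the paths to your layout) can put the file back automatically, for example from cron or as a post-restart step:
#!/bin/bash
# restore the Atlas hook config if a Hive restart wiped it
SRC=/opt/module/atlas/conf/atlas-application.properties
DST=/etc/hive/conf/atlas-application.properties
[ -f "$DST" ] || cp "$SRC" "$DST"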
5. Error When Running a Shell Script
Exception in thread "main" java.lang.ExceptionInInitializerError
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.atlas.hive.hook.HiveHook.initialize(HiveHook.java:72)
	at org.apache.atlas.hive.hook.HiveHook.<init>(HiveHook.java:41)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.lang.Class.newInstance(Class.java:442)
	at org.apache.hadoop.hive.ql.hooks.HooksLoader.getHooks(HooksLoader.java:104)
	at org.apache.hadoop.hive.ql.hooks.HooksLoader.getHooks(HooksLoader.java:64)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1956)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1328)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:342)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:800)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:772)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: java.lang.NullPointerException
	at org.apache.atlas.hook.AtlasHook.<clinit>(AtlasHook.java:81)
	... 28 more
- Solution
Again, the atlas-application.properties copied to /etc/hive/conf had been deleted by a Hive restart:
cp /opt/module/atlas/conf/atlas-application.properties /etc/hive/conf
6. Error When Connecting Through HiveServer2
Could not initialize class org.apache.atlas.hive.hook.HiveHook
Solution:
This relates to the configuration added to Hive's auxiliary jars. If Hive throws this error after you added it, check that the properties file was packed into the jar exactly as follows:
## You must run zip from this directory so the file lands at the jar's top level
cd /opt/module/atlas/conf/
zip -u /opt/module/atlas/hook/hive/atlas-plugin-classloader-2.0.0.jar atlas-application.properties
7. Startup Log Errors on the Production Cluster
2020-08-31 17:02:17,232 WARN - [ReadOnlyZKClient-localhost:2181@0x41948c13:] ~ 0x41948c13 to localhost:2181 failed for get of /hbase/meta-region-server, code = CONNECTIONLOSS, retries = 26 (ReadOnlyZKClient$ZKTask$1:189)
2020-08-31 17:02:18,232 WARN - [ReadOnlyZKClient-localhost:2181@0x41948c13-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:1089)
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
Solution:
The test cluster had a local ZooKeeper, but the production cluster does not (so Atlas fell back to localhost:2181); the quorum must be configured explicitly before Atlas can find HBase:
vim /opt/module/atlas/conf/atlas-application.properties
atlas.audit.hbase.zookeeper.quorum=node04:2181,node02:2181,node03:2181
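A quick way to confirm each quorum member is reachable from the Atlas host is ZooKeeper's four-letter ruok command (available out of the box on the ZooKeeper 3.4.x that CDH6 ships):
echo ruok | nc node04 2181
# a healthy server answers imok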
Articles I referenced while setting this up:
https://blog.csdn.net/IT142546355/article/details/107884900#comments_13146081
https://www.codeleading.com/article/83754484002/
There are no insurmountable difficulties, only fearful hearts. Life shines because hardship and glory arrive together. So do not fear temporary setbacks; even without applause, give it your all and persist with grace. Believe this: however steep the mountain, it always leaves a path for the brave, and as long as you keep walking, the road will stretch out beneath your feet.