This article shows how to connect to Spark SQL with Beeline through the Spark Thrift Server.
1. Add the following properties to $SPARK_HOME/conf/hive-site.xml
vi $SPARK_HOME/conf/hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>192.168.56.101</value>
    <description>Bind host on which to run the HiveServer2 Thrift service.</description>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10001</value>
    <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
  </property>
  <property>
    <name>hive.server2.thrift.min.worker.threads</name>
    <value>5</value>
    <description>Minimum number of Thrift worker threads</description>
  </property>
  <property>
    <name>hive.server2.thrift.max.worker.threads</name>
    <value>500</value>
    <description>Maximum number of Thrift worker threads</description>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
  </property>
</configuration>
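Note: if you prefer not to hard-code the HiveServer2 host and port in hive-site.xml, Spark's start-thriftserver.sh also accepts them as --hiveconf overrides when the Thrift server is started in step 4. A minimal sketch using the same values as above:
$SPARK_HOME/sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.bind.host=192.168.56.101 \
  --hiveconf hive.server2.thrift.port=10001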
2. Copy the MySQL JDBC driver to $SPARK_HOME/lib/
cp mysql-connector-java-5.1.31-bin.jar $SPARK_HOME/lib/
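This lib/ layout matches older Spark 1.x releases. If your build has no lib/ directory (Spark 2.x and later keep jars in jars/), a common alternative is to hand the driver to the Thrift server explicitly; a sketch, assuming the jar sits in /home/grid (hypothetical path):
$SPARK_HOME/sbin/start-thriftserver.sh \
  --jars /home/grid/mysql-connector-java-5.1.31-bin.jar \
  --driver-class-path /home/grid/mysql-connector-java-5.1.31-bin.jar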
3. Start the Hive metastore service
hive --service metastore > /tmp/grid/hive_metastore.log 2>&1 &
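Before moving on, it is worth confirming that the metastore actually came up; a quick sanity check (the metastore typically shows up as a RunJar process and listens on the port configured in hive.metastore.uris, 9083 here):
jps | grep RunJar
netstat -lnt | grep 9083
tail -n 50 /tmp/grid/hive_metastore.log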
4. Start the Spark Thrift Server
$SPARK_HOME/sbin/start-thriftserver.sh --master spark://192.168.56.101:7077 --executor-memory 30g
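If Beeline later fails to connect, first confirm the Thrift server is listening on the configured port and inspect its daemon log (the exact log file name under $SPARK_HOME/logs varies with user, host and Spark version, so the wildcard below is only an approximation):
netstat -lnt | grep 10001
tail -f $SPARK_HOME/logs/spark-*HiveThriftServer2*.out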
5. Log in with Beeline
$SPARK_HOME/bin/beeline -u jdbc:hive2://192.168.56.101:10001/
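A short sample session follows; the user name grid and the table name test_table are placeholders for whatever exists in your environment (with hive.server2.enable.doAs=false the queries run as the user who started the Thrift server, though Beeline may still ask for a login name):
$SPARK_HOME/bin/beeline -u jdbc:hive2://192.168.56.101:10001/ -n grid
0: jdbc:hive2://192.168.56.101:10001/> show databases;
0: jdbc:hive2://192.168.56.101:10001/> select count(*) from test_table;
0: jdbc:hive2://192.168.56.101:10001/> !quit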
Original article: https://blog.csdn.net/wzy0623/article/details/50999197