This article shows how to query Hive table data with Spark SQL from Java, and is meant as a practical reference for developers tackling the same problem.
In short: the job reads its query SQL and output location from the command-line arguments, runs the query against Hive through a SparkSession, and writes the result out to HDFS.
Project code
package test;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.sql.SparkSession;

public class SparkSQLJob {

    // Output format and base HDFS directory for the query result.
    private static final String WRITE_FORMAT = "csv";
    private static final String HDFS = "/data/sparksearch/";
    private static final Logger LOG = Logger.getLogger(SparkSQLJob.class);

    public static void main(String[] args) throws InterruptedException {
        LOG.setLevel(Level.INFO);

        // Expect four arguments: AppName, querySQL, savePath, LogPath.
        if (args == null || args.length < 4) {
            LOG.error("please input four parameters: AppName, querySQL, savePath, LogPath!");
            return;
        }

        SparkSQLJob searchObj = new SparkSQLJob();
        String querySQL = args[1];
        LOG.info("querySQL = " + querySQL);
        // (the snippet ends here in the original post)
That concludes this article on querying Hive table data with Spark SQL from Java; we hope it proves helpful to developers.