Spark (null) entry in command string: null chmod 0700

2024-04-16 18:48

This article covers the Spark error "(null) entry in command string: null chmod 0700", in the hope that it offers some reference value to developers troubleshooting the same problem. Follow along to see the fix!

When submitting Spark SQL from Eclipse, the following error is thrown:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\xxxxx\AppData\Local\Temp\xxxxx\e8f9ea60-b5b4-40e2-8aa4-8ce5e50104b0;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:108)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:216)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.isTemporaryTable(SessionCatalog.scala:731)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.isRunningDirectlyOnFiles(Analyzer.scala:749)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:683)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at com.trinasolar.spark_24.App.main(App.java:37)

The root cause is a missing winutils.exe: on Windows, Spark/Hive shells out to winutils.exe to chmod its temporary directory, and because hadoop.home.dir is unset the command resolves to the literal "null chmod 0700". The fix is to set the hadoop.home.dir property in your program, e.g. System.setProperty("hadoop.home.dir", "D:\\eclipse-workspace\\spark-24");, and place winutils.exe under the bin subdirectory of that path.
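For reference, here is a minimal sketch of where that call belongs; the path, app name, and query are illustrative placeholders, not taken from the original program:

import org.apache.spark.sql.SparkSession;

public class App {
    public static void main(String[] args) {
        // Must run before the SparkSession (and its Hive client) is created;
        // D:\eclipse-workspace\spark-24\bin must contain winutils.exe.
        System.setProperty("hadoop.home.dir", "D:\\eclipse-workspace\\spark-24");

        SparkSession spark = SparkSession.builder()
                .appName("spark-24-demo")   // placeholder name
                .master("local[*]")         // running locally from Eclipse
                .enableHiveSupport()        // the failing chmod targets the Hive scratch dir
                .getOrCreate();

        spark.sql("show databases").show(); // any Spark SQL statement triggers the analyzer
        spark.stop();
    }
}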

If a version mismatch is reported, download a few different winutils builds and test which one runs correctly.
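A quick way to check whether a particular winutils.exe build runs on your machine at all is to invoke it directly from a command prompt (hypothetical path shown):

D:\eclipse-workspace\spark-24\bin>winutils.exe ls C:\tmp

An incompatible build (wrong bitness or missing runtime DLLs) fails immediately, while a working one prints a POSIX-style permission listing for the directory.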

Download: https://github.com/steveloughran/winutils

That wraps up this article on Spark (null) entry in command string: null chmod 0700. Hopefully it helps other developers hitting the same error!



