Spark (null) entry in command string: null chmod 0700

2024-04-16 18:48

This article covers the Spark error "(null) entry in command string: null chmod 0700" and how to resolve it, as a reference for developers who run into the same problem.

When submitting a Spark SQL job from Eclipse, the following error is reported:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\xxxxx\AppData\Local\Temp\xxxxx\e8f9ea60-b5b4-40e2-8aa4-8ce5e50104b0;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:108)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:216)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.isTemporaryTable(SessionCatalog.scala:731)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.isRunningDirectlyOnFiles(Analyzer.scala:749)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:683)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at com.trinasolar.spark_24.App.main(App.java:37)

The error is caused by a missing winutils. Specify the hadoop.home.dir property in your program, for example System.setProperty("hadoop.home.dir", "D:\\eclipse-workspace\\spark-24");, then download winutils and place it under the bin directory of hadoop.home.dir.
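For illustration, here is a minimal sketch of a Windows driver with the fix applied (the class name, path, and query are placeholders, not from the original article; it assumes the spark-sql and spark-hive dependencies are on the classpath). The important detail is that hadoop.home.dir is set before the SparkSession is created:

import org.apache.spark.sql.SparkSession;

public class App {
    public static void main(String[] args) {
        // Point Hadoop at the directory that contains bin\winutils.exe.
        // This must run before the SparkSession is built; the path is only an example.
        System.setProperty("hadoop.home.dir", "D:\\eclipse-workspace\\spark-24");

        SparkSession spark = SparkSession.builder()
                .appName("winutils-demo")
                .master("local[*]")
                .enableHiveSupport()   // the failing call path goes through the Hive catalog
                .getOrCreate();

        // Without winutils, catalog access like this fails with
        // "(null) entry in command string: null chmod 0700 ...".
        spark.sql("SHOW DATABASES").show();

        spark.stop();
    }
}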

If a version mismatch is reported, download a few different winutils builds and try them until one runs correctly.

Download: https://github.com/steveloughran/winutils
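As an optional extra (a hypothetical helper, not part of the original article), you can fail fast when winutils.exe is missing by checking the expected location after setting the property and before building the SparkSession:

import java.io.File;

public final class WinutilsCheck {
    // Hypothetical sanity check: call after System.setProperty("hadoop.home.dir", ...)
    // and before SparkSession.builder()...getOrCreate().
    public static void assertWinutilsPresent() {
        File winutils = new File(System.getProperty("hadoop.home.dir"), "bin\\winutils.exe");
        if (!winutils.isFile()) {
            throw new IllegalStateException(
                    "winutils.exe not found at " + winutils.getAbsolutePath());
        }
    }
}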

That concludes this article on the Spark "(null) entry in command string: null chmod 0700" error; hopefully it is of some help.


