[Kyuubi on k8s] Deploying Kyuubi to Kubernetes to Run Spark SQL

2024-06-18 01:28


Background

In the previous post, Kyuubi was integrated with Spark and Spark SQL jobs were submitted to a k8s cluster, but the Kyuubi and Spark environments themselves lived on a single local server. For high availability, this post packages them into a container image and deploys that image to k8s.

In essence, we bundle the local Kyuubi, Spark, and JDK distributions into an image and set up the environment variables. The process is simple; see the Dockerfile below.

Dockerfile build context

Prepare the Kyuubi, Spark, and JDK tarballs, plus the AWS jars highlighted in the previous post (the jars under the awsicelib directory). One more thing: the k8s kubeconfig must also be baked into the image, otherwise the container cannot authenticate against k8s and cannot submit jobs.
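The build context contains roughly the following (file names come from the Dockerfile below; the tree itself is an assumed layout, as the original shows it only as a screenshot):

.
├── Dockerfile
├── config                                        # kubeconfig, copied to /root/.kube
├── busycommand.sh                                # keep-alive entrypoint script
├── apache-kyuubi-1.9.0-bin.tgz
├── spark-3.5.0-bin-hadoop3.tgz
├── zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz
└── awsicelib/                                    # AWS/Iceberg jars for MinIO access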

Dockerfile

FROM centos
MAINTAINER 682556
RUN mkdir -p /opt
RUN mkdir -p /root/.kube
# Bake the k8s kubeconfig into the image
ADD config /root/.kube
ADD apache-kyuubi-1.9.0-bin.tgz /opt
ADD spark-3.5.0-bin-hadoop3.tgz /opt
ADD zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz /opt/java
# busybox-style keep-alive script: keeps the container running after kyuubi starts,
# so the pod does not go to Completed
ADD busycommand.sh /opt/rcommand.sh
RUN ls /opt/java
RUN ln -s /opt/apache-kyuubi-1.9.0-bin /opt/kyuubi
RUN ln -s /opt/spark-3.5.0-bin-hadoop3 /opt/spark
# Add the AWS jars for MinIO access
COPY awsicelib/ /opt/spark/jars
ENV JAVA_HOME /opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
ENV SPARK_HOME /opt/spark
ENV KYUUBI_HOME /opt/kyuubi
ENV PATH $PATH:$JAVA_HOME/bin:$SPARK_HOME/bin:$KYUUBI_HOME/bin
ENV AWS_REGION=us-east-1
EXPOSE 10009
#RUN chmod -R 777 /opt
ENTRYPOINT ["/opt/rcommand.sh"]

My busycommand script checks every 10 seconds and restarts Kyuubi if the process has died:

#!/bin/bash
i=1
while ((i<100000000))
do
  if ((i<2)); then
    # first iteration: start kyuubi
    $KYUUBI_HOME/bin/kyuubi start
    echo `date "+%Y-%m-%d %H:%M:%S"` "first started" >> /opt/kyuubi.log
    pwd
  else
    # grab the PID of the java process, if any
    PID=$(ps -e | grep java | awk '{print $1}' | head -n 1)
    if [ -n "$PID" ]; then
      echo `date "+%Y-%m-%d %H:%M:%S"` "process id:$PID" >> /opt/kyuubi.log
    else
      echo `date "+%Y-%m-%d %H:%M:%S"` "process kyuubi-server not exist" >> /opt/kyuubi.log
      $KYUUBI_HOME/bin/kyuubi start
      echo `date "+%Y-%m-%d %H:%M:%S"` "process kyuubi-server started" >> /opt/kyuubi.log
    fi
    sleep 10s
  fi
  let i++
done
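One detail worth calling out: the chmod line in the Dockerfile above is commented out, and ADD preserves the file mode of the source file, so busycommand.sh must already be executable in the build context or the ENTRYPOINT will fail with "permission denied". A one-line guard before building:

chmod +x busycommand.sh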

Build the image and tag it as 10.38.199.203:1443/fhc/kyuubi:v1.0.
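The build and push commands are not shown in the original; assuming a standard Docker CLI session already logged in to the Harbor registry at 10.38.199.203:1443, it would be something like:

docker build -t 10.38.199.203:1443/fhc/kyuubi:v1.0 .
docker push 10.38.199.203:1443/fhc/kyuubi:v1.0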

Write the YAML manifests and deploy

kyuubi-configMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyuubi-config-cm
  #namespace: ingress-nginx
  labels:
    app: kyuubi
data:
  kyuubi-env.sh: |-
    export JAVA_HOME=/opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
    export SPARK_HOME=/opt/spark
    #export KYUUBI_JAVA_OPTS="-Xmx10g -XX:MaxMetaspaceSize=512m -XX:MaxDirectMemorySize=1024m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UnlockDiagnosticVMOptions -XX:+UseCondCardMark -XX:+UseGCOverheadLimit -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -verbose:gc -Xloggc:./logs/kyuubi-server-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=20M"
    #export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+UseCondCardMark"
  kyuubi-defaults.conf: |-
    spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    spark.home=/opt/spark
    spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW
    spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
    spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009
    spark.hadoop.fs.s3a.path.style.access=true
    spark.hadoop.fs.s3a.aws.region=us-east-1
    spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.sql.catalog.default=spark_catalog
    spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
    spark.sql.catalog.spark_catalog.type=rest
    #spark.sql.catalog.spark_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog
    spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000
    spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
    spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
    spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse
    spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009
    spark.sql.catalog.spark_catalog.s3.path-style-access=true
    spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW
    spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
    spark.sql.catalog.spark_catalog.region=us-east-1
    spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0
    spark.kubernetes.namespace=default
    spark.kubernetes.authenticate.driver.serviceAccountName=spark
    spark.ssl.kubernetes.enabled=false

spark-configMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-config-cm
  #namespace: ingress-nginx
  labels:
    app: kyuubi
data:
  spark-env.sh: |-
  spark-defaults.conf: |-
    spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    spark.home=/opt/spark
    spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW
    spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
    spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009
    spark.hadoop.fs.s3a.path.style.access=true
    spark.hadoop.fs.s3a.aws.region=us-east-1
    spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.sql.catalog.default=spark_catalog
    spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
    spark.sql.catalog.spark_catalog.type=rest
    #spark.sql.catalog.spark_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog
    spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000
    spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
    spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
    spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse
    spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009
    spark.sql.catalog.spark_catalog.s3.path-style-access=true
    spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW
    spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM
    spark.sql.catalog.spark_catalog.region=us-east-1
    spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0
    spark.kubernetes.namespace=default
    spark.kubernetes.authenticate.driver.serviceAccountName=spark
    spark.ssl.kubernetes.enabled=false
  log4j2.properties: |-
    # Set everything to be logged to the console
    rootLogger.level = info
    rootLogger.appenderRef.stdout.ref = console
    appender.console.type = Console
    appender.console.name = console
    appender.console.target = SYSTEM_ERR
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
    logger.repl.name = org.apache.spark.repl.Main
    logger.repl.level = warn
    logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
    logger.thriftserver.level = warn
    logger.jetty1.name = org.sparkproject.jetty
    logger.jetty1.level = warn
    logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
    logger.jetty2.level = error
    logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
    logger.replexprTyper.level = info
    logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
    logger.replSparkILoopInterpreter.level = info
    logger.parquet1.name = org.apache.parquet
    logger.parquet1.level = error
    logger.parquet2.name = parquet
    logger.parquet2.level = error
    logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
    logger.RetryingHMSHandler.level = fatal
    logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
    logger.FunctionRegistry.level = error
    appender.console.filter.1.type = RegexFilter
    appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
    appender.console.filter.1.onMatch = deny
    appender.console.filter.1.onMismatch = neutral

kyuubi-dev.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kyuubi
  #namespace: ingress-nginx
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: kyuubi
  template:
    metadata:
      labels:
        app: kyuubi
    spec:
      #nodeSelector:
      #  type: lab1
      #hostNetwork: true
      containers:
        - name: kyuubi
          imagePullPolicy: Always
          image: 10.38.199.203:1443/fhc/kyuubi:v1.0
          resources:
            requests:
              cpu: 1000m
              memory: 5000Mi
            limits:
              cpu: 2000m
              memory: 20240Mi
          ports:
            - containerPort: 10009
          env:
            - name: COORDINATOR_NODE
              value: "true"
          volumeMounts:
            - name: kyuubi-config-volume
              mountPath: /opt/kyuubi/conf
              readOnly: false
            - name: spark-config-volume
              mountPath: /opt/spark/conf
              readOnly: false
      volumes:
        - name: kyuubi-config-volume
          configMap:
            name: kyuubi-config-cm
        - name: spark-config-volume
          configMap:
            name: spark-config-cm
        - name: kyuubi-data-volume
          emptyDir: {}
      imagePullSecrets:
        - name: harbor

kyuubi-service.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    app: kyuubi
  name: kyuubi-service
  #namespace: ingress-nginx
spec:
  type: NodePort
  #sessionAffinity: ClientIP
  ports:
    - port: 10009
      targetPort: 10009
      name: http
      #protocol: TCP
      nodePort: 30009
  selector:
    app: kyuubi

Apply the YAML files

kubectl apply -f kyuubi-configMap.yaml
kubectl apply -f spark-configMap.yaml
kubectl apply -f kyuubi-dev.yaml
kubectl apply -f kyuubi-service.yaml

Check the k8s pods
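(The original shows the result as a screenshot.) An equivalent check from the command line, with illustrative output; the pod name matches the one that appears in the logs below:

kubectl get pods -l app=kyuubi
NAME                      READY   STATUS    RESTARTS   AGE
kyuubi-59b89b87b7-bjckz   1/1     Running   0          2m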

Connect to Kyuubi over JDBC from a local machine and run SQL

./beeline -u 'jdbc:hive2://10.38.199.201:30009'
[root@t001 bin]# ./beeline -u 'jdbc:hive2://10.38.199.201:30009'
Connecting to jdbc:hive2://10.38.199.201:30009
2024-06-17 07:33:45.668 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[c612b8d0-8bc7-4efa-acc8-e6c477fbda20]: PENDING_STATE -> RUNNING_STATE, statement:
LaunchEngine
2024-06-17 07:33:45.673 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: Starting
2024-06-17 07:33:45.673 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Initiating client connection, connectString=10.42.46.241:2181 sessionTimeout=60000 watcher=org.apache.kyuubi.shaded.curator.ConnectionState@59e92870
2024-06-17 07:33:45.676 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181. Will not attempt to authenticate using SASL (unknown error)
2024-06-17 07:33:45.678 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Socket connection established to kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, initiating session
2024-06-17 07:33:45.702 INFO KyuubiSessionManager-exec-pool: Thread-54-SendThread(kyuubi-59b89b87b7-bjckz:2181) org.apache.kyuubi.shaded.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, sessionid = 0x101113150210001, negotiated timeout = 60000
2024-06-17 07:33:45.703 INFO KyuubiSessionManager-exec-pool: Thread-54-EventThread org.apache.kyuubi.shaded.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2024-06-17 07:33:45.728 WARN KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.curator.utils.ZKPaths: The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2024-06-17 07:33:45.829 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.ProcBuilder: Creating anonymous's working directory at /opt/kyuubi/work/anonymous
2024-06-17 07:33:45.852 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0
2024-06-17 07:33:45.860 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.Utils: Loading Kyuubi properties from /opt/spark/conf/spark-defaults.conf
2024-06-17 07:33:45.869 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.engine.EngineRef: Launching engine:
/opt/spark-3.5.0-bin-hadoop3/bin/spark-submit \
  --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
  --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
  --conf spark.kyuubi.client.ipAddress=10.38.199.201 \
  --conf spark.kyuubi.client.version=1.9.0 \
  --conf spark.kyuubi.engine.engineLog.path=/opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0 \
  --conf spark.kyuubi.engine.submit.time=1718609625821 \
  --conf spark.kyuubi.ha.addresses=10.42.46.241:2181 \
  --conf spark.kyuubi.ha.engine.ref.id=50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
  --conf spark.kyuubi.ha.namespace=/kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default \
  --conf spark.kyuubi.ha.zookeeper.auth.type=NONE \
  --conf spark.kyuubi.server.ipAddress=10.42.46.241 \
  --conf spark.kyuubi.session.connection.url=kyuubi-59b89b87b7-bjckz:10009 \
  --conf spark.kyuubi.session.real.user=anonymous \
  --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
  --conf spark.hadoop.fs.s3a.access.key=apPeWWr5KpXkzEW2jNKW \
  --conf spark.hadoop.fs.s3a.aws.region=us-east-1 \
  --conf spark.hadoop.fs.s3a.endpoint=http://wuxdihadl01b.seagate.com:30009 \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  --conf spark.hadoop.fs.s3a.secret.key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM \
  --conf spark.home=/opt/spark \
  --conf spark.kubernetes.authenticate.driver.oauthToken=kubeconfig-user-8g2wl8jd6v:t6fm8dphfzr8r2dhhz9f9m57nzcmbcjrmmx6txj4vpfvv799c4sj84 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=10.38.199.203:1443/fhc/spark350:v1.0 \
  --conf spark.kubernetes.driver.label.kyuubi-unique-tag=50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
  --conf spark.kubernetes.executor.podNamePrefix=kyuubi-user-spark-sql-anonymous-default-50850fdf-6a15-4c47-bc5f-d8816d255ef8 \
  --conf spark.kubernetes.namespace=default \
  --conf spark.master=k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7 \
  --conf spark.sql.catalog.default=spark_catalog \
  --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
  --conf spark.sql.catalog.spark_catalog.region=us-east-1 \
  --conf spark.sql.catalog.spark_catalog.s3.access-key-id=apPeWWr5KpXkzEW2jNKW \
  --conf spark.sql.catalog.spark_catalog.s3.endpoint=http://wuxdihadl01b.seagate.com:30009 \
  --conf spark.sql.catalog.spark_catalog.s3.path-style-access=true \
  --conf spark.sql.catalog.spark_catalog.s3.secret-access-key=cRt3inWAhDYtuzsDnKGLGg9EJSbJ083ekuW7PejM \
  --conf spark.sql.catalog.spark_catalog.type=rest \
  --conf spark.sql.catalog.spark_catalog.uri=http://10.40.8.42:31000 \
  --conf spark.sql.catalog.spark_catalog.warehouse=s3a://wux-hoo-dev-01/ice_warehouse \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.ssl.kubernetes.enabled=false \
  --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
  --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
  --proxy-user anonymous /opt/apache-kyuubi-1.9.0-bin/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.9.0.jar
24/06/17 07:33:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for TERM
24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for HUP
24/06/17 07:33:49 INFO SignalRegister: Registering signal handler for INT
24/06/17 07:33:49 INFO HiveConf: Found configuration file null
24/06/17 07:33:49 INFO SparkContext: Running Spark version 3.5.0
24/06/17 07:33:49 INFO SparkContext: OS info Linux, 3.10.0-1160.114.2.el7.x86_64, amd64
24/06/17 07:33:49 INFO SparkContext: Java version 11.0.13
24/06/17 07:33:49 INFO ResourceUtils: ==============================================================
24/06/17 07:33:49 INFO ResourceUtils: No custom resources configured for spark.driver.
24/06/17 07:33:49 INFO ResourceUtils: ==============================================================
24/06/17 07:33:49 INFO SparkContext: Submitted application: kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8
24/06/17 07:33:49 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/06/17 07:33:49 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
24/06/17 07:33:49 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/06/17 07:33:49 INFO SecurityManager: Changing view acls to: root,anonymous
24/06/17 07:33:49 INFO SecurityManager: Changing modify acls to: root,anonymous
24/06/17 07:33:49 INFO SecurityManager: Changing view acls groups to:
24/06/17 07:33:49 INFO SecurityManager: Changing modify acls groups to:
24/06/17 07:33:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: root, anonymous; groups with view permissions: EMPTY; users with modify permissions: root, anonymous; groups with modify permissions: EMPTY
24/06/17 07:33:50 INFO Utils: Successfully started service 'sparkDriver' on port 38583.
24/06/17 07:33:50 INFO SparkEnv: Registering MapOutputTracker
24/06/17 07:33:50 INFO SparkEnv: Registering BlockManagerMaster
24/06/17 07:33:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/06/17 07:33:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/06/17 07:33:50 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/06/17 07:33:50 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-533c82c5-bfd4-40d1-b8f1-0a348b15a1cf
24/06/17 07:33:50 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
24/06/17 07:33:50 INFO SparkEnv: Registering OutputCommitCoordinator
24/06/17 07:33:50 INFO JettyUtils: Start Jetty 0.0.0.0:0 for SparkUI
24/06/17 07:33:50 INFO Utils: Successfully started service 'SparkUI' on port 42553.
24/06/17 07:33:50 INFO SparkContext: Added JAR file:/opt/apache-kyuubi-1.9.0-bin/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.9.0.jar at spark://10.42.46.241:38583/jars/kyuubi-spark-sql-engine_2.12-1.9.0.jar with timestamp 1718609629708
24/06/17 07:33:50 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
24/06/17 07:33:51 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
24/06/17 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
24/06/17 07:33:52 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39580.
24/06/17 07:33:52 INFO NettyBlockTransferService: Server created on 10.42.46.241:39580
24/06/17 07:33:52 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
24/06/17 07:33:52 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.42.46.241, 39580, None)
24/06/17 07:33:52 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.46.241:39580 with 434.4 MiB RAM, BlockManagerId(driver, 10.42.46.241, 39580, None)
24/06/17 07:33:52 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.42.46.241, 39580, None)
24/06/17 07:33:52 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.42.46.241, 39580, None)
24/06/17 07:33:52 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : spark-env.sh,log4j2.properties
24/06/17 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: No executor found for 10.42.235.247:52602
24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.42.235.247:52608) with ID 1,  ResourceProfileId 0
24/06/17 07:33:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: No executor found for 10.42.46.243:51040
24/06/17 07:33:57 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.235.247:46359 with 413.9 MiB RAM, BlockManagerId(1, 10.42.235.247, 46359, None)
24/06/17 07:33:58 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.42.46.243:51042) with ID 2,  ResourceProfileId 0
24/06/17 07:33:58 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
24/06/17 07:33:58 INFO BlockManagerMasterEndpoint: Registering block manager 10.42.46.243:34050 with 413.9 MiB RAM, BlockManagerId(2, 10.42.46.243, 34050, None)
24/06/17 07:33:58 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
24/06/17 07:33:58 INFO SharedState: Warehouse path is 'file:/opt/apache-kyuubi-1.9.0-bin/work/anonymous/spark-warehouse'.
24/06/17 07:34:01 INFO CatalogUtil: Loading custom FileIO implementation: org.apache.iceberg.aws.s3.S3FileIO
24/06/17 07:34:02 INFO CodeGenerator: Code generated in 293.979616 ms
24/06/17 07:34:02 INFO CodeGenerator: Code generated in 10.25986 ms
24/06/17 07:34:03 INFO CodeGenerator: Code generated in 12.699915 ms
24/06/17 07:34:03 INFO SparkContext: Starting job: isEmpty at KyuubiSparkUtil.scala:49
24/06/17 07:34:03 INFO DAGScheduler: Got job 0 (isEmpty at KyuubiSparkUtil.scala:49) with 1 output partitions
24/06/17 07:34:03 INFO DAGScheduler: Final stage: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:49)
24/06/17 07:34:03 INFO DAGScheduler: Parents of final stage: List()
24/06/17 07:34:03 INFO DAGScheduler: Missing parents: List()
24/06/17 07:34:03 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:49), which has no missing parents
24/06/17 07:34:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 7.0 KiB, free 434.4 MiB)
24/06/17 07:34:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.7 KiB, free 434.4 MiB)
24/06/17 07:34:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.42.46.241:39580 (size: 3.7 KiB, free: 434.4 MiB)
24/06/17 07:34:03 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1580
24/06/17 07:34:03 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:49) (first 15 tasks are for partitions Vector(0))
24/06/17 07:34:03 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
24/06/17 07:34:03 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.42.46.243, executor 2, partition 0, PROCESS_LOCAL, 10848 bytes)
24/06/17 07:34:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.42.46.243:34050 (size: 3.7 KiB, free: 413.9 MiB)
24/06/17 07:34:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1115 ms on 10.42.46.243 (executor 2) (1/1)
24/06/17 07:34:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
24/06/17 07:34:04 INFO DAGScheduler: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:49) finished in 1.414 s
24/06/17 07:34:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
24/06/17 07:34:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
24/06/17 07:34:04 INFO DAGScheduler: Job 0 finished: isEmpty at KyuubiSparkUtil.scala:49, took 1.522348 s
24/06/17 07:34:04 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.42.46.241:39580 in memory (size: 3.7 KiB, free: 434.4 MiB)
24/06/17 07:34:04 INFO Utils: Loading Kyuubi properties from /opt/kyuubi/conf/kyuubi-defaults.conf
24/06/17 07:34:04 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.42.46.243:34050 in memory (size: 3.7 KiB, free: 413.9 MiB)
24/06/17 07:34:04 INFO ThreadUtils: SparkSQLSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 60000 ms
24/06/17 07:34:04 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is initialized.
24/06/17 07:34:04 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is initialized.
24/06/17 07:34:04 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is initialized.
24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Initializing SparkTBinaryFrontend on kyuubi-59b89b87b7-bjckz:42178 with [9, 999] worker threads
24/06/17 07:34:05 INFO CuratorFrameworkImpl: Starting
24/06/17 07:34:05 INFO ZooKeeper: Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
24/06/17 07:34:05 INFO ZooKeeper: Client environment:host.name=kyuubi-59b89b87b7-bjckz
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.version=11.0.13
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.vendor=Azul Systems, Inc.
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.home=/opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.class.path=/opt/spark/conf/:/opt/spark/jars/HikariCP-2.5.1.jar:/opt/spark/jars/JLargeArrays-1.5.jar:/opt/spark/jars/JTransforms-3.1.jar:/opt/spark/jars/RoaringBitmap-0.9.45.jar:/opt/spark/jars/ST4-4.0.4.jar:/opt/spark/jars/activation-1.1.1.jar:/opt/spark/jars/aircompressor-0.25.jar:/opt/spark/jars/algebra_2.12-2.0.1.jar:/opt/spark/jars/annotations-17.0.0.jar:/opt/spark/jars/antlr-runtime-3.5.2.jar:/opt/spark/jars/antlr4-runtime-4.9.3.jar:/opt/spark/jars/aopalliance-repackaged-2.6.1.jar:/opt/spark/jars/arpack-3.0.3.jar:/opt/spark/jars/arpack_combined_all-0.1.jar:/opt/spark/jars/arrow-format-12.0.1.jar:/opt/spark/jars/arrow-memory-core-12.0.1.jar:/opt/spark/jars/arrow-memory-netty-12.0.1.jar:/opt/spark/jars/arrow-vector-12.0.1.jar:/opt/spark/jars/audience-annotations-0.5.0.jar:/opt/spark/jars/avro-1.11.2.jar:/opt/spark/jars/avro-ipc-1.11.2.jar:/opt/spark/jars/avro-mapred-1.11.2.jar:/opt/spark/jars/blas-3.0.3.jar:/opt/spark/jars/bonecp-0.8.0.RELEASE.jar:/opt/spark/jars/breeze-macros_2.12-2.1.0.jar:/opt/spark/jars/breeze_2.12-2.1.0.jar:/opt/spark/jars/cats-kernel_2.12-2.1.1.jar:/opt/spark/jars/chill-java-0.10.0.jar:/opt/spark/jars/chill_2.12-0.10.0.jar:/opt/spark/jars/commons-cli-1.5.0.jar:/opt/spark/jars/commons-codec-1.16.0.jar:/opt/spark/jars/commons-collections-3.2.2.jar:/opt/spark/jars/commons-collections4-4.4.jar:/opt/spark/jars/commons-compiler-3.1.9.jar:/opt/spark/jars/commons-compress-1.23.0.jar:/opt/spark/jars/commons-crypto-1.1.0.jar:/opt/spark/jars/commons-dbcp-1.4.jar:/opt/spark/jars/commons-io-2.13.0.jar:/opt/spark/jars/commons-lang-2.6.jar:/opt/spark/jars/commons-lang3-3.12.0.jar:/opt/spark/jars/commons-logging-1.1.3.jar:/opt/spark/jars/commons-math3-3.6.1.jar:/opt/spark/jars/commons-pool-1.5.4.jar:/opt/spark/jars/commons-text-1.10.0.jar:/opt/spark/jars/compress-lzf-1.1.2.jar:/opt/spark/jars/curator-client-2.13.0.jar:/opt/spark/jars/curator-framework-2.13.0.jar:/opt/spark/jars/curator-recipes-2.13.0.jar:/opt/spark/jars/datanucleus-api-jdo-4.2.4.jar:/opt/spark/jars/datanucleus-core-4.1.17.jar:/opt/spark/jars/datanucleus-rdbms-4.1.19.jar:/opt/spark/jars/datasketches-java-3.3.0.jar:/opt/spark/jars/datasketches-memory-2.1.0.jar:/opt/spark/jars/derby-10.14.2.0.jar:/opt/spark/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/opt/spark/jars/flatbuffers-java-1.12.0.jar:/opt/spark/jars/gson-2.2.4.jar:/opt/spark/jars/guava-14.0.1.jar:/opt/spark/jars/hadoop-client-api-3.3.4.jar:/opt/spark/jars/hadoop-client-runtime-3.3.4.jar:/opt/spark/jars/hadoop-shaded-guava-1.1.1.jar:/opt/spark/jars/hadoop-yarn-server-web-proxy-3.3.4.jar:/opt/spark/jars/hive-beeline-2.3.9.jar:/opt/spark/jars/hive-cli-2.3.9.jar:/opt/spark/jars/hive-common-2.3.9.jar:/opt/spark/jars/hive-exec-2.3.9-core.jar:/opt/spark/jars/hive-jdbc-2.3.9.jar:/opt/spark/jars/hive-llap-common-2.3.9.jar:/opt/spark/jars/hive-metastore-2.3.9.jar:/opt/spark/jars/hive-serde-2.3.9.jar:/opt/spark/jars/hive-service-rpc-3.1.3.jar:/opt/spark/jars/hive-shims-0.23-2.3.9.jar:/opt/spark/jars/hive-shims-2.3.9.jar:/opt/spark/jars/hive-shims-common-2.3.9.jar:/opt/spark/jars/hive-shims-scheduler-2.3.9.jar:/opt/spark/jars/hive-storage-api-2.8.1.jar:/opt/spark/jars/hk2-api-2.6.1.jar:/opt/spark/jars/hk2-locator-2.6.1.jar:/opt/spark/jars/hk2-utils-2.6.1.jar:/opt/spark/jars/httpclient-4.5.14.jar:/opt/spark/jars/httpcore-4.4.16.jar:/opt/spark/jars/istack-commons-runtime-3.0.8.jar:/opt/spark/jars/ivy-2.5.1.jar:/opt/spark/jars/jackson-annotations-2.15.2.jar:/opt/spark/jars/jackson-core-2.15.2.jar:/op
t/spark/jars/jackson-core-asl-1.9.13.jar:/opt/spark/jars/jackson-databind-2.15.2.jar:/opt/spark/jars/jackson-dataformat-yaml-2.15.2.jar:/opt/spark/jars/jackson-datatype-jsr310-2.15.2.jar:/opt/spark/jars/jackson-mapper-asl-1.9.13.jar:/opt/spark/jars/jackson-module-scala_2.12-2.15.2.jar:/opt/spark/jars/jakarta.annotation-api-1.3.5.jar:/opt/spark/jars/jakarta.inject-2.6.1.jar:/opt/spark/jars/jakarta.servlet-api-4.0.3.jar:/opt/spark/jars/jakarta.validation-api-2.0.2.jar:/opt/spark/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/spark/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/spark/jars/janino-3.1.9.jar:/opt/spark/jars/javassist-3.29.2-GA.jar:/opt/spark/jars/jpam-1.1.jar:/opt/spark/jars/javax.jdo-3.2.0-m3.jar:/opt/spark/jars/javolution-5.5.1.jar:/opt/spark/jars/jaxb-runtime-2.3.2.jar:/opt/spark/jars/jcl-over-slf4j-2.0.7.jar:/opt/spark/jars/jdo-api-3.0.1.jar:/opt/spark/jars/jersey-client-2.40.jar:/opt/spark/jars/jersey-common-2.40.jar:/opt/spark/jars/jersey-container-servlet-2.40.jar:/opt/spark/jars/jersey-container-servlet-core-2.40.jar:/opt/spark/jars/jersey-hk2-2.40.jar:/opt/spark/jars/jersey-server-2.40.jar:/opt/spark/jars/jline-2.14.6.jar:/opt/spark/jars/joda-time-2.12.5.jar:/opt/spark/jars/jodd-core-3.5.2.jar:/opt/spark/jars/json-1.8.jar:/opt/spark/jars/json4s-ast_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-core_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-jackson_2.12-3.7.0-M11.jar:/opt/spark/jars/json4s-scalap_2.12-3.7.0-M11.jar:/opt/spark/jars/jsr305-3.0.0.jar:/opt/spark/jars/jta-1.1.jar:/opt/spark/jars/jul-to-slf4j-2.0.7.jar:/opt/spark/jars/kryo-shaded-4.0.2.jar:/opt/spark/jars/kubernetes-client-6.7.2.jar:/opt/spark/jars/kubernetes-client-api-6.7.2.jar:/opt/spark/jars/kubernetes-httpclient-okhttp-6.7.2.jar:/opt/spark/jars/kubernetes-model-admissionregistration-6.7.2.jar:/opt/spark/jars/kubernetes-model-apiextensions-6.7.2.jar:/opt/spark/jars/kubernetes-model-apps-6.7.2.jar:/opt/spark/jars/kubernetes-model-autoscaling-6.7.2.jar:/opt/spark/jars/kubernetes-model-batch-6.7.2.jar:/opt/spark/jars/kubernetes-model-certificates-6.7.2.jar:/opt/spark/jars/kubernetes-model-common-6.7.2.jar:/opt/spark/jars/kubernetes-model-coordination-6.7.2.jar:/opt/spark/jars/kubernetes-model-core-6.7.2.jar:/opt/spark/jars/kubernetes-model-discovery-6.7.2.jar:/opt/spark/jars/kubernetes-model-events-6.7.2.jar:/opt/spark/jars/kubernetes-model-extensions-6.7.2.jar:/opt/spark/jars/kubernetes-model-flowcontrol-6.7.2.jar:/opt/spark/jars/kubernetes-model-gatewayapi-6.7.2.jar:/opt/spark/jars/kubernetes-model-metrics-6.7.2.jar:/opt/spark/jars/kubernetes-model-networking-6.7.2.jar:/opt/spark/jars/kubernetes-model-node-6.7.2.jar:/opt/spark/jars/kubernetes-model-policy-6.7.2.jar:/opt/spark/jars/kubernetes-model-rbac-6.7.2.jar:/opt/spark/jars/kubernetes-model-resource-6.7.2.jar:/opt/spark/jars/kubernetes-model-scheduling-6.7.2.jar:/opt/spark/jars/kubernetes-model-storageclass-6.7.2.jar:/opt/spark/jars/lapack-3.0.3.jar:/opt/spark/jars/leveldbjni-all-1.8.jar:/opt/spark/jars/libfb303-0.9.3.jar:/opt/spark/jars/libthrift-0.12.0.jar:/opt/spark/jars/log4j-1.2-api-2.20.0.jar:/opt/spark/jars/log4j-api-2.20.0.jar:/opt/spark/jars/log4j-core-2.20.0.jar:/opt/spark/jars/log4j-slf4j2-impl-2.20.0.jar:/opt/spark/jars/logging-interceptor-3.12.12.jar:/opt/spark/jars/lz4-java-1.8.0.jar:/opt/spark/jars/mesos-1.4.3-shaded-protobuf.jar:/opt/spark/jars/metrics-core-4.2.19.jar:/opt/spark/jars/metrics-graphite-4.2.19.jar:/opt/spark/jars/metrics-jmx-4.2.19.jar:/opt/spark/jars/metrics-json-4.2.19.jar:/opt/spark/jars/metrics-jvm-4.2.19.jar:/opt/spark/jars/minlog-1.3.0.j
ar:/opt/spark/jars/netty-all-4.1.96.Final.jar:/opt/spark/jars/netty-buffer-4.1.96.Final.jar:/opt/spark/jars/netty-codec-4.1.96.Final.jar:/opt/spark/jars/netty-codec-http-4.1.96.Final.jar:/opt/spark/jars/netty-codec-http2-4.1.96.Final.jar:/opt/spark/jars/netty-codec-socks-4.1.96.Final.jar:/opt/spark/jars/netty-common-4.1.96.Final.jar:/opt/spark/jars/netty-handler-4.1.96.Final.jar:/opt/spark/jars/netty-handler-proxy-4.1.96.Final.jar:/opt/spark/jars/netty-resolver-4.1.96.Final.jar:/opt/spark/jars/netty-transport-4.1.96.Final.jar:/opt/spark/jars/netty-transport-classes-epoll-4.1.96.Final.jar:/opt/spark/jars/netty-transport-classes-kqueue-4.1.96.Final.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.96.Final-linux-aarch_64.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.96.Final-linux-x86_64.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.96.Final-osx-aarch_64.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.96.Final-osx-x86_64.jar:/opt/spark/jars/netty-transport-native-unix-common-4.1.96.Final.jar:/opt/spark/jars/objenesis-3.3.jar:/opt/spark/jars/okhttp-3.12.12.jar:/opt/spark/jars/okio-1.15.0.jar:/opt/spark/jars/opencsv-2.3.jar:/opt/spark/jars/orc-core-1.9.1-shaded-protobuf.jar:/opt/spark/jars/orc-shims-1.9.1.jar:/opt/spark/jars/orc-mapreduce-1.9.1-shaded-protobuf.jar:/opt/spark/jars/oro-2.0.8.jar:/opt/spark/jars/osgi-resource-locator-1.0.3.jar:/opt/spark/jars/paranamer-2.8.jar:/opt/spark/jars/parquet-column-1.13.1.jar:/opt/spark/jars/parquet-common-1.13.1.jar:/opt/spark/jars/parquet-encoding-1.13.1.jar:/opt/spark/jars/parquet-format-structures-1.13.1.jar:/opt/spark/jars/parquet-hadoop-1.13.1.jar:/opt/spark/jars/parquet-jackson-1.13.1.jar:/opt/spark/jars/pickle-1.3.jar:/opt/spark/jars/py4j-0.10.9.7.jar:/opt/spark/jars/rocksdbjni-8.3.2.jar:/opt/spark/jars/scala-collection-compat_2.12-2.7.0.jar:/opt/spark/jars/scala-compiler-2.12.18.jar:/opt/spark/jars/scala-library-2.12.18.jar:/opt/spark/jars/scala-parser-combinators_2.12-2.3.0.jar:/opt/spark/jars/scala-reflect-2.12.18.jar:/opt/spark/jars/scala-xml_2.12-2.1.0.jar:/opt/spark/jars/shims-0.9.45.jar:/opt/spark/jars/slf4j-api-2.0.7.jar:/opt/spark/jars/snakeyaml-2.0.jar:/opt/spark/jars/snakeyaml-engine-2.6.jar:/opt/spark/jars/snappy-java-1.1.10.3.jar:/opt/spark/jars/spark-catalyst_2.12-3.5.0.jar:/opt/spark/jars/spark-common-utils_2.12-3.5.0.jar:/opt/spark/jars/spark-core_2.12-3.5.0.jar:/opt/spark/jars/spark-graphx_2.12-3.5.0.jar:/opt/spark/jars/spark-hive-thriftserver_2.12-3.5.0.jar:/opt/spark/jars/spark-hive_2.12-3.5.0.jar:/opt/spark/jars/spark-kubernetes_2.12-3.5.0.jar:/opt/spark/jars/spark-kvstore_2.12-3.5.0.jar:/opt/spark/jars/spark-launcher_2.12-3.5.0.jar:/opt/spark/jars/spark-mesos_2.12-3.5.0.jar:/opt/spark/jars/spark-mllib-local_2.12-3.5.0.jar:/opt/spark/jars/spark-mllib_2.12-3.5.0.jar:/opt/spark/jars/spark-network-common_2.12-3.5.0.jar:/opt/spark/jars/spark-network-shuffle_2.12-3.5.0.jar:/opt/spark/jars/spark-repl_2.12-3.5.0.jar:/opt/spark/jars/spark-sketch_2.12-3.5.0.jar:/opt/spark/jars/spark-sql-api_2.12-3.5.0.jar:/opt/spark/jars/spark-sql_2.12-3.5.0.jar:/opt/spark/jars/spark-streaming_2.12-3.5.0.jar:/opt/spark/jars/spark-tags_2.12-3.5.0.jar:/opt/spark/jars/spark-unsafe_2.12-3.5.0.jar:/opt/spark/jars/spark-yarn_2.12-3.5.0.jar:/opt/spark/jars/spire-macros_2.12-0.17.0.jar:/opt/spark/jars/spire-platform_2.12-0.17.0.jar:/opt/spark/jars/spire-util_2.12-0.17.0.jar:/opt/spark/jars/spire_2.12-0.17.0.jar:/opt/spark/jars/stax-api-1.0.1.jar:/opt/spark/jars/stream-2.9.6.jar:/opt/spark/jars/super-csv-2.2.0.jar:/opt/spark/jars/threeten
-extra-1.7.1.jar:/opt/spark/jars/tink-1.9.0.jar:/opt/spark/jars/transaction-api-1.1.jar:/opt/spark/jars/univocity-parsers-2.9.1.jar:/opt/spark/jars/xbean-asm9-shaded-4.23.jar:/opt/spark/jars/xz-1.9.jar:/opt/spark/jars/zjsonpatch-0.3.0.jar:/opt/spark/jars/zookeeper-3.6.3.jar:/opt/spark/jars/zookeeper-jute-3.6.3.jar:/opt/spark/jars/zstd-jni-1.5.5-4.jar:/opt/spark/jars/apache-client-2.25.65.jar:/opt/spark/jars/auth-2.25.65.jar:/opt/spark/jars/aws-core-2.25.65.jar:/opt/spark/jars/aws-java-sdk-bundle-1.12.262.jar:/opt/spark/jars/aws-json-protocol-2.25.65.jar:/opt/spark/jars/aws-query-protocol-2.25.65.jar:/opt/spark/jars/aws-xml-protocol-2.25.65.jar:/opt/spark/jars/checksums-2.25.65.jar:/opt/spark/jars/checksums-spi-2.25.65.jar:/opt/spark/jars/dynamodb-2.25.65.jar:/opt/spark/jars/endpoints-spi-2.25.65.jar:/opt/spark/jars/glue-2.25.65.jar:/opt/spark/jars/hadoop-aws-3.3.4.jar:/opt/spark/jars/http-auth-2.25.65.jar:/opt/spark/jars/http-auth-aws-2.25.65.jar:/opt/spark/jars/http-auth-spi-2.25.65.jar:/opt/spark/jars/http-client-spi-2.25.65.jar:/opt/spark/jars/iceberg-spark-runtime-3.5_2.12-1.5.0.jar:/opt/spark/jars/identity-spi-2.25.65.jar:/opt/spark/jars/json-utils-2.25.65.jar:/opt/spark/jars/kms-2.25.65.jar:/opt/spark/jars/metrics-spi-2.25.65.jar:/opt/spark/jars/profiles-2.25.65.jar:/opt/spark/jars/protocol-core-2.25.65.jar:/opt/spark/jars/reactive-streams-1.0.4.jar:/opt/spark/jars/regions-2.25.65.jar:/opt/spark/jars/s3-2.25.65.jar:/opt/spark/jars/sdk-core-2.25.65.jar:/opt/spark/jars/sts-2.25.65.jar:/opt/spark/jars/third-party-jackson-core-2.25.65.jar:/opt/spark/jars/utils-2.25.65.jar
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
24/06/17 07:34:05 INFO ZooKeeper: Client environment:java.compiler=<NA>
24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.name=Linux
24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.arch=amd64
24/06/17 07:34:05 INFO ZooKeeper: Client environment:os.version=3.10.0-1160.114.2.el7.x86_64
24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.name=root
24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.home=/root
24/06/17 07:34:05 INFO ZooKeeper: Client environment:user.dir=/opt/apache-kyuubi-1.9.0-bin/work/anonymous
24/06/17 07:34:05 INFO ZooKeeper: Initiating client connection, connectString=10.42.46.241:2181 sessionTimeout=60000 watcher=org.apache.kyuubi.shaded.curator.ConnectionState@13482361
24/06/17 07:34:05 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is initialized.
24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is initialized.
24/06/17 07:34:05 INFO SparkSQLEngine: Service[SparkSQLEngine] is initialized.
24/06/17 07:34:05 INFO ClientCnxn: Opening socket connection to server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181. Will not attempt to authenticate using SASL (unknown error)
24/06/17 07:34:05 INFO ClientCnxn: Socket connection established to kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, initiating session
24/06/17 07:34:05 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is started.
24/06/17 07:34:05 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is started.
24/06/17 07:34:05 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is started.
24/06/17 07:34:05 INFO ClientCnxn: Session establishment complete on server kyuubi-59b89b87b7-bjckz/10.42.46.241:2181, sessionid = 0x101113150210002, negotiated timeout = 60000
24/06/17 07:34:05 INFO ConnectionStateManager: State change: CONNECTED
24/06/17 07:34:05 INFO ZookeeperDiscoveryClient: Zookeeper client connection state changed to: CONNECTED
24/06/17 07:34:05 INFO ZookeeperDiscoveryClient: Created a /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default/serverUri=10.42.46.241:42178;version=1.9.0;spark.driver.memory=;spark.executor.memory=;kyuubi.engine.id=spark-a13ae55c985c44168bd3fa6476c1ac1f;kyuubi.engine.url=10.42.46.241:42553;refId=50850fdf-6a15-4c47-bc5f-d8816d255ef8;sequence=0000000000 on ZooKeeper for KyuubiServer uri: 10.42.46.241:42178
24/06/17 07:34:05 INFO EngineServiceDiscovery: Registered EngineServiceDiscovery in namespace /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default.
24/06/17 07:34:05 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is started.
24/06/17 07:34:05 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is started.
24/06/17 07:34:05 INFO SparkSQLEngine: Service[SparkSQLEngine] is started.
24/06/17 07:34:05 INFO SparkSQLEngine:
    Spark application name: kyuubi_USER_SPARK_SQL_anonymous_default_50850fdf-6a15-4c47-bc5f-d8816d255ef8
    application ID:  spark-a13ae55c985c44168bd3fa6476c1ac1f
    application tags:
    application web UI: http://10.42.46.241:42553
    master: k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    version: 3.5.0
    driver: [cpu: 1, mem: 1g]
    executor: [cpu: 2, mem: 1g, maxNum: 2]
    Start time: Mon Jun 17 07:33:49 UTC 2024
    User: anonymous (shared mode: USER)
    State: STARTED
2024-06-17 07:34:05.979 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Get service instance:10.42.46.241:42178 engine id:spark-a13ae55c985c44168bd3fa6476c1ac1f and version:1.9.0 under /kyuubi_1.9.0_USER_SPARK_SQL/anonymous/default
24/06/17 07:34:06 INFO SparkTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
24/06/17 07:34:06 INFO SparkSQLSessionManager: Opening session for anonymous@10.42.46.241
24/06/17 07:34:06 INFO CatalogUtil: Loading custom FileIO implementation: org.apache.iceberg.aws.s3.S3FileIO
24/06/17 07:34:06 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
24/06/17 07:34:06 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.9) is file:/opt/apache-kyuubi-1.9.0-bin/work/anonymous/spark-warehouse
24/06/17 07:34:06 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
24/06/17 07:34:06 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
24/06/17 07:34:06 INFO HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
24/06/17 07:34:06 INFO ObjectStore: ObjectStore, initialize called
24/06/17 07:34:07 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
24/06/17 07:34:07 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
24/06/17 07:34:12 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
24/06/17 07:34:18 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
24/06/17 07:34:18 INFO ObjectStore: Initialized ObjectStore
24/06/17 07:34:19 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
24/06/17 07:34:19 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore UNKNOWN@10.42.46.241
24/06/17 07:34:19 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
24/06/17 07:34:19 INFO HiveMetaStore: Added admin role in metastore
24/06/17 07:34:19 INFO HiveMetaStore: Added public role in metastore
24/06/17 07:34:20 INFO HiveMetaStore: No user is added in admin role, since config is empty
24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: default
24/06/17 07:34:20 INFO audit: ugi=anonymous     ip=unknown-ip-addr      cmd=get_database: default
24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: global_temp
24/06/17 07:34:20 INFO audit: ugi=anonymous     ip=unknown-ip-addr      cmd=get_database: global_temp
24/06/17 07:34:20 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
24/06/17 07:34:20 INFO HiveMetaStore: 0: get_database: default
24/06/17 07:34:20 INFO audit: ugi=anonymous     ip=unknown-ip-addr      cmd=get_database: default
2024-06-17 07:34:21.585 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.session.KyuubiSessionImpl: [anonymous:10.38.199.201] SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8] - Connected to engine [10.42.46.241:42178]/[spark-a13ae55c985c44168bd3fa6476c1ac1f] with SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8]]
2024-06-17 07:34:21.588 INFO Curator-Framework-0 org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
2024-06-17 07:34:21.607 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Session: 0x101113150210001 closed
2024-06-17 07:34:21.607 INFO KyuubiSessionManager-exec-pool: Thread-54-EventThread org.apache.kyuubi.shaded.zookeeper.ClientCnxn: EventThread shut down for session: 0x101113150210001
2024-06-17 07:34:21.613 INFO KyuubiSessionManager-exec-pool: Thread-54 org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[c612b8d0-8bc7-4efa-acc8-e6c477fbda20]: RUNNING_STATE -> FINISHED_STATE, time taken: 35.942 seconds
24/06/17 07:34:21 INFO SparkSQLSessionManager: anonymous's SparkSessionImpl with SessionHandle [50850fdf-6a15-4c47-bc5f-d8816d255ef8] is opened, current opening sessions 1
Connected to: Spark SQL (version 3.5.0)
Driver: Kyuubi Project Hive JDBC Client (version 1.9.0)
Beeline version 1.9.0 by Apache Kyuubi
0: jdbc:hive2://10.38.199.201:30009>

SQL execution log

0: jdbc:hive2://10.38.199.201:30009> select * from p530_cimarronbp.attr_vals limit 10;
2024-06-17 06:31:19.099 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: PENDING_STATE -> RUNNING_STATE, statement:
select * from p530_cimarronbp.attr_vals limit 10
24/06/17 06:31:19 INFO ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: PENDING_STATE -> RUNNING_STATE, statement:
select * from p530_cimarronbp.attr_vals limit 10
24/06/17 06:31:19 INFO ExecuteStatement:
    Spark application name: kyuubi_USER_SPARK_SQL_anonymous_default_6474780a-72b2-439a-82f0-79131d583346
    application ID: spark-6b4ff4ba67df455d94af062c51825801
    application web UI: http://10.42.235.246:45076
    master: k8s://https://10.38.199.201:443/k8s/clusters/c-m-l7gflsx7
    deploy mode: client
    version: 3.5.0
    Start time: 2024-06-17T06:30:04.865
    User: anonymous
24/06/17 06:31:19 INFO ExecuteStatement: Execute in full collect mode
24/06/17 06:31:19 INFO V2ScanRelationPushDown:
Output: serial_num#96, trans_seq#97, attr_name#98, pre_attr_value#99, post_attr_value#100, in_run_file#101, event_date#102, family#103, operation#104
24/06/17 06:31:19 INFO SnapshotScan: Scanning table spark_catalog.p530_cimarronbp.attr_vals snapshot 4278818967903043345 created at 2024-06-15T05:35:09.281+00:00 with filter true
24/06/17 06:31:19 INFO BaseDistributedDataScan: Planning file tasks locally for table spark_catalog.p530_cimarronbp.attr_vals
24/06/17 06:31:23 INFO LoggingMetricsReporter: Received metrics report: ScanReport{tableName=spark_catalog.p530_cimarronbp.attr_vals, snapshotId=4278818967903043345, filter=true, schemaId=0, projectedFieldIds=[1, 2, 3, 4, 5, 6, 7, 8, 9], projectedFieldNames=[serial_num, trans_seq, attr_name, pre_attr_value, post_attr_value, in_run_file, event_date, family, operation], scanMetrics=ScanMetricsResult{totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT4.375705985S, count=1}, resultDataFiles=CounterResult{unit=COUNT, value=53367}, resultDeleteFiles=CounterResult{unit=COUNT, value=0}, totalDataManifests=CounterResult{unit=COUNT, value=97}, totalDeleteManifests=CounterResult{unit=COUNT, value=0}, scannedDataManifests=CounterResult{unit=COUNT, value=97}, skippedDataManifests=CounterResult{unit=COUNT, value=0}, totalFileSizeInBytes=CounterResult{unit=BYTES, value=2867530435}, totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0}, skippedDataFiles=CounterResult{unit=COUNT, value=0}, skippedDeleteFiles=CounterResult{unit=COUNT, value=0}, scannedDeleteManifests=CounterResult{unit=COUNT, value=0}, skippedDeleteManifests=CounterResult{unit=COUNT, value=0}, indexedDeleteFiles=CounterResult{unit=COUNT, value=0}, equalityDeleteFiles=CounterResult{unit=COUNT, value=0}, positionalDeleteFiles=CounterResult{unit=COUNT, value=0}}, metadata={engine-version=3.5.0, iceberg-version=Apache Iceberg 1.5.0 (commit 2519ab43d654927802cc02e19c917ce90e8e0265), app-id=spark-6b4ff4ba67df455d94af062c51825801, engine-name=spark}}
24/06/17 06:31:23 INFO SparkPartitioningAwareScan: Reporting UnknownPartitioning with 13356 partition(s) for table spark_catalog.p530_cimarronbp.attr_vals
24/06/17 06:31:23 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 32.0 KiB, free 434.4 MiB)
24/06/17 06:31:23 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 3.8 KiB, free 434.4 MiB)
24/06/17 06:31:23 INFO SparkContext: Created broadcast 7 from broadcast at SparkBatch.java:79
24/06/17 06:31:23 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 32.0 KiB, free 434.3 MiB)
24/06/17 06:31:23 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 3.8 KiB, free 434.3 MiB)
24/06/17 06:31:23 INFO SparkContext: Created broadcast 8 from broadcast at SparkBatch.java:79
24/06/17 06:31:23 INFO SparkContext: Starting job: collect at ExecuteStatement.scala:85
24/06/17 06:31:23 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Job 3 started with 1 stages, 1 active jobs running
24/06/17 06:31:23 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Stage 3.0 started with 1 tasks, 1 active stages running
2024-06-17 06:31:24.103 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Query[65d78d0c-0e89-4f2f-a41c-7a584520bc81] in RUNNING_STATE
24/06/17 06:31:24 INFO SQLOperationListener: Finished stage: Stage(3, 0); Name: 'collect at ExecuteStatement.scala:85'; Status: succeeded; numTasks: 1; Took: 343 msec
24/06/17 06:31:24 INFO DAGScheduler: Job 3 finished: collect at ExecuteStatement.scala:85, took 0.362979 s
24/06/17 06:31:24 INFO StatsReportListener: task runtime:(count: 1, mean: 326.000000, stdev: 0.000000, max: 326.000000, min: 326.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     326.0 ms        326.0 ms        326.0 ms        326.0 ms        326.0 ms        326.0 ms        326.0 ms   326.0 ms        326.0 ms
24/06/17 06:31:24 INFO StatsReportListener: shuffle bytes written:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B
24/06/17 06:31:24 INFO StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     0.0 ms  0.0 ms  0.0 ms  0.0 ms  0.0 ms  0.0 ms  0.0 ms  0.0 ms  0.0 ms
24/06/17 06:31:24 INFO StatsReportListener: remote bytes read:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B   0.0 B
24/06/17 06:31:24 INFO StatsReportListener: task result size:(count: 1, mean: 5057.000000, stdev: 0.000000, max: 5057.000000, min: 5057.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB 4.9 KiB
24/06/17 06:31:24 INFO StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 84.969325, stdev: 0.000000, max: 84.969325, min: 84.969325)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     85 %    85 %    85 %    85 %    85 %    85 %    85 %    85 %    85 %
24/06/17 06:31:24 INFO StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:      0 %     0 %     0 %     0 %     0 %     0 %     0 %     0 %     0 %
24/06/17 06:31:24 INFO StatsReportListener: other time pct: (count: 1, mean: 15.030675, stdev: 0.000000, max: 15.030675, min: 15.030675)
24/06/17 06:31:24 INFO StatsReportListener:     0%      5%      10%     25%     50%     75%     90%     95%     100%
24/06/17 06:31:24 INFO StatsReportListener:     15 %    15 %    15 %    15 %    15 %    15 %    15 %    15 %    15 %
24/06/17 06:31:24 INFO SQLOperationListener: Query [65d78d0c-0e89-4f2f-a41c-7a584520bc81]: Job 3 succeeded, 0 active jobs running
24/06/17 06:31:24 INFO ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: RUNNING_STATE -> FINISHED_STATE, time taken: 5.241 seconds
24/06/17 06:31:24 INFO ExecuteStatement: statementId=65d78d0c-0e89-4f2f-a41c-7a584520bc81, operationRunTime=0.3 s, operationCpuTime=77 ms
2024-06-17 06:31:24.344 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Query[65d78d0c-0e89-4f2f-a41c-7a584520bc81] in FINISHED_STATE
2024-06-17 06:31:24.344 INFO KyuubiSessionManager-exec-pool: Thread-63 org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[65d78d0c-0e89-4f2f-a41c-7a584520bc81]: RUNNING_STATE -> FINISHED_STATE, time taken: 5.244 seconds
+-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
| serial_num  | trans_seq  |   attr_name    |    pre_attr_value     |    post_attr_value    | in_run_file  | event_date  | family  | operation  |
+-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
| WRQ2NM0H    | 19         | STATE_NAME     | END                   | END                   | NULL         | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | TEST_DATE      | 04/27/2024 09:55:40   | 04/27/2024 09:55:40   | NULL         | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | FILE_TYPE      | NTR                   | NTR                   | NULL         | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | CUM_TEST_TIME  | 162301.49             | 212097.50             | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | PCBA_COMP_ID5  | 70437                 | 70437                 | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | CCVTEST        | NONE                  | NONE                  | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | PCBA_COMP_ID3  | 78810                 | 78810                 | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | PCBA_COMP_ID2  | 57706                 | 57706                 | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | PCBA_COMP_ID1  | 15229                 | 15229                 | 1            | 20240427    | 2TJ     | CAL2       |
| WRQ2NM0H    | 19         | FTFC_APC_DATE  | "0001-01-0100:00:00"  | "0001-01-0100:00:00"  | 1            | 20240427    | 2TJ     | CAL2       |
+-------------+------------+----------------+-----------------------+-----------------------+--------------+-------------+---------+------------+
10 rows selected (5.287 seconds)

Two executor pods are spawned in k8s.
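(Screenshot in the original.) A hypothetical listing of what this looks like; the executor pod names follow the spark.kubernetes.executor.podNamePrefix set in the engine launch command above:

kubectl get pods -n default
NAME                                                                                  READY   STATUS
kyuubi-59b89b87b7-bjckz                                                               1/1     Running
kyuubi-user-spark-sql-anonymous-default-50850fdf-6a15-4c47-bc5f-d8816d255ef8-exec-1   1/1     Running
kyuubi-user-spark-sql-anonymous-default-50850fdf-6a15-4c47-bc5f-d8816d255ef8-exec-2   1/1     Running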
