This article describes how to build a highly available Spark + PySpark cluster on Kubernetes using a Headless Service.
1. Creating the Headless Service
A Headless Service does not allocate a cluster virtual IP; instead it exposes DNS records for the Pods behind it. There is no default load balancer, and the Pod IP addresses are reachable directly. Headless Services are therefore useful whenever we need to talk directly to the real Pod IPs inside the cluster.
The key Service setting is clusterIP: None: the Service is not allocated a clusterIP, and DNS resolution goes straight to the Pods.
---
kind: Service
apiVersion: v1
metadata:
  name: ecc-spark-service
  namespace: ecc-spark-cluster
spec:
  clusterIP: None
  ports:
    - port: 7077
      protocol: TCP
      targetPort: 7077
      name: spark
    - port: 10000
      protocol: TCP
      targetPort: 10000
      name: thrift-server-tcp
    - port: 8080
      targetPort: 8080
      name: http
    - port: 45970
      protocol: TCP
      targetPort: 45970
      name: thrift-server-driver-tcp
    - port: 45980
      protocol: TCP
      targetPort: 45980
      name: thrift-server-blockmanager-tcp
    - port: 4040
      protocol: TCP
      targetPort: 4040
      name: thrift-server-tasks-tcp
  selector:
    app: ecc-spark-service
Fully qualified domain name of the Service: ecc-spark-service.ecc-spark-cluster.svc.cluster.local
Fully qualified domain name of the headless Service: headless-service.ecc-spark-cluster.svc.cluster.local
Pinging the FQDN from inside a container shows the difference: a regular Service resolves to its clusterIP, while a headless Service resolves to the Pod IPs.
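The same check can be scripted. A minimal sketch, meant to run inside a Pod of the cluster (the two FQDNs are the ones above; resolution fails outside the cluster DNS):

import socket

# A regular Service resolves to its single clusterIP; a headless Service
# (clusterIP: None) resolves directly to the IPs of the backing Pods.
for fqdn in [
    "ecc-spark-service.ecc-spark-cluster.svc.cluster.local",
    "headless-service.ecc-spark-cluster.svc.cluster.local",
]:
    try:
        _, _, ips = socket.gethostbyname_ex(fqdn)
        print(fqdn, "->", ips)
    except socket.gaierror as err:
        print(fqdn, "-> not resolvable:", err)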
2. Building the Spark cluster
2.1 Creating the Spark master
The Spark master setup has two parts: the workload itself, a Deployment defined in ecc-spark-master.yaml, and a Service that exposes the master's port 7077 to the workers.
# The Thrift Server is deployed on the master node here, so the Thrift Server port,
# the driver port, and the block manager port must be exposed as well, so that
# executors on the worker nodes can talk to the driver.
cat >ecc-spark-master.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ecc-spark-master
  namespace: ecc-spark-cluster
  labels:
    app: ecc-spark-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecc-spark-master
  template:
    metadata:
      labels:
        app: ecc-spark-master
    spec:
      serviceAccountName: spark-cdp
      securityContext: {}
      dnsPolicy: ClusterFirst
      hostname: ecc-spark-master
      containers:
        - name: ecc-spark-master
          image: spark:3.4.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "sh /opt/spark/sbin/start-master.sh && tail -f /opt/spark/logs/spark--org.apache.spark.deploy.master.Master-1-*"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          volumeMounts:
            - mountPath: /opt/usrjars/
              name: ecc-spark-pvc
          livenessProbe:
            failureThreshold: 9
            initialDelaySeconds: 2
            periodSeconds: 15
            successThreshold: 1
            tcpSocket:
              port: 8080
            timeoutSeconds: 10
          resources:
            requests:
              cpu: "2"
              memory: "6Gi"
            limits:
              cpu: "2"
              memory: "6Gi"
          env:
            - name: SPARK_LOCAL_DIRS
              value: "/odsdata/sparkdirs/"
      volumes:
        - name: ecc-spark-pvc
          persistentVolumeClaim:
            claimName: ecc-spark-pvc-static
EOF
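After applying the manifest (kubectl apply -f ecc-spark-master.yaml), the master can be checked beyond the TCP liveness probe: the standalone master web UI also serves its state as JSON on port 8080. A minimal sketch, assuming that port is reachable from wherever the script runs (inside the cluster, or via a port-forward):

import json
import urllib.request

# The standalone master web UI exposes cluster state at /json on port 8080.
MASTER_UI = "http://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:8080/json"

with urllib.request.urlopen(MASTER_UI, timeout=10) as resp:
    state = json.load(resp)

print("status:", state.get("status"))             # expect ALIVE
print("workers:", len(state.get("workers", [])))  # 0 until workers join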
2.2 Creating the Spark workers
The worker startup script needs the master's address. Because Kubernetes DNS is available and a Service has been defined, the master can be reached at ecc-spark-master.ecc-spark-cluster.svc.cluster.local:7077.
cat >ecc-spark-worker.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ecc-spark-worker
  namespace: ecc-spark-cluster
  labels:
    app: ecc-spark-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecc-spark-worker
  template:
    metadata:
      labels:
        app: ecc-spark-worker
    spec:
      serviceAccountName: spark-cdp
      securityContext: {}
      dnsPolicy: ClusterFirst
      hostname: ecc-spark-worker
      containers:
        - name: ecc-spark-worker
          image: spark:3.4.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c", "sh /opt/spark/sbin/start-worker.sh spark://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:7077; tail -f /opt/spark/logs/spark--org.apache.spark.deploy.worker.Worker*"]
          ports:
            - containerPort: 8081
          volumeMounts:
            - mountPath: /opt/usrjars/
              name: ecc-spark-pvc
          resources:
            requests:
              cpu: "2"
              memory: "2Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
          env:
            - name: SPARK_LOCAL_DIRS
              value: "/odsdata/sparkdirs/"
      volumes:
        - name: ecc-spark-pvc
          persistentVolumeClaim:
            claimName: ecc-spark-pvc-static
EOF
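Once both Deployments are applied, a quick smoke test confirms that the worker registered with the master and can run tasks. A sketch, assuming pyspark is importable where it runs (e.g. inside the master container):

from pyspark.sql import SparkSession

# Connect to the standalone master and run a trivial job end to end.
spark = (
    SparkSession.builder
    .master("spark://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:7077")
    .appName("cluster-smoke-test")
    .getOrCreate()
)

print(spark.sparkContext.parallelize(range(100)).sum())  # expect 4950
spark.stop()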
2.3 Building the PySpark submission environment
A lightweight Flask service can run alongside the master to receive HTTP requests and submit PySpark tasks asynchronously through a thread pool:
import json
import flask
from flask import Flask
from concurrent.futures import ThreadPoolExecutor

app = Flask(__name__)
pool = ThreadPoolExecutor(max_workers=8)

@app.route('/')
def hello_world():  # put application's code here
    return 'Hello World!'

@app.route('/downloadCode', methods=['post'])
def download_file():
    model_id = flask.request.json.get('modelId')
    print(model_id)
    """Submit the task asynchronously: pool.submit()"""
    return json.dumps(0, ensure_ascii=False)

@app.route('/modelRun', methods=['post'])
def model_run():
    """Submit the task asynchronously: pool.submit()"""
    return json.dumps(0, ensure_ascii=False)

if __name__ == '__main__':
    app.run()
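The pool.submit() placeholders above are where a job actually gets launched. One possible shape for that task is a spark-submit call run in a worker thread; a sketch, where the job script /opt/usrjars/model_job.py is a hypothetical entry point:

import subprocess

def run_model(model_id: str) -> None:
    # Launch a PySpark job on the standalone cluster; blocks until the job
    # finishes, which is fine inside a ThreadPoolExecutor worker thread.
    subprocess.run(
        [
            "/opt/spark/bin/spark-submit",
            "--master", "spark://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:7077",
            "/opt/usrjars/model_job.py",  # hypothetical job entry point
            model_id,
        ],
        check=True,
    )

# Inside model_run() the call is queued instead of blocking the HTTP request:
# pool.submit(run_model, model_id)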
Python 3 is already available inside the spark container, which the submission service relies on:
spark@c67e6477b2f1:/opt/spark$ python3
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Integrate the Python service startup at the end of start-master.sh so it launches together with the master; HTTP calls can then be made through the spark-master port exposed by k8s (e.g. via an F5 load balancer).
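A client call against the exposed endpoint might then look like the sketch below; it assumes the Flask default port 5000 is the one exposed, and "demo-001" is a made-up modelId:

import requests

BASE = "http://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:5000"

# Trigger an asynchronous model run; the service returns immediately.
resp = requests.post(BASE + "/modelRun", json={"modelId": "demo-001"})
print(resp.status_code, resp.text)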
3. Installing the Spark cluster with spark-operator
For this approach, see the Alibaba Cloud article "搭建Spark应用" (Building a Spark application).
That wraps up building a highly available Spark + PySpark cluster on top of a Headless Service.