This article, the seventy-seventh in this series, walks through installing Scala 2.11 (scala211) and Spark 2.4.5 (spark245) on Linux in one pass.
1. Installing Scala
Unpack the Scala tarball into /opt/soft and rename the directory to scala211:
cd /opt/soft/install/
ls
tar -zxvf scala-2.11.12.tgz -C /opt/soft
cd /opt/soft
ls
mv scala-2.11.12 scala211
Next, edit the profile file:
vi /etc/profile
Add the Scala installation directory:
#scala
export SCALA_HOME=/opt/soft/scala211
export PATH=$PATH:$SCALA_HOME/bin
Then source it so the changes take effect:
source /etc/profile
Test it:
scala
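If the PATH is set correctly, the scala command drops you into the 2.11 REPL. A few extra sanity checks, assuming the profile was sourced in the current shell:
scala -version          # should report "Scala code runner version 2.11.12"
which scala             # should resolve to /opt/soft/scala211/bin/scala
echo $SCALA_HOME        # should print /opt/soft/scala211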
2. Installing Spark
cd /opt/soft/install/
ls
tar -zxvf spark-2.4.5-bin-hadoop2.6.tgz -C /opt/soft
cd /opt/soft
ls
mv spark-2.4.5-bin-hadoop2.6 spark245
ls
cd /opt/soft/spark245/
ls
pwd
vi /etc/profile
Add the Spark installation directory:
#spark
export SPARK_HOME=/opt/soft/spark245
export PATH=$PATH:$SPARK_HOME/bin
cd ./conf/
ls
cp spark-env.sh.template spark-env.sh
ls
vi ./spark-env.sh
export JAVA_HOME=/opt/jdk1.8.0_221                  # JDK installation path
export SCALA_HOME=/opt/soft/scala211                # Scala installation path
export SPARK_HOME=/opt/soft/spark245                # Spark installation path
export HADOOP_INSTALL=/opt/soft/install/hadoop260   # Hadoop installation path
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop   # Hadoop config dir, needed for YARN mode
export SPARK_MASTER_IP=gree128                      # master hostname (newer releases prefer SPARK_MASTER_HOST)
export SPARK_DRIVER_MEMORY=2G                       # driver memory
export SPARK_EXECUTOR_MEMORY=2G                     # memory per executor
export SPARK_LOCAL_DIRS=/opt/soft/spark245          # scratch/shuffle directory (here, the install dir itself)
Next, create the worker list from its template:
cp slaves.template slaves
vi ./slaves
localhost
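The slaves file lists one worker hostname per line; keeping only localhost runs the single worker on this machine. For a small cluster the file would instead name each worker node, as in this sketch with hypothetical hostnames:
gree129
gree130
gree131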
Then source it:
source /etc/profile
Then start Hadoop first, and launch Spark in YARN mode afterwards.
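The walkthrough assumes Hadoop is already installed at /opt/soft/install/hadoop260, the path configured in spark-env.sh above. A minimal start-up sketch under that assumption:
/opt/soft/install/hadoop260/sbin/start-dfs.sh    # HDFS daemons: NameNode, DataNode, SecondaryNameNode
/opt/soft/install/hadoop260/sbin/start-yarn.sh   # YARN daemons: ResourceManager, NodeManager
jps                                              # verify the daemons are running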
spark-shell
The --master flag picks the deployment mode:
1. Local: spark-shell --master local[*]
2. Standalone (small cluster): spark-shell --master spark://MASTERHOST:7077
3. YARN: spark-shell --master yarn --deploy-mode client (the old yarn-client master URL was removed in Spark 2.x)
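Beyond opening the shell, a quick smoke test is to submit the bundled SparkPi example to YARN. A minimal sketch; the examples jar ships with the distribution, so check examples/jars for the exact file name:
spark-submit --master yarn --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.5.jar 100
# look for a line like "Pi is roughly 3.14..." in the driver output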
This concludes the walkthrough of installing Scala211 and spark245 on Linux.