This article walks through the FlinkX startup flow to build an overall understanding of how a job gets launched; hopefully it serves as a useful reference for developers working with FlinkX.
1. Start with the launch script
The contents of bin/flinkx:
set -e
export FLINKX_HOME="$(cd "`dirname "$0"`"/..; pwd)"
# Find the java binary
if [ -n "${JAVA_HOME}" ]; then
    JAVA_RUN="${JAVA_HOME}/bin/java"
elif [ `command -v java` ]; then
    JAVA_RUN="java"
else
    echo "JAVA_HOME is not set" >&2
    exit 1
fi
JAR_DIR=$FLINKX_HOME/lib/*
CLASS_NAME=com.dtstack.flinkx.launcher.Launcher
# Arg 1: the java command
# -cp sets the classpath (explained at https://zhuanlan.zhihu.com/p/214093661)
# -cp $JAR_DIR $CLASS_NAME tells java which jars to load and which main class to run
# $@ forwards the script's own arguments
# & runs the process in the background
echo "flinkx starting ..."
nohup $JAVA_RUN -cp $JAR_DIR $CLASS_NAME $@ &
- First, resolve the FLINKX_HOME environment variable
- Then locate the java binary
- Specify the FlinkX jars and the launcher main class
- Finally execute nohup $JAVA_RUN -cp $JAR_DIR $CLASS_NAME $@ &
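The java-binary lookup step above can be sketched as a standalone function. This is a minimal illustration, not part of the real script: the helper name find_java is hypothetical (bin/flinkx inlines this logic directly), but the precedence is the same: JAVA_HOME first, then java on the PATH, otherwise fail.

```shell
# Sketch of the java-binary lookup performed by bin/flinkx.
# find_java is a hypothetical helper name; the real script inlines this.
find_java() {
    if [ -n "${JAVA_HOME}" ]; then
        # Prefer the JDK pointed to by JAVA_HOME
        echo "${JAVA_HOME}/bin/java"
    elif command -v java >/dev/null 2>&1; then
        # Fall back to whatever java is on the PATH
        echo "java"
    else
        echo "JAVA_HOME is not set" >&2
        return 1
    fi
}

# With JAVA_HOME set, the JDK binary wins:
(export JAVA_HOME=/opt/jdk; find_java)   # prints /opt/jdk/bin/java
```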
2. Next, the Launcher entry class
Open flinkx-1.8.5\flinkx-launcher\src\main\java\com\dtstack\flinkx\launcher\Launcher.java and look at the main method (around line 95), where local mode is handled:
if (mode.equals(ClusterMode.local.name())) {
    String[] localArgs = argList.toArray(new String[argList.size()]);
    com.dtstack.flinkx.Main.main(localArgs);
}
So in local mode, FlinkX actually delegates to the com.dtstack.flinkx.Main.main method. Let's look inside:
public static void main(String[] args) throws Exception {
    // Parse the command-line arguments
    com.dtstack.flinkx.options.Options options = new OptionParser(args).getOptions();
    String job = options.getJob();                 // -job
    String jobIdString = options.getJobid();       // Flink job id
    String monitor = options.getMonitor();
    String pluginRoot = options.getPluginRoot();   // -pluginRoot
    String savepointPath = options.getS();
    Properties confProperties = parseConf(options.getConfProp()); // -confProp

    // Parse the job configuration JSON file specified by jobPath
    DataTransferConfig config = DataTransferConfig.parse(job);
    speedTest(config);
    if (StringUtils.isNotEmpty(monitor)) {
        config.setMonitorUrls(monitor);
    }
    if (StringUtils.isNotEmpty(pluginRoot)) {
        config.setPluginRoot(pluginRoot);
    }

    StreamExecutionEnvironment env = (StringUtils.isNotBlank(monitor)) ?
            StreamExecutionEnvironment.getExecutionEnvironment() :
            new MyLocalStreamEnvironment();
    env = openCheckpointConf(env, confProperties);
    configRestartStrategy(env, config);

    SpeedConfig speedConfig = config.getJob().getSetting().getSpeed();
    env.setParallelism(speedConfig.getChannel());
    env.setRestartStrategy(RestartStrategies.noRestart());

    // Resolve the concrete reader class
    BaseDataReader dataReader = DataReaderFactory.getDataReader(config, env);
    DataStream<Row> dataStream = dataReader.readData();
    dataStream = ((DataStreamSource<Row>) dataStream).setParallelism(speedConfig.getReaderChannel());
    if (speedConfig.isRebalance()) {
        dataStream = dataStream.rebalance();
    }

    // Resolve the concrete writer class
    BaseDataWriter dataWriter = DataWriterFactory.getDataWriter(config);
    dataWriter.writeData(dataStream).setParallelism(speedConfig.getWriterChannel());

    if (env instanceof MyLocalStreamEnvironment) {
        if (StringUtils.isNotEmpty(savepointPath)) {
            ((MyLocalStreamEnvironment) env).setSettings(SavepointRestoreSettings.forPath(savepointPath));
        }
    }
    addEnvClassPath(env, ClassLoaderManager.getClassPath());

    // Execute the job and collect the result
    JobExecutionResult result = env.execute(jobIdString);
    if (env instanceof MyLocalStreamEnvironment) {
        ResultPrintUtil.printResult(result);
    }
}
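The DataTransferConfig.parse(job) call above reads the job JSON passed via -job. As a rough illustration only (the reader/writer parameter bodies are elided, and the plugin names here are placeholders for an ftp-to-stream job), the overall layout matches what the code accesses, e.g. config.getJob().getSetting().getSpeed().getChannel():

```json
{
  "job": {
    "content": [
      {
        "reader": {
          "name": "ftpreader",
          "parameter": {}
        },
        "writer": {
          "name": "streamwriter",
          "parameter": {}
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
```

The reader and writer names are what DataReaderFactory and DataWriterFactory use to pick the concrete plugin classes, and setting.speed drives the parallelism seen in the main method.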
3. Putting it together with start.sh
Combined with the start.sh script, the whole flow becomes clear:
D:/Projects/flinkx-1.8.5/bin/flinkx \
-mode "local" \
-job D:/Projects/flinkx-1.8.5/job/ftp2stream.json \
-pluginRoot "D:/Projects/flinkx-1.8.5/plugins" \
-flinkconf "D:/Projects/flinkx-1.8.5/flinkconf" \
-confProp "{\"flink.checkpoint.interval\":60000}"
The command that bin/flinkx then executes looks like this:
nohup java -cp \
D:\Projects\flinkx-1.8.5\lib\flinkx-launcher-1.6.jar \
com.dtstack.flinkx.launcher.Launcher \
-mode "local" \
-job D:/Projects/flinkx-1.8.5/job/ftp2stream.json \
-pluginRoot "D:/Projects/flinkx-1.8.5/plugins" \
-flinkconf "D:/Projects/flinkx-1.8.5/flinkconf" \
-confProp "{\"flink.checkpoint.interval\":60000}" &
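Note how the -confProp value is passed: it is a JSON string, so the inner double quotes have to be escaped so the shell delivers them intact to the JVM. A small illustration:

```shell
# The -confProp argument is a JSON string; escaping the inner quotes
# keeps them intact through shell expansion.
CONF_PROP="{\"flink.checkpoint.interval\":60000}"
echo "$CONF_PROP"   # prints {"flink.checkpoint.interval":60000}
```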
The Launcher class reads mode, pluginRoot, and flinkconf to set up the environment, and the job JSON file determines the reader and writer types and the synchronization strategy.
That concludes this overview of the FlinkX startup flow; I hope it proves helpful to fellow developers.