This article looks at the problem of Structured Streaming checkpoint files growing without bound, in the hope that it provides a useful reference for developers hitting the same issue.
Unbounded growth of Structured Streaming checkpoint files
The cause and how to deal with it:
https://www.waitingforcode.com/apache-spark-structured-streaming/checkpoint-storage-structured-streaming/read#will_it_grow_indifinetely
Will it grow indefinitely?
No. Apache Spark will always keep the number of checkpointed files that you specified in the configuration entry. The configuration entry responsible for that number is spark.sql.streaming.minBatchesToRetain and its default is 100.
You should not ignore this property, since it defines your data reprocessing period. For example, if you decide to keep only the last 10 entries, generated every minute, you will be unable to reprocess data older than 10 minutes - or at least not easily, by simply promoting the checkpointed information to the one used by the query. Checkpoint cleaning is a physical delete operation, so the information is lost permanently.
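That reprocessing window can be sketched with a quick back-of-the-envelope calculation (the retention count and trigger interval below are the illustrative values from the example above, not Spark defaults):

```python
# spark.sql.streaming.minBatchesToRetain bounds how far back you can go:
# the checkpoint only ever holds the newest N batches.
min_batches_to_retain = 10   # example value; Spark's default is 100
trigger_interval_min = 1     # one micro-batch per minute

# Oldest state still recoverable from the checkpoint:
reprocessing_window_min = min_batches_to_retain * trigger_interval_min
print(reprocessing_window_min)  # -> 10; anything older was physically deleted
```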
The answer:
You can use a more global property called spark.sql.streaming.checkpointLocation. If this property is set, Apache Spark will create the checkpoint directory under ${spark.sql.streaming.checkpointLocation}/${options.queryName}. If the queryName option is missing, it will generate a directory named with a random UUID identifier.
Always define queryName alongside spark.sql.streaming.checkpointLocation
If you want to use the checkpoint as your main fault-tolerance mechanism and you configure it with spark.sql.streaming.checkpointLocation, always define the queryName sink option. Otherwise, when the query restarts, Apache Spark will create a completely new checkpoint directory and therefore will not restore your checkpointed state!
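The consequence can be illustrated with a small simulation of the directory resolution described above (pure Python, not Spark's actual implementation; the root path and query name are made up for the sketch):

```python
import uuid
from pathlib import PurePosixPath

# Stand-in for spark.sql.streaming.checkpointLocation (illustrative path)
CHECKPOINT_ROOT = "/streaming/checkpoints"

def checkpoint_dir(query_name=None):
    # Mirrors the behavior described above: a named query maps to a stable
    # ${checkpointLocation}/${queryName} directory, while an anonymous query
    # gets a fresh random-UUID directory on every (re)start.
    name = query_name if query_name is not None else str(uuid.uuid4())
    return str(PurePosixPath(CHECKPOINT_ROOT) / name)

# With queryName set, a restart resolves to the same directory,
# so the previous state is picked up:
assert checkpoint_dir("orders_agg") == checkpoint_dir("orders_agg")

# Without queryName, each restart lands in a brand-new directory,
# and the old checkpointed state is silently left behind:
assert checkpoint_dir() != checkpoint_dir()
```

In a real job you would set the name on the sink via the DataStreamWriter, e.g. `df.writeStream.queryName("orders_agg")...start()`, before relying on the global checkpoint location.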
That wraps up this article on the unbounded growth of Structured Streaming checkpoint files; hopefully it is of some help to fellow developers!