This article shows how to write ORC-format files to HDFS from Flink, and hopefully gives developers facing the same problem a useful reference — follow along to learn how it works!
Method 1:
// Logical types and field names of the rows to be written
LogicalType[] orcTypes = new LogicalType[]{new VarCharType(255), new VarCharType(255), new IntType()};
String[] fields = new String[]{"name", "gread", "cource"};
// Derive the ORC schema (TypeDescription) from the Flink row type
TypeDescription typeDescription = OrcSplitReaderUtil.logicalTypeToOrcType(RowType.of(orcTypes, fields));
final Properties writerProps = new Properties();
writerProps.setProperty("orc.compress", "LZ4");
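Other writer options can be supplied through the same `Properties` object. A small sketch using standard ORC configuration keys — the extra keys and values below are illustrative additions, not taken from the original snippet:

```java
import java.util.Properties;

public class OrcWriterProps {
    // Build ORC writer properties; the keys are standard ORC
    // configuration names, the values here are example settings.
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("orc.compress", "LZ4");           // compression codec (NONE/ZLIB/SNAPPY/LZ4)
        props.setProperty("orc.compress.size", "262144");   // compression chunk size in bytes
        props.setProperty("orc.stripe.size", "67108864");   // target stripe size (64 MB)
        props.setProperty("orc.row.index.stride", "10000"); // rows between row-index entries
        return props;
    }
}
```

The resulting `Properties` object is passed to the `OrcBulkWriterFactory` constructor exactly like `writerProps` above.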
// Construct the OrcBulkWriterFactory
final OrcBulkWriterFactory<UserBehaviourLog> factory = new OrcBulkWriterFactory<>(
        new FieldsVectorizer(typeDescription.toString()), writerProps, new Configuration());

final StreamingFileSink<UserBehaviourLog> sink = StreamingFileSink
        .forBulkFormat(new Path(hdfsPath), factory)
        .withBucketAssigner(new LogBucketAssigner())
        .build();
.......
datastream.addSink(sink);
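The snippet references a `FieldsVectorizer` without showing it. A minimal sketch of what such a `Vectorizer` subclass typically looks like — the `UserBehaviourLog` getters (`getName`, `getGread`, `getCource`) are assumed to match the field list above, not taken from the article:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.orc.vector.Vectorizer;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Maps one UserBehaviourLog record to the next row of the ORC
// VectorizedRowBatch. Column order must match the schema passed
// to the constructor: name, gread, cource.
public class FieldsVectorizer extends Vectorizer<UserBehaviourLog> {

    public FieldsVectorizer(String schema) {
        super(schema);
    }

    @Override
    public void vectorize(UserBehaviourLog log, VectorizedRowBatch batch) throws IOException {
        int row = batch.size++;  // claim the next row in the batch

        // Column 0: name (varchar)
        BytesColumnVector nameCol = (BytesColumnVector) batch.cols[0];
        nameCol.setVal(row, log.getName().getBytes(StandardCharsets.UTF_8));

        // Column 1: gread (varchar)
        BytesColumnVector greadCol = (BytesColumnVector) batch.cols[1];
        greadCol.setVal(row, log.getGread().getBytes(StandardCharsets.UTF_8));

        // Column 2: cource (int, stored in a LongColumnVector)
        LongColumnVector courceCol = (LongColumnVector) batch.cols[2];
        courceCol.vector[row] = log.getCource();
    }
}
```

This class requires the `flink-orc` and `hive-storage-api` dependencies on the classpath; the sink then calls `vectorize` once per incoming record.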
That concludes this article on writing ORC-format HDFS files from Flink — we hope the article helps fellow developers!