This article presents a DataX configuration example with MySQL as the source and HDFS as the sink, using TableMode. It should serve as a handy reference for developers setting up a similar job.
Pay close attention to the key parameters in the configuration below.
Example: import MySQL's base_province table into HDFS (the config also demonstrates a where filter, id>=3).
------------------------------------
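In TableMode, mysqlreader assembles its query from the "column", "table", and "where" entries of the job file below. A minimal sketch of that mapping (illustrative only, not DataX's internal code):

```python
# Illustrative sketch (not DataX internals): in TableMode, mysqlreader
# effectively builds a SELECT from the "column", "table", and "where"
# entries of the job JSON below.
columns = ["id", "name", "region_id", "area_code",
           "iso_code", "iso_3166_2", "create_time", "operate_time"]
table = "base_province"
where = "id>=3"

query = f"SELECT {', '.join(columns)} FROM {table} WHERE {where}"
print(query)
```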
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "column": [
                            "id",
                            "name",
                            "region_id",
                            "area_code",
                            "iso_code",
                            "iso_3166_2",
                            "create_time",
                            "operate_time"
                        ],
                        "where": "id>=3",
                        "connection": [
                            {
                                "jdbcUrl": [
                                    "jdbc:mysql://hadoop102:3306/gmall?useUnicode=true&allowPublicKeyRetrieval=true&characterEncoding=utf-8"
                                ],
                                "table": [
                                    "base_province"
                                ]
                            }
                        ],
                        "password": "000000",
                        "splitPk": "",
                        "username": "root"
                    }
                },
                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "column": [
                            { "name": "id", "type": "bigint" },
                            { "name": "name", "type": "string" },
                            { "name": "region_id", "type": "string" },
                            { "name": "area_code", "type": "string" },
                            { "name": "iso_code", "type": "string" },
                            { "name": "iso_3166_2", "type": "string" },
                            { "name": "create_time", "type": "string" },
                            { "name": "operate_time", "type": "string" }
                        ],
                        "compress": "gzip",
                        "defaultFS": "hdfs://hadoop102:8020",
                        "fieldDelimiter": "\t",
                        "fileName": "base_province",
                        "fileType": "text",
                        "path": "/base_province",
                        "writeMode": "append"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": 1
            }
        }
    }
}
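A common pitfall with jobs like this one is a reader/writer column mismatch: hdfswriter maps fields to the reader's output by position, so the two column lists must have the same length and ordering. A small pre-flight check (a sketch; the filename in the usage comment is an assumption) can catch this before submitting the job:

```python
import json  # used when loading the saved job file, as in the usage comment

def check_columns(job: dict) -> int:
    """Pre-flight sanity check: DataX maps reader columns to writer
    columns by position, so the two lists must be the same length.
    Returns the shared column count."""
    content = job["job"]["content"][0]
    reader_cols = content["reader"]["parameter"]["column"]
    writer_cols = [c["name"] for c in content["writer"]["parameter"]["column"]]
    assert len(reader_cols) == len(writer_cols), (
        f"reader has {len(reader_cols)} columns, "
        f"writer has {len(writer_cols)}"
    )
    return len(reader_cols)

# Hypothetical usage, assuming the job above is saved as base_province.json:
# with open("base_province.json") as f:
#     print(check_columns(json.load(f)))  # 8 for this job
```

The job itself is then launched with DataX's standard entry point, e.g. `python bin/datax.py base_province.json` (exact paths depend on your installation).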
That wraps up this DataX configuration example (Source: MySQL, Sink: HDFS, TableMode). Hopefully it proves helpful in your own work!