Efficiently Deduplicating Massive Slow-SQL Log Files with Java and JSQLParser

2024-06-10 20:04


When working with large volumes of data, optimizing slow SQL is a necessary step, and once a slow-query log grows to several gigabytes, working on the raw file directly becomes impractical. This article shows how to parse and deduplicate a slow-SQL log with Java and the JSQLParser library, making the log far easier to analyze and maintain.

Background

In our production environment, some operations produce SQL statements with long execution times, which get recorded in the slow-query log. Each log entry contains the execution time, user information, the query statement, and so on. Because the log usually contains many duplicates of the same statement (often differing only in parameter values), reviewing it entry by entry is slow and inefficient, so deduplication is worthwhile.

Goal

This article deduplicates the slow-SQL log in the following steps:

  1. Read the log file and split its contents into comment lines and SQL statements.
  2. Parse the SQL and define when two statements count as the same.
  3. Store the parsed parts in an object that overrides equals and hashCode.
  4. Deduplicate with a Set.
  5. Write the deduplicated result to a file.
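Step 1 can be sketched on its own: every entry in the log starts with a `# Time:` marker line, so splitting the file reduces to grouping lines on that prefix. A minimal, self-contained sketch (class and method names are illustrative; the full parser later in the article additionally handles multi-line statements and comments):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LogSplitSketch {
    // Group raw lines into one list per slow-log entry, starting a new
    // entry at each "# Time:" marker line.
    static List<List<String>> splitEntries(List<String> lines) {
        List<List<String>> entries = new ArrayList<>();
        List<String> current = null;
        for (String line : lines) {
            if (line.trim().startsWith("# Time:")) {
                current = new ArrayList<>();
                entries.add(current);
            }
            if (current != null) {
                current.add(line);
            }
        }
        return entries;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "# Time: 2024-05-10T20:30:12.035337+08:00",
                "SET timestamp=1715344212;",
                "SELECT * FROM emp where name = '%三%';",
                "# Time: 2024-05-10T11:28:27.315966+08:00",
                "insert into scott.emp (name, age) values ('张三', 38);");
        System.out.println(splitEntries(lines).size()); // prints 2
    }
}
```

Each inner list then carries everything needed to build one metadata object in the later steps.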

Tools and dependencies

To implement the above we will use:

  • Java 8 or later
  • Maven
  • The JSQLParser library

Maven dependency

<dependency>
    <groupId>com.github.jsqlparser</groupId>
    <artifactId>jsqlparser</artifactId>
    <version>4.9</version>
</dependency>

Sample slow-SQL log

The following slow-query log excerpt contains, for each entry, the time, user information, database schema, query time, bytes sent, a timestamp, and the SQL statement:

# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-10T11:28:27.315966+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13666423
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.290658  Lock_time: 0.000131  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1
# Bytes_sent: 11
SET timestamp=1715311707;
insert into scott.emp        ( name, age)        values            ('张三',            38);
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-14T16:18:03.879351+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13966826
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.120938  Lock_time: 0.000100  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1
# Bytes_sent: 52
SET timestamp=1715674683;
UPDATE emp      SET `ename` = '张三',      `age` = 18 WHERE      `id` = 1045983421034180     ;
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-06T01:58:36.959671+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13387119
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 6.161219  Lock_time: 0.000875  Rows_sent: 0  Rows_examined: 2137468  Rows_affected: 0
# Bytes_sent: 11
SET timestamp=1714931916;
delete  from emp where id = 1;
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';

Result after deduplication

# slow.log	未知服务	4
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# slow.log	未知服务	1
# Time: 2024-05-14T16:18:03.879351+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13966826
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.120938  Lock_time: 0.000100  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1
# Bytes_sent: 52
SET timestamp=1715674683;
UPDATE emp      SET `ename` = '张三',      `age` = 18 WHERE      `id` = 1045983421034180     ;
# slow.log	未知服务	1
# Time: 2024-05-06T01:58:36.959671+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13387119
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 6.161219  Lock_time: 0.000875  Rows_sent: 0  Rows_examined: 2137468  Rows_affected: 0
# Bytes_sent: 11
SET timestamp=1714931916;
delete  from emp where id = 1;
# slow.log	未知服务	1
# Time: 2024-05-10T11:28:27.315966+08:00
# User@Host: root[root] @  [192.168.110.110]  Id: 13666423
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.290658  Lock_time: 0.000131  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1
# Bytes_sent: 11
SET timestamp=1715311707;
insert into scott.emp        ( name, age)        values            ('张三',            38);

Notes on the added header line

  1. # slow.log 未知服务 4 is a line added during deduplication. slow.log is the source file the SQL came from, which makes it easy to locate the first occurrence when the slow-query log has been split into many smaller files.
  2. 未知服务 ("unknown service") is the service name. In a microservice architecture, mapping the client IP to a service name makes the output more readable: each team only needs to look at the services it owns. You can also name the output files after each microservice, so every service's log is written to its own file.
  3. The number at the end of # slow.log 未知服务 4 is how many times that SQL statement occurred.

Deduplication and counting

import lombok.Data;
import lombok.experimental.Accessors;
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.statement.Statement;
import net.sf.jsqlparser.statement.delete.Delete;
import net.sf.jsqlparser.statement.insert.Insert;
import net.sf.jsqlparser.statement.select.Select;
import net.sf.jsqlparser.statement.select.SelectItem;
import net.sf.jsqlparser.statement.update.Update;
import net.sf.jsqlparser.statement.update.UpdateSet;
import net.sf.jsqlparser.util.TablesNamesFinder;
import org.springframework.util.StringUtils;

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

@Data
@Accessors(chain = true)
public class SlowQueryMetadata {

    private int count = 1;
    private String time;
    private String userAndHost;
    private String schema;
    private String queryTime;
    private String bytesSent;
    private String timestamp;
    private String sql;
    private String fileName;
    private String appName;

    /**
     * Keep the same format as the original log entry.
     */
    @Override
    public String toString() {
        return "# " + fileName + "\t" + appName + "\t" + count + "\n" +
                time + "\n" +
                userAndHost + "\n" +
                schema + "\n" +
                queryTime + "\n" +
                bytesSent + "\n" +
                timestamp + "\n" +
                sql + "\n";
    }

    /**
     * Treat statements of the same shape with different parameters as equal,
     * and count the duplicates.
     */
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SlowQueryMetadata slowQueryMetadata = (SlowQueryMetadata) o;
        boolean equals = Objects.equals(uniqueString(sql), uniqueString(slowQueryMetadata.sql));
        if (equals) {
            slowQueryMetadata.setCount(slowQueryMetadata.getCount() + 1);
        }
        return equals;
    }

    @Override
    public int hashCode() {
        return Objects.hash(uniqueString(sql));
    }

    /**
     * Strip the parameters out of the SQL statement.
     */
    private String uniqueString(String sql) {
        if (!StringUtils.hasText(sql)) {
            return "";
        }
        try {
            sql = sql.replace("\t", " ");
            sql = sql.replaceAll("\\s+", " ");
            // Strip /*...*/ comments
            sql = sql.replaceAll("/\\*.*?\\*/", "");
            // Parsing is the relatively expensive step
            Statement statement = CCJSqlParserUtil.parse(new StringReader(sql));
            if (statement instanceof Insert) {
                return insertStatement((Insert) statement);
            }
            if (statement instanceof Update) {
                return updateStatement((Update) statement);
            }
            if (statement instanceof Delete) {
                return deleteStatement((Delete) statement);
            }
            if (statement instanceof Select) {
                return selectStatement(statement);
            }
        } catch (Exception e) {
            return sql;
        }
        return sql;
    }

    private String selectStatement(Statement statement) {
        List<String> tables = new ArrayList<>(new TablesNamesFinder().getTables(statement));
        Select select = (Select) statement;
        try {
            List<SelectItem<?>> selectItems = select.getPlainSelect().getSelectItems();
            return "select " + selectItems + " " + tables;
        } catch (Exception e) {
            return "select " + tables;
        }
    }

    /**
     * Two DELETE statements count as the same SQL when the table name
     * and the columns in the WHERE clause match.
     */
    private String deleteStatement(Delete delete) {
        StringBuilder builder = new StringBuilder("delete ");
        builder.append(delete.getTable().getName().trim());
        builder.append(" where ");
        String[] ands = delete.getWhere().toString().toLowerCase().split("and");
        for (String and : ands) {
            builder.append(and.split("=")[0].trim()).append(" ");
        }
        return builder.toString();
    }

    /**
     * Two UPDATE statements count as the same SQL when the table name,
     * the updated columns, and the WHERE-clause columns match.
     */
    private String updateStatement(Update update) {
        StringBuilder builder = new StringBuilder("update ");
        builder.append(update.getTable().getName().trim());
        builder.append(" column ");
        for (UpdateSet updateSet : update.getUpdateSets()) {
            builder.append(updateSet.getColumns().toString().trim());
        }
        builder.append(" where ");
        String[] ands = update.getWhere().toString().toLowerCase().split("and");
        for (String and : ands) {
            builder.append(and.split("=")[0].trim()).append(" ");
        }
        return builder.toString();
    }

    /**
     * Two INSERT statements count as the same SQL when the table
     * and the column list match.
     */
    private String insertStatement(Insert statement) {
        return "insert " + statement.getTable().getName() + " " + statement.getColumns().toString();
    }
}
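One caveat about the equals implementation above: it mutates count as a side effect, and HashSet only calls equals when two elements land in the same bucket with equal hash codes, so correctness depends on hashCode and equals normalizing the SQL identically. A side-effect-free alternative is to key a map on the normalized SQL and merge counts explicitly. In this sketch, Entry and key are simplified stand-ins for SlowQueryMetadata and uniqueString:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeCountSketch {
    // Minimal stand-in for SlowQueryMetadata: the first raw statement plus a count.
    static final class Entry {
        final String sql;
        int count = 1;
        Entry(String sql) { this.sql = sql; }
    }

    // Stand-in for uniqueString: here, just case and whitespace normalization.
    static String key(String sql) {
        return sql.toLowerCase().replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        Map<String, Entry> byKey = new LinkedHashMap<>();
        for (String sql : Arrays.asList(
                "SELECT * FROM emp where name = '%三%'",
                "select * from   emp where name = '%三%'",
                "delete from emp where id = 1")) {
            Entry existing = byKey.get(key(sql));
            if (existing == null) {
                byKey.put(key(sql), new Entry(sql)); // keep the first occurrence verbatim
            } else {
                existing.count++;                    // no side effects hidden in equals
            }
        }
        byKey.values().forEach(e -> System.out.println(e.count + "\t" + e.sql));
        // prints:
        // 2	SELECT * FROM emp where name = '%三%'
        // 1	delete from emp where id = 1
    }
}
```

Keyed merging also keeps the first occurrence of each statement, which matches the behavior the article relies on for locating where a query first appeared.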
package com.study.jsqlparser;

import com.study.jsqlparser.dto.SlowQueryMetadata;
import com.study.jsqlparser.utils.FileUtils;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.StringUtils;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

@Slf4j
public class SlowQueryAnalysis {

    @SneakyThrows
    public static void main(String[] args) {
//        statistics4Directory();
        statistics4File();
    }

    /**
     * Deduplicate a single slow-SQL log file.
     */
    private static void statistics4File() throws IOException {
        String fileFullName = "slow.log";
        // File the deduplicated SQL is written to
        String deduplicationFileFullName = "deduplication.log";
        File slowFile = new File(fileFullName);
        List<String> list = FileUtils.readFileByLine(slowFile);
        log.info("Total lines read from the file: {}", list.size());
        List<SlowQueryMetadata> slowQueryMetadataList = getSqlDTOList(list, slowFile.getName());
        log.info("Extracted {} SQL statements", slowQueryMetadataList.size());
        HashSet<SlowQueryMetadata> set = new HashSet<>(slowQueryMetadataList);
        log.info("SQL statements after deduplication: {}", set.size());
        List<String> deduplication = set.stream()
                .sorted(Comparator.comparingInt(SlowQueryMetadata::getCount).reversed())
                .map(SlowQueryMetadata::toString)
                .collect(Collectors.toList());
        FileUtils.write2File(new File(deduplicationFileFullName), deduplication);
    }

    /**
     * Deduplicate all slow-SQL log files in a directory.
     */
    private static void statistics4Directory() throws IOException {
        String directoryFullName = "E:\\xinao\\sql优化\\0516慢SQL已分割\\sql\\";
        // File the deduplicated SQL is written to
        String deduplicationFileFullName = "deduplication.sql";
        Set<SlowQueryMetadata> set = new HashSet<>();
        for (File file : new File(directoryFullName).listFiles()) {
            String fileName = file.getName();
            log.info(fileName);
            List<String> list = FileUtils.readFileByLine(file);
            log.info("Total lines read from the file: {}", list.size());
            List<SlowQueryMetadata> slowQueryMetadataList = getSqlDTOList(list, fileName);
            log.info("Extracted {} SQL statements", slowQueryMetadataList.size());
            set.addAll(slowQueryMetadataList);
        }
        log.info("SQL statements after deduplication: {}", set.size());
        List<String> deduplication = set.stream()
                .sorted(Comparator.comparingInt(SlowQueryMetadata::getCount).reversed())
                .map(SlowQueryMetadata::toString)
                .collect(Collectors.toList());
        FileUtils.write2File(new File(deduplicationFileFullName), deduplication);
    }

    private static List<SlowQueryMetadata> getSqlDTOList(List<String> list, String fileName) {
        List<SlowQueryMetadata> slowQueryMetadataList = new ArrayList<>();
        for (int i = 0; i < list.size(); i++) {
            String line = list.get(i);
            if (!StringUtils.hasText(line)) {
                continue;
            }
            if (line.trim().startsWith("# Time:")) {
                SlowQueryMetadata slowQueryMetadata = new SlowQueryMetadata();
                slowQueryMetadata.setFileName(fileName);
                slowQueryMetadataList.add(slowQueryMetadata);
                slowQueryMetadata.setTime(line);
                boolean multilineComment = false;
                while (i < list.size() - 1) {
                    i++;
                    line = list.get(i);
                    // Handle multi-line comments
                    if (line.trim().contains("/*")) {
                        multilineComment = true;
                        continue;
                    }
                    if (line.trim().contains("*/")) {
                        multilineComment = false;
                        continue;
                    }
                    if (multilineComment) {
                        continue;
                    }
                    if (line.trim().startsWith("# Time:")) {
                        i--;
                        break;
                    }
                    if (line.startsWith("# User@Host:")) {
                        slowQueryMetadata.setUserAndHost(line);
                        evaluationAppName(line, slowQueryMetadata);
                    }
                    if (line.startsWith("# Schema:")) {
                        slowQueryMetadata.setSchema(line);
                    }
                    if (line.startsWith("# Query_time:")) {
                        slowQueryMetadata.setQueryTime(line);
                    }
                    if (line.startsWith("# Bytes_sent:")) {
                        slowQueryMetadata.setBytesSent(line);
                    }
                    if (line.startsWith("SET timestamp=")) {
                        slowQueryMetadata.setTimestamp(line);
                    }
                    if (line.toLowerCase().trim().startsWith("insert")
                            || line.toLowerCase().trim().startsWith("delete")
                            || line.toLowerCase().trim().startsWith("update")
                            || line.toLowerCase().trim().startsWith("select")) {
                        StringBuilder sql = new StringBuilder(line);
                        while (i < list.size() - 1) {
                            i++;
                            line = list.get(i);
                            if (line.startsWith("# Time: ")) {
                                i--;
                                break;
                            }
                            if (StringUtils.hasText(line) && !line.trim().startsWith("--")) {
                                sql.append(line);
                            }
                        }
                        slowQueryMetadata.setSql(sql.toString());
                        break;
                    }
                }
            }
        }
        return slowQueryMetadataList;
    }

    private static void evaluationAppName(String line, SlowQueryMetadata slowQueryMetadata) {
        try {
            // # User@Host: root[root] @  [192.168.100.101]  Id: 13523930
            String ip = line.substring(line.lastIndexOf("[") + 1, line.lastIndexOf("]"));
            Map<String, String> appMap = getAppMap();
            String appName = appMap.get(ip);
            if (!StringUtils.hasText(appName)) {
                appName = "未知服务"; // "unknown service"
            }
            slowQueryMetadata.setAppName(appName);
        } catch (Exception e) {
            slowQueryMetadata.setAppName("异常服务"); // "error resolving service"
        }
    }

    private static Map<String, String> getAppMap() {
        Map<String, String> appMap = new HashMap<>();
        appMap.put("192.168.100.101", "com-study-gateway");
        appMap.put("192.168.100.102", "com-study-gateway");
        appMap.put("192.168.100.103", "com-study-registry");
        appMap.put("192.168.100.104", "com-study-registry");
        appMap.put("192.168.100.105", "com-study-obs-web");
        appMap.put("192.168.100.106", "com-study-uid-web");
        return appMap;
    }
}

Using JSQLParser

INSERT statements

@Test
public void testInsertStatement() {
    String sql = "INSERT INTO emp (name, age) VALUES ('张三', 38)";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Insert) {
            Insert insert = (Insert) parse;
            Table table = insert.getTable();
            String name = table.getName();
            log.info(name);
            Values values = insert.getValues();
            log.info(values.toString());
            ExpressionList<Column> columns = insert.getColumns();
            log.info(columns.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
19:02:07.239 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:02:07.242 [main] INFO com.study.jsqlparser.JSQLParserTest - VALUES ('张三', 38)
19:02:07.242 [main] INFO com.study.jsqlparser.JSQLParserTest - name, age

DELETE statements

@Test
public void testDeleteStatement() {
    String sql = "DELETE FROM emp WHERE id = 1";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Delete) {
            Delete delete = (Delete) parse;
            Table table = delete.getTable();
            log.info(table.toString());
            Expression where = delete.getWhere();
            log.info(where.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
19:08:04.037 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:08:04.039 [main] INFO com.study.jsqlparser.JSQLParserTest - id = 1

UPDATE statements

@Test
public void testUpdateStatement() {
    String sql = "UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Update) {
            Update update = (Update) parse;
            log.info(update.getTable().toString());
            List<UpdateSet> updateSets = update.getUpdateSets();
            for (UpdateSet updateSet : updateSets) {
                ExpressionList<Column> columns = updateSet.getColumns();
                log.info(columns.toString());
                ExpressionList<?> values = updateSet.getValues();
                log.info(values.toString());
            }
            Expression where = update.getWhere();
            log.info(where.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
19:19:18.450 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - ename
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - '张三'
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - age
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - 18
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - id = 1045983421034180

Simple SELECT statements

@Test
public void testSelectStatement() {
    String sql = "SELECT name,age FROM emp WHERE name LIKE '%三%' group by name,age having avg(age) >18  limit 10";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Select) {
            Select select = (Select) parse;
            PlainSelect plainSelect = select.getPlainSelect();
            log.info(plainSelect.getSelectItems().toString());
            log.info(plainSelect.getFromItem().toString());
            log.info(plainSelect.getWhere().toString());
            log.info(plainSelect.getGroupBy().toString());
            log.info(plainSelect.getHaving().toString());
            log.info(plainSelect.getLimit().toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
19:47:22.904 [main] INFO com.study.jsqlparser.JSQLParserTest - [name, age]
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - name LIKE '%三%'
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - GROUP BY name, age
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - avg(age) > 18
19:47:22.907 [main] INFO com.study.jsqlparser.JSQLParserTest -  LIMIT 10

Complex SELECT statements

@Test
public void testComplexSelectStatement() {
    String sql = "select d.deptno,d.dname,avg(sal) from emp e join dept d on e.deptno = d.deptno where e.is_deleted = 0 group by d.deptno,d.dname having avg(sal)>10000 ORDER BY avg(sal) limit 10";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Select) {
            Select select = (Select) parse;
            PlainSelect plainSelect = select.getPlainSelect();
            log.info(plainSelect.getSelectItems().toString());
            log.info(plainSelect.getFromItem().toString());
            log.info(plainSelect.getJoins().toString());
            log.info(plainSelect.getWhere().toString());
            log.info(plainSelect.getGroupBy().toString());
            log.info(plainSelect.getHaving().toString());
            log.info(plainSelect.getOrderByElements().toString());
            log.info(plainSelect.getLimit().toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
08:34:32.895 [main] INFO com.study.jsqlparser.JSQLParserTest - [d.deptno, d.dname, avg(sal)]
08:34:32.898 [main] INFO com.study.jsqlparser.JSQLParserTest - emp e
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - [JOIN dept d ON e.deptno = d.deptno]
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - e.is_deleted = 0
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - GROUP BY d.deptno, d.dname
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - avg(sal) > 10000
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest -  LIMIT 10
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - [avg(sal)]

Complete JSQLParserTest code

import lombok.extern.slf4j.Slf4j;
import net.sf.jsqlparser.JSQLParserException;
import net.sf.jsqlparser.expression.Expression;
import net.sf.jsqlparser.expression.operators.relational.ExpressionList;
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.schema.Column;
import net.sf.jsqlparser.schema.Table;
import net.sf.jsqlparser.statement.Statement;
import net.sf.jsqlparser.statement.delete.Delete;
import net.sf.jsqlparser.statement.insert.Insert;
import net.sf.jsqlparser.statement.select.PlainSelect;
import net.sf.jsqlparser.statement.select.Select;
import net.sf.jsqlparser.statement.select.Values;
import net.sf.jsqlparser.statement.update.Update;
import net.sf.jsqlparser.statement.update.UpdateSet;
import org.junit.jupiter.api.Test;

import java.util.List;

@Slf4j
public class JSQLParserTest {

    @Test
    public void testInsertStatement() {
        String sql = "INSERT INTO emp (name, age) VALUES ('张三', 38)";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Insert) {
                Insert insert = (Insert) parse;
                Table table = insert.getTable();
                String name = table.getName();
                log.info(name);
                Values values = insert.getValues();
                log.info(values.toString());
                ExpressionList<Column> columns = insert.getColumns();
                log.info(columns.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testDeleteStatement() {
        String sql = "DELETE FROM emp WHERE id = 1";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Delete) {
                Delete delete = (Delete) parse;
                Table table = delete.getTable();
                log.info(table.toString());
                Expression where = delete.getWhere();
                log.info(where.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testUpdateStatement() {
        String sql = "UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Update) {
                Update update = (Update) parse;
                log.info(update.getTable().toString());
                List<UpdateSet> updateSets = update.getUpdateSets();
                for (UpdateSet updateSet : updateSets) {
                    ExpressionList<Column> columns = updateSet.getColumns();
                    log.info(columns.toString());
                    ExpressionList<?> values = updateSet.getValues();
                    log.info(values.toString());
                }
                Expression where = update.getWhere();
                log.info(where.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testSelectStatement() {
        String sql = "SELECT name,age FROM emp WHERE name LIKE '%三%' group by name,age having avg(age) >18  limit 10";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Select) {
                Select select = (Select) parse;
                PlainSelect plainSelect = select.getPlainSelect();
                log.info(plainSelect.getSelectItems().toString());
                log.info(plainSelect.getFromItem().toString());
                log.info(plainSelect.getWhere().toString());
                log.info(plainSelect.getGroupBy().toString());
                log.info(plainSelect.getHaving().toString());
                log.info(plainSelect.getLimit().toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testComplexSelectStatement() {
        String sql = "select d.deptno,d.dname,avg(sal) from emp e join dept d on e.deptno = d.deptno where e.is_deleted = 0 group by d.deptno,d.dname having avg(sal)>10000 ORDER BY avg(sal) limit 10";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Select) {
                Select select = (Select) parse;
                PlainSelect plainSelect = select.getPlainSelect();
                log.info(plainSelect.getSelectItems().toString());
                log.info(plainSelect.getFromItem().toString());
                log.info(plainSelect.getJoins().toString());
                log.info(plainSelect.getWhere().toString());
                log.info(plainSelect.getGroupBy().toString());
                log.info(plainSelect.getHaving().toString());
                log.info(plainSelect.getOrderByElements().toString());
                log.info(plainSelect.getLimit().toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }
}

Conclusion

With the approach described here, slow-SQL log files several gigabytes in size can be deduplicated effectively, which greatly speeds up review and shrinks the files. The same technique extends beyond slow-SQL logs to other log-deduplication tasks.

Future work

Possible directions for further optimization:

  1. Smarter SQL normalization that handles more complex statements.
  2. Parallelizing the processing, using multiple threads or distributed computation to speed it up.
  3. A graphical tool so that operations staff can run the deduplication and browse the results easily.

With continued refinement, the efficiency and accuracy of slow-SQL log processing can be improved further, giving a solid foundation for data processing and optimization work.
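The second item above (parallelizing) needs little new machinery: normalizing each statement, whose JSQLParser parse call is the expensive step, is independent across statements, so a parallel stream is a natural first cut. A sketch under that assumption, with a simple regex transform standing in for the real parser-based uniqueString:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ParallelDedupSketch {
    // Stand-in for the expensive CCJSqlParserUtil.parse-based normalization.
    static String normalize(String sql) {
        return sql.toLowerCase()
                  .replaceAll("'[^']*'", "?")  // blank out string literals
                  .replaceAll("\\s+", " ")     // collapse whitespace
                  .trim();
    }

    public static void main(String[] args) {
        List<String> sqls = Arrays.asList(
                "SELECT * FROM emp where name = '%三%'",
                "SELECT * FROM emp where name = '%四%'",
                "delete  from emp where id = 1");
        // Normalization runs in parallel; occurrences are counted per normalized key.
        Map<String, Long> counts = sqls.parallelStream()
                .collect(Collectors.groupingByConcurrent(
                        ParallelDedupSketch::normalize, Collectors.counting()));
        counts.forEach((sql, n) -> System.out.println(n + "\t" + sql));
    }
}
```

Because groupingByConcurrent returns an unordered ConcurrentMap, the counts would still need a final sort by count before writing, as in statistics4File.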

