Data source auto-disconnects after 10 minutes of idleness, causing query exceptions (no valid connection available)


Given my limited experience, this article only reflects my own understanding; please point out anything that is wrong, and contact me if there is any infringement.

1 Background

Our project uses Druid to manage data source connections. The data source forcibly closes any connection that has been idle for more than 10 minutes, so after every such idle window the next query fails with a "cannot obtain a valid connection" exception. Business traffic is very low (perhaps one request per hour). I therefore dug into the Druid source code and worked out two solutions; a programmatic sketch of both follows after this list.

(1) Set maxEvictableIdleTimeMillis to 300000, so that connections idle for more than 5 minutes are forcibly evicted (minEvictableIdleTimeMillis must be lowered below this value as well, because init() requires maxEvictableIdleTimeMillis to be greater than minEvictableIdleTimeMillis). The next request then establishes a fresh connection.
Pros: suitable for scheduled jobs or services with extremely low request volume.

(2) Keep-alive:
keepAlive: true
keepAliveBetweenTimeMillis: 120000
Pros: valid connections are kept alive continuously, so business requests get an immediate response.
Cons: the cost of holding idle connections open.
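
Below is a minimal Java sketch of both options (my own illustration, not code from the project; the JDBC URL and credentials are placeholders), mapping the settings above onto DruidDataSource setters. The same values can also be supplied through spring.datasource.druid.* properties.

```java
// Hedged sketch: programmatic equivalents of option (1) and option (2).
// The URL and credentials below are placeholders, not the project's real configuration.
import com.alibaba.druid.pool.DruidDataSource;

public class DruidIdleConfigSketch {
    // Option (1): evict idle connections before the server-side 10-minute cutoff.
    public static DruidDataSource evictEarlyPool() {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder
        ds.setUsername("demo");                        // placeholder
        ds.setPassword("demo");                        // placeholder
        ds.setMinEvictableIdleTimeMillis(240_000);     // must stay below maxEvictableIdleTimeMillis
        ds.setMaxEvictableIdleTimeMillis(300_000);     // 5 minutes, as in option (1)
        return ds;
    }

    // Option (2): keep minIdle connections alive with periodic validation.
    public static DruidDataSource keepAlivePool() {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder
        ds.setUsername("demo");                        // placeholder
        ds.setPassword("demo");                        // placeholder
        ds.setMinIdle(1);                              // keep at least one live connection
        ds.setKeepAlive(true);
        ds.setKeepAliveBetweenTimeMillis(120_000);     // must exceed timeBetweenEvictionRunsMillis
        ds.setTimeBetweenEvictionRunsMillis(60_000);
        return ds;
    }
}
```

Whichever option you choose, keep the two constraints enforced by init() (section 2.2) in mind: maxEvictableIdleTimeMillis must be greater than minEvictableIdleTimeMillis, and keepAliveBetweenTimeMillis must be greater than timeBetweenEvictionRunsMillis.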

2 Technical walkthrough

2.1 Adding Druid and its default configuration

Add the Maven dependency:

```xml
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.2.23</version>
</dependency>
```

The starter hooks into the Spring Boot framework through the auto-configuration SPI:

```
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure
```
```java
@Configuration
@ConditionalOnProperty(name = "spring.datasource.type", havingValue = "com.alibaba.druid.pool.DruidDataSource", matchIfMissing = true)
@ConditionalOnClass(DruidDataSource.class)
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
@EnableConfigurationProperties({DruidStatProperties.class, DataSourceProperties.class})
@Import({DruidSpringAopConfiguration.class, DruidStatViewServletConfiguration.class, DruidWebStatFilterConfiguration.class, DruidFilterConfiguration.class})
public class DruidDataSourceAutoConfigure {
    private static final Logger LOGGER = LoggerFactory.getLogger(DruidDataSourceAutoConfigure.class);

    @Bean
    @ConditionalOnMissingBean({DruidDataSourceWrapper.class, DruidDataSource.class, DataSource.class})
    public DruidDataSourceWrapper dataSource() {
        LOGGER.info("Init DruidDataSource");
        return new DruidDataSourceWrapper();
    }
}
```
@ConfigurationProperties("spring.datasource.druid")
public class DruidDataSourceWrapper extends DruidDataSource implements InitializingBean {xxx
}
The defaults live as constants in DruidAbstractDataSource:

```java
// default initial pool size = 0
public static final int DEFAULT_INITIAL_SIZE = 0;
// default maximum number of active connections = 8
public static final int DEFAULT_MAX_ACTIVE_SIZE = 8;
// default maximum number of idle connections = 8
public static final int DEFAULT_MAX_IDLE = 8;
// default minimum number of idle connections = 0
public static final int DEFAULT_MIN_IDLE = 0;
// default maximum wait when acquiring a connection = -1 (wait indefinitely)
public static final int DEFAULT_MAX_WAIT = -1;
// default validation query = null
public static final String DEFAULT_VALIDATION_QUERY = null;
// by default a connection is NOT validated when it is borrowed from the pool
public static final boolean DEFAULT_TEST_ON_BORROW = false;
// by default a connection is NOT validated when it is returned to the pool
public static final boolean DEFAULT_TEST_ON_RETURN = false;
// by default idle connections ARE checked (testWhileIdle)
public static final boolean DEFAULT_WHILE_IDLE = true;
// default interval of the idle-connection check: 1 min
public static final long DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS = 60 * 1000L;
// default retry interval after a connect error: 0.5 s
public static final long DEFAULT_TIME_BETWEEN_CONNECT_ERROR_MILLIS = 500;
public static final int DEFAULT_NUM_TESTS_PER_EVICTION_RUN = 3;
// default connect timeout: 10 s
public static final int DEFAULT_TIME_CONNECT_TIMEOUT_MILLIS = 10_000;
// default socket timeout: 10 s
public static final int DEFAULT_TIME_SOCKET_TIMEOUT_MILLIS = 10_000;
// default minimum idle time before a connection becomes evictable: 30 min
public static final long DEFAULT_MIN_EVICTABLE_IDLE_TIME_MILLIS = 1000L * 60L * 30L;
// default maximum idle time before a connection is evicted unconditionally: 7 h
public static final long DEFAULT_MAX_EVICTABLE_IDLE_TIME_MILLIS = 1000L * 60L * 60L * 7;
// default physical connection lifetime limit: -1 (disabled)
public static final long DEFAULT_PHY_TIMEOUT_MILLIS = -1;
// auto-commit enabled by default
protected volatile boolean defaultAutoCommit = true;
```
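
To double-check these constants, here is a small sketch of my own (not from the article): it constructs a DruidDataSource without initializing it and prints the effective defaults through the public getters; the expected values are noted in the comments.

```java
// Hedged sketch: print the defaults that a bare DruidDataSource starts with.
import com.alibaba.druid.pool.DruidDataSource;

public class DruidDefaultsProbe {
    public static void main(String[] args) {
        DruidDataSource ds = new DruidDataSource();
        System.out.println("initialSize = " + ds.getInitialSize());                 // 0
        System.out.println("maxActive = " + ds.getMaxActive());                     // 8
        System.out.println("minIdle = " + ds.getMinIdle());                         // 0
        System.out.println("testWhileIdle = " + ds.isTestWhileIdle());              // true
        System.out.println("timeBetweenEvictionRunsMillis = "
                + ds.getTimeBetweenEvictionRunsMillis());                           // 60000 (1 min)
        System.out.println("minEvictableIdleTimeMillis = "
                + ds.getMinEvictableIdleTimeMillis());                              // 1800000 (30 min)
        System.out.println("maxEvictableIdleTimeMillis = "
                + ds.getMaxEvictableIdleTimeMillis());                              // 25200000 (7 h)
        System.out.println("keepAlive = " + ds.isKeepAlive());                      // false
    }
}
```

Note that with these defaults a connection only becomes evictable after 30 minutes of idleness, far beyond the 10-minute server-side cutoff described in section 1.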

2.2 Initialization at project startup

Creating the wrapper bean runs the DruidDataSource constructor chain first:

```java
@Bean
@ConditionalOnMissingBean({DruidDataSourceWrapper.class, DruidDataSource.class, DataSource.class})
public DruidDataSourceWrapper dataSource() {
    LOGGER.info("Init DruidDataSource");
    return new DruidDataSourceWrapper();
}

public DruidDataSource() {
    this(false); // non-fair lock by default
}

public DruidDataSource(boolean fairLock) {
    super(fairLock);
    // pick up configuration passed in via system properties
    configFromPropeties(System.getProperties());
}

// initialize the (non-fair) lock and its conditions
public DruidAbstractDataSource(boolean lockFair) {
    lock = new ReentrantLock(lockFair);
    notEmpty = lock.newCondition();
    empty = lock.newCondition();
}
```
@ConfigurationProperties("spring.datasource.druid")
public class DruidDataSourceWrapper extends DruidDataSource implements InitializingBean {xxx@Overridepublic void afterPropertiesSet() throws Exception {xxxinit();//进行初始化,这时候会调用com.alibaba.druid.pool.DruidDataSource#init}xxx
}
```java
public void init() throws SQLException {
    if (inited) {
        return;
    }
    // bug fixed for dead lock, for issue #2980
    DruidDriver.getInstance();
    final ReentrantLock lock = this.lock;
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        throw new SQLException("interrupt", e);
    }
    boolean init = false;
    try {
        if (inited) {
            return;
        }
        initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());
        this.id = DruidDriver.createDataSourceId();
        if (this.id > 1) {
            long delta = (this.id - 1) * 100000;
            connectionIdSeedUpdater.addAndGet(this, delta);
            statementIdSeedUpdater.addAndGet(this, delta);
            resultSetIdSeedUpdater.addAndGet(this, delta);
            transactionIdSeedUpdater.addAndGet(this, delta);
        }
        if (this.jdbcUrl != null) {
            this.jdbcUrl = this.jdbcUrl.trim();
            initFromWrapDriverUrl();
        }
        initTimeoutsFromUrlOrProperties();
        for (Filter filter : filters) {
            filter.init(this);
        }
        if (this.dbTypeName == null || this.dbTypeName.length() == 0) {
            this.dbTypeName = JdbcUtils.getDbType(jdbcUrl, null);
        }
        DbType dbType = DbType.of(this.dbTypeName);
        if (JdbcUtils.isMysqlDbType(dbType)) {
            boolean cacheServerConfigurationSet = false;
            if (this.connectProperties.containsKey("cacheServerConfiguration")) {
                cacheServerConfigurationSet = true;
            } else if (this.jdbcUrl.indexOf("cacheServerConfiguration") != -1) {
                cacheServerConfigurationSet = true;
            }
            if (cacheServerConfigurationSet) {
                this.connectProperties.put("cacheServerConfiguration", "true");
            }
        }
        if (maxActive <= 0) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }
        if (maxActive < minIdle) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }
        if (getInitialSize() > maxActive) {
            throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive " + maxActive);
        }
        if (timeBetweenLogStatsMillis > 0 && useGlobalDataSourceStat) {
            throw new IllegalArgumentException("timeBetweenLogStatsMillis not support useGlobalDataSourceStat=true");
        }
        if (maxEvictableIdleTimeMillis < minEvictableIdleTimeMillis) {
            throw new SQLException("maxEvictableIdleTimeMillis must be grater than minEvictableIdleTimeMillis");
        }
        if (keepAlive && keepAliveBetweenTimeMillis <= timeBetweenEvictionRunsMillis) {
            throw new SQLException("keepAliveBetweenTimeMillis must be greater than timeBetweenEvictionRunsMillis");
        }
        if (this.driverClass != null) {
            this.driverClass = driverClass.trim();
        }
        initFromSPIServiceLoader();
        resolveDriver();
        initCheck();
        this.netTimeoutExecutor = new SynchronousExecutor();
        initExceptionSorter();
        initValidConnectionChecker();
        validationQueryCheck();
        if (isUseGlobalDataSourceStat()) {
            dataSourceStat = JdbcDataSourceStat.getGlobal();
            if (dataSourceStat == null) {
                dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbTypeName);
                JdbcDataSourceStat.setGlobal(dataSourceStat);
            }
            if (dataSourceStat.getDbType() == null) {
                dataSourceStat.setDbType(this.dbTypeName);
            }
        } else {
            dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbTypeName, this.connectProperties);
        }
        dataSourceStat.setResetStatEnable(this.resetStatEnable);
        connections = new DruidConnectionHolder[maxActive];
        evictConnections = new DruidConnectionHolder[maxActive];
        keepAliveConnections = new DruidConnectionHolder[maxActive];
        nullConnections = new DruidConnectionHolder[maxActive];
        SQLException connectError = null;
        if (createScheduler != null && asyncInit) {
            for (int i = 0; i < initialSize; ++i) {
                submitCreateTask(true);
            }
        } else if (!asyncInit) {
            // init connections
            while (poolingCount < initialSize) {
                try {
                    PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                    DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                    connections[poolingCount++] = holder;
                } catch (SQLException ex) {
                    LOG.error("init datasource error, url: " + this.getUrl(), ex);
                    if (initExceptionThrow) {
                        connectError = ex;
                        break;
                    } else {
                        Thread.sleep(3000);
                    }
                }
            }
            if (poolingCount > 0) {
                poolingPeak = poolingCount;
                poolingPeakTime = System.currentTimeMillis();
            }
        }
        createAndLogThread();
        createAndStartCreatorThread();   // thread that creates new physical connections on demand
        createAndStartDestroyThread();   // starts the DestroyConnectionThread discussed in section 2.3
        // await threads initedLatch to support dataSource restart.
        if (createConnectionThread != null) {
            createConnectionThread.getInitedLatch().await();
        }
        if (destroyConnectionThread != null) {
            destroyConnectionThread.getInitedLatch().await();
        }
        init = true;
        initedTime = new Date();
        registerMbean();
        if (connectError != null && poolingCount == 0) {
            throw connectError;
        }
        if (keepAlive) {
            if (createScheduler != null) {
                // async fill to minIdle
                for (int i = 0; i < minIdle - initialSize; ++i) {
                    submitCreateTask(true);
                }
            } else {
                empty.signal();
            }
        }
    } catch (SQLException e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } catch (InterruptedException e) {
        throw new SQLException(e.getMessage(), e);
    } catch (RuntimeException e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } catch (Error e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } finally {
        inited = true;
        lock.unlock();
        if (init && LOG.isInfoEnabled()) {
            String msg = "{dataSource-" + this.getID();
            if (this.name != null && !this.name.isEmpty()) {
                msg += ",";
                msg += this.name;
            }
            msg += "} inited";
            LOG.info(msg);
        }
    }
}
```
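
To see what the defaults mean at startup, here is a hedged sketch of my own (placeholder URL/credentials, assuming a reachable MySQL instance): with the default initialSize = 0 and asyncInit = false, init() only starts the creator and destroyer threads and creates no physical connections; the first getConnection() triggers lazy creation.

```java
// Hedged sketch: lazy connection creation with the default initialSize = 0.
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.pool.DruidPooledConnection;

public class DruidInitSketch {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder
        ds.setUsername("demo");                        // placeholder
        ds.setPassword("demo");                        // placeholder
        ds.init();                                     // starts CreateConnectionThread / DestroyConnectionThread
        System.out.println("pooling after init: " + ds.getPoolingCount());   // 0 with the defaults
        try (DruidPooledConnection conn = ds.getConnection()) {              // first borrow creates a physical connection
            System.out.println("pooling while borrowed: " + ds.getPoolingCount());
        }
        System.out.println("pooling after close: " + ds.getPoolingCount());  // 1: returned to the pool
        ds.close();
    }
}
```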

2.3 How idle connections are reclaimed

Every timeBetweenEvictionRunsMillis the DestroyConnectionThread runs DestroyTask, which calls shrink():

```java
public class DestroyConnectionThread extends Thread {
    // xxx
    public void run() {
        initedLatch.countDown();
        for (; !Thread.currentThread().isInterrupted(); ) {
            // evict starting from the head of the array
            try {
                // if the pool is closed or closing, break and stop
                if (closed || closing) {
                    break;
                }
                // run once every timeBetweenEvictionRunsMillis
                if (timeBetweenEvictionRunsMillis > 0) {
                    Thread.sleep(timeBetweenEvictionRunsMillis);
                } else {
                    // otherwise run once per second
                    Thread.sleep(1000);
                }
                if (Thread.interrupted()) {
                    break;
                }
                destroyTask.run();
            } catch (InterruptedException e) {
                break;
            }
        }
    }
}

public class DestroyTask implements Runnable {
    public DestroyTask() {
    }

    @Override
    public void run() {
        // reclaim idle connections
        shrink(true, keepAlive);
        if (isRemoveAbandoned()) {
            removeAbandoned();
        }
    }
}

// checkTime is passed as true; keepAlive defaults to false
public void shrink(boolean checkTime, boolean keepAlive) {
    if (poolingCount == 0) {
        return;
    }
    final Lock lock = this.lock;
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        return;
    }
    boolean needFill = false;
    int evictCount = 0;
    int keepAliveCount = 0;
    int fatalErrorIncrement = fatalErrorCount - fatalErrorCountLastShrink;
    fatalErrorCountLastShrink = fatalErrorCount;
    try {
        if (!inited) {
            return;
        }
        final int checkCount = poolingCount - minIdle;
        final long currentTimeMillis = System.currentTimeMillis();
        // remaining is the position of the next connection should be retained in the pool.
        int remaining = 0;
        int i = 0;
        for (; i < poolingCount; ++i) {
            DruidConnectionHolder connection = connections[i];
            if ((onFatalError || fatalErrorIncrement > 0) && (lastFatalErrorTimeMillis > connection.connectTimeMillis)) {
                keepAliveConnections[keepAliveCount++] = connection;
                continue;
            }
            if (checkTime) {
                if (phyTimeoutMillis > 0) {
                    long phyConnectTimeMillis = currentTimeMillis - connection.connectTimeMillis;
                    if (phyConnectTimeMillis > phyTimeoutMillis) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }
                long idleMillis = currentTimeMillis - connection.lastActiveTimeMillis;
                if (idleMillis < minEvictableIdleTimeMillis
                        && idleMillis < keepAliveBetweenTimeMillis) {
                    break;
                }
                // idle time >= minimum evictable idle time
                if (idleMillis >= minEvictableIdleTimeMillis) {
                    if (i < checkCount) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    // idle time > maximum evictable idle time
                    } else if (idleMillis > maxEvictableIdleTimeMillis) {
                        // move to the eviction array and bump the eviction count
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }
                // if keep-alive is enabled and the connection has been idle for at least keepAliveBetweenTimeMillis
                if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis
                        && currentTimeMillis - connection.lastKeepTimeMillis >= keepAliveBetweenTimeMillis) {
                    keepAliveConnections[keepAliveCount++] = connection;
                } else {
                    if (i != remaining) {
                        // move the connection to the new position for retaining it in the pool.
                        connections[remaining] = connection;
                    }
                    remaining++;
                }
            } else {
                if (i < checkCount) {
                    evictConnections[evictCount++] = connection;
                } else {
                    break;
                }
            }
        }
        // shrink connections by HotSpot intrinsic function _arraycopy for performance optimization.
        int removeCount = evictCount + keepAliveCount;
        if (removeCount > 0) {
            int breakedCount = poolingCount - i;
            if (breakedCount > 0) {
                // retains the connections that start at the break position.
                System.arraycopy(connections, i, connections, remaining, breakedCount);
                remaining += breakedCount;
            }
            // clean the old references of the connections that have been moved forward to the new positions.
            System.arraycopy(nullConnections, 0, connections, remaining, removeCount);
            poolingCount -= removeCount;
        }
        keepAliveCheckCount += keepAliveCount;
        if (keepAlive && poolingCount + activeCount < minIdle) {
            needFill = true;
        }
    } finally {
        lock.unlock();
    }
    if (evictCount > 0) {
        // walk the eviction array and close every connection in it
        for (int i = 0; i < evictCount; ++i) {
            DruidConnectionHolder item = evictConnections[i];
            Connection connection = item.getConnection();
            JdbcUtils.close(connection);
            destroyCountUpdater.incrementAndGet(this);
        }
        // use HotSpot intrinsic function _arraycopy for performance optimization.
        System.arraycopy(nullConnections, 0, evictConnections, 0, evictConnections.length);
    }
    if (keepAliveCount > 0) {
        // keep order
        for (int i = keepAliveCount - 1; i >= 0; --i) {
            DruidConnectionHolder holder = keepAliveConnections[i];
            Connection connection = holder.getConnection();
            holder.incrementKeepAliveCheckCount();
            boolean validate = false;
            try {
                this.validateConnection(connection);
                validate = true;
            } catch (Throwable error) {
                keepAliveCheckErrorLast = error;
                keepAliveCheckErrorCountUpdater.incrementAndGet(this);
                if (LOG.isDebugEnabled()) {
                    LOG.debug("keepAliveErr", error);
                }
            }
            boolean discard = !validate;
            if (validate) {
                holder.lastKeepTimeMillis = System.currentTimeMillis();
                boolean putOk = put(holder, 0L, true);
                if (!putOk) {
                    discard = true;
                }
            }
            if (discard) {
                try {
                    connection.close();
                } catch (Exception error) {
                    discardErrorLast = error;
                    discardErrorCountUpdater.incrementAndGet(DruidDataSource.this);
                    if (LOG.isErrorEnabled()) {
                        LOG.error("discard connection error", error);
                    }
                }
                if (holder.socket != null) {
                    try {
                        holder.socket.close();
                    } catch (Exception error) {
                        discardErrorLast = error;
                        discardErrorCountUpdater.incrementAndGet(DruidDataSource.this);
                        if (LOG.isErrorEnabled()) {
                            LOG.error("discard connection error", error);
                        }
                    }
                }
                lock.lock();
                try {
                    holder.discard = true;
                    discardCount++;
                    if (activeCount + poolingCount + createTaskCount < minIdle) {
                        needFill = true;
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
        this.getDataSourceStat().addKeepAliveCheckCount(keepAliveCount);
        // use HotSpot intrinsic function _arraycopy for performance optimization.
        System.arraycopy(nullConnections, 0, keepAliveConnections, 0, keepAliveConnections.length);
    }
    if (needFill) {
        lock.lock();
        try {
            int fillCount = minIdle - (activeCount + poolingCount + createTaskCount);
            emptySignal(fillCount);
        } finally {
            lock.unlock();
        }
    } else if (fatalErrorIncrement > 0) {
        lock.lock();
        try {
            emptySignal();
        } finally {
            lock.unlock();
        }
    }
}
```
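
The branching in shrink() is dense, so here is a simplified, self-contained restatement of the per-connection decision (my own sketch, not Druid code; it ignores the fatal-error branch, the phyTimeout check, and the early break that relies on connections being ordered by idle time).

```java
// Hedged sketch: the essence of shrink()'s per-connection decision.
public class ShrinkDecisionSketch {
    enum Decision { RETAIN, EVICT, KEEP_ALIVE_CHECK }

    static Decision decide(long idleMillis, int indexInPool, int checkCount,
                           long minEvictableIdleTimeMillis, long maxEvictableIdleTimeMillis,
                           boolean keepAlive, long keepAliveBetweenTimeMillis) {
        if (idleMillis >= minEvictableIdleTimeMillis) {
            if (indexInPool < checkCount) {
                return Decision.EVICT;        // above the minIdle quota: evict once past the minimum idle time
            } else if (idleMillis > maxEvictableIdleTimeMillis) {
                return Decision.EVICT;        // even protected connections go once past the maximum idle time
            }
        }
        if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
            return Decision.KEEP_ALIVE_CHECK; // validate, then put back if still usable
        }
        return Decision.RETAIN;
    }

    public static void main(String[] args) {
        // Defaults (minEvictable = 30 min, maxEvictable = 7 h, keepAlive = false):
        // a connection idle for 40 minutes that falls inside the minIdle quota is RETAINed,
        // even though the database server may already have closed it.
        System.out.println(decide(40 * 60_000L, 0, 0, 1_800_000L, 25_200_000L, false, 120_000L));
        // With option (1) from section 1 (maxEvictable = 5 min, minEvictable lowered too),
        // the same connection is EVICTed.
        System.out.println(decide(40 * 60_000L, 0, 0, 240_000L, 300_000L, false, 120_000L));
    }
}
```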

2.4 After all connections are reclaimed, a new request creates a fresh connection

Once the destroy thread has evicted every pooled connection, the next business request goes through getConnection(), which simply delegates to getConnection(maxWait):

```java
@Override
public DruidPooledConnection getConnection() throws SQLException {
    return getConnection(maxWait);
}
```
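
From the caller's point of view nothing changes: once the pool has shrunk to zero, the next getConnection() simply waits (up to maxWait) for the creator thread to build a fresh physical connection. A minimal hedged usage sketch of my own (placeholder URL/credentials; setMaxWait is only there so a broken database fails fast instead of blocking forever with the default maxWait = -1):

```java
// Hedged sketch: borrowing after the pool has shrunk to zero idle connections.
import com.alibaba.druid.pool.DruidDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class BorrowAfterShrinkSketch {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder
        ds.setUsername("demo");                        // placeholder
        ds.setPassword("demo");                        // placeholder
        ds.setMaxWait(3_000);                          // fail fast instead of waiting forever

        try (Connection conn = ds.getConnection();     // a new physical connection is created on demand
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("got " + rs.getInt(1) + " from a freshly created connection");
        }
        ds.close();
    }
}
```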

2.5 Setting keepAlive=true

```yaml
keepAlive: true
keepAliveBetweenTimeMillis: 120000    # keep-alive check interval: 2 min
timeBetweenEvictionRunsMillis: 5000   # interval of the idle-connection eviction run: 5 s
minEvictableIdleTimeMillis: 120000    # minimum idle time before a connection may be evicted: 2 min
maxEvictableIdleTimeMillis: 420000    # maximum idle time before a connection is evicted unconditionally: 7 min
```
With keepAlive enabled, shrink() refills the pool whenever it drops below minIdle:

```java
if (keepAlive && poolingCount + activeCount < minIdle) {
    needFill = true; // rebuild physical connections to keep minIdle connections alive
}
```
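
One easy mistake with this configuration is violating the checks that init() performs (section 2.2): keepAliveBetweenTimeMillis must be strictly greater than timeBetweenEvictionRunsMillis, and maxEvictableIdleTimeMillis must be greater than minEvictableIdleTimeMillis. The hedged sketch below (my own, with a placeholder URL) shows init() rejecting an interval pair that is merely equal.

```java
// Hedged sketch: init() rejects keepAliveBetweenTimeMillis <= timeBetweenEvictionRunsMillis.
import com.alibaba.druid.pool.DruidDataSource;
import java.sql.SQLException;

public class KeepAliveConstraintSketch {
    public static void main(String[] args) {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder
        ds.setKeepAlive(true);
        ds.setTimeBetweenEvictionRunsMillis(120_000);
        ds.setKeepAliveBetweenTimeMillis(120_000);     // equal, not greater -> rejected
        try {
            ds.init();
        } catch (SQLException e) {
            // expected: "keepAliveBetweenTimeMillis must be greater than timeBetweenEvictionRunsMillis"
            System.out.println("init rejected: " + e.getMessage());
        }
    }
}
```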

