QPS Benchmarking of Master-Slave Redis

2024-06-15 07:48
Tags: redis, benchmarking, master-slave, QPS

This article walks through QPS benchmarking of a master-slave Redis setup, in the hope that it provides a useful reference for developers who need it.

1. Benchmarking the Redis read/write separation architecture: single-instance write QPS + single-instance read QPS

 
The benchmark tool ships with Redis itself, under redis-3.2.8/src:

./redis-benchmark -h 192.168.31.187

-c <clients>   Number of parallel connections (default 50)       // concurrency
-n <requests>  Total number of requests (default 100000)         // total request count
-d <size>      Data size of SET/GET value in bytes (default 2)   // payload size

Size these parameters to your own peak traffic. If the instantaneous peak can reach 100,000+ users, that would mean something like -c 100000, -n 10000000, -d 50; a full invocation along those lines is sketched below.
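As a hedged sketch only (the host is the same IP used above; pushing 100,000 parallel connections also assumes the OS open-file limit and the Redis maxclients setting have been raised, otherwise most of the connections will simply be rejected), the peak-load run would look roughly like this:

  cd redis-3.2.8/src
  # peak-style load: 100k parallel clients, 10 million total requests, 50-byte values
  ./redis-benchmark -h 192.168.31.187 -p 6379 -c 100000 -n 10000000 -d 50

In practice it is common to start from the defaults and scale up; the figures below were produced with the default 50 clients and 100000 requests.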

Run it and all the benchmark results come straight out. The numbers below are from a low-spec virtual machine with 1 core and 1 GB of RAM:
====== PING_INLINE ======
100000 requests completed in 1.28 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.78% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
78308.54 requests per second

====== PING_BULK ======
100000 requests completed in 1.30 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.87% <= 1 milliseconds
100.00% <= 1 milliseconds
76804.91 requests per second

====== SET ======
100000 requests completed in 2.50 seconds
50 parallel clients
3 bytes payload
keep alive: 1

5.95% <= 1 milliseconds
99.63% <= 2 milliseconds
99.93% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40032.03 requests per second

====== GET ======
100000 requests completed in 1.30 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.73% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
76628.35 requests per second

====== INCR ======
100000 requests completed in 1.90 seconds
50 parallel clients
3 bytes payload
keep alive: 1

80.92% <= 1 milliseconds
99.81% <= 2 milliseconds
99.95% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 6 milliseconds
52548.61 requests per second

====== LPUSH ======
100000 requests completed in 2.58 seconds
50 parallel clients
3 bytes payload
keep alive: 1

3.76% <= 1 milliseconds
99.61% <= 2 milliseconds
99.93% <= 3 milliseconds
100.00% <= 3 milliseconds
38684.72 requests per second

====== RPUSH ======
100000 requests completed in 2.47 seconds
50 parallel clients
3 bytes payload
keep alive: 1

6.87% <= 1 milliseconds
99.69% <= 2 milliseconds
99.87% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40469.45 requests per second

====== LPOP ======
100000 requests completed in 2.26 seconds
50 parallel clients
3 bytes payload
keep alive: 1

28.39% <= 1 milliseconds
99.83% <= 2 milliseconds
100.00% <= 2 milliseconds
44306.60 requests per second

====== RPOP ======
100000 requests completed in 2.18 seconds
50 parallel clients
3 bytes payload
keep alive: 1

36.08% <= 1 milliseconds
99.75% <= 2 milliseconds
100.00% <= 2 milliseconds
45871.56 requests per second

====== SADD ======
100000 requests completed in 1.23 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.94% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
81168.83 requests per second

====== SPOP ======
100000 requests completed in 1.28 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.80% <= 1 milliseconds
99.96% <= 2 milliseconds
99.96% <= 3 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
78369.91 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
100000 requests completed in 2.47 seconds
50 parallel clients
3 bytes payload
keep alive: 1

15.29% <= 1 milliseconds
99.64% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
40420.37 requests per second

====== LRANGE_100 (first 100 elements) ======
100000 requests completed in 3.69 seconds
50 parallel clients
3 bytes payload
keep alive: 1

30.86% <= 1 milliseconds
96.99% <= 2 milliseconds
99.94% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
27085.59 requests per second

====== LRANGE_300 (first 300 elements) ======
100000 requests completed in 10.22 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.03% <= 1 milliseconds
5.90% <= 2 milliseconds
90.68% <= 3 milliseconds
95.46% <= 4 milliseconds
97.67% <= 5 milliseconds
99.12% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 7 milliseconds
9784.74 requests per second

====== LRANGE_500 (first 450 elements) ======
100000 requests completed in 14.71 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.00% <= 1 milliseconds
0.07% <= 2 milliseconds
1.59% <= 3 milliseconds
89.26% <= 4 milliseconds
97.90% <= 5 milliseconds
99.24% <= 6 milliseconds
99.73% <= 7 milliseconds
99.89% <= 8 milliseconds
99.96% <= 9 milliseconds
99.99% <= 10 milliseconds
100.00% <= 10 milliseconds
6799.48 requests per second

====== LRANGE_600 (first 600 elements) ======
100000 requests completed in 18.56 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.00% <= 2 milliseconds
0.23% <= 3 milliseconds
1.75% <= 4 milliseconds
91.17% <= 5 milliseconds
98.16% <= 6 milliseconds
99.04% <= 7 milliseconds
99.83% <= 8 milliseconds
99.95% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
5387.35 requests per second

====== MSET (10 keys) ======
100000 requests completed in 4.02 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.01% <= 1 milliseconds
53.22% <= 2 milliseconds
99.12% <= 3 milliseconds
99.55% <= 4 milliseconds
99.70% <= 5 milliseconds
99.90% <= 6 milliseconds
99.95% <= 7 milliseconds
100.00% <= 8 milliseconds
24869.44 requests per second
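The full run above mixes every test together. Since the point of this section is single-instance write QPS versus single-instance read QPS, it can be clearer to isolate them with redis-benchmark's -t option, which restricts the run to a comma-separated list of tests. A minimal sketch against the same master:

  # write QPS of the single instance (SET only)
  ./redis-benchmark -h 192.168.31.187 -t set -c 50 -n 100000 -d 50
  # read QPS of the single instance (GET only)
  ./redis-benchmark -h 192.168.31.187 -t get -c 50 -n 100000 -d 50

On the 1-core/1-GB VM this should land in the same neighbourhood as the SET and GET sections above: roughly 40,000 writes per second and 77,000 reads per second.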

 
This is the first part of the read/write separation topic.

Most of it comes down to the performance and configuration of your servers: the stronger the machine and the higher the spec, the higher the numbers.

A single machine can reach well over 100,000 QPS, sometimes 200,000.

In many companies you are handed fairly low-spec servers, and the operations you run on them add their own complexity.

Big companies usually provide a unified cloud platform; JD, Tencent, the BAT companies, Xiaomi, Meituan and others all do.

Often that means low-spec virtual machines.

Some teams build a dedicated cluster for a particular project, say 4 cores and 4 GB of RAM, for fairly complex operations on larger data.

On that kind of hardware, tens of thousands of QPS per machine is about what you can expect.

As for the high concurrency Redis provides: at least 10,000+ QPS on a single machine is no problem.

Realistically it ranges from tens of thousands up to 100,000 or 200,000.

QPS differs from company to company and server to server, so test it yourself, and remember it will still differ from production.

In production there are large numbers of network calls, and the network itself has overhead, so your actual Redis throughput will not necessarily be that high.

There are two QPS killers. One is complex operations, such as LRANGE, of which there are quite a few. The other is large values: the benchmark default is only a couple of bytes, whereas I have used Redis for large-scale caching before.

For a product detail page cache, for example, you may need to concatenate a lot of data into a single JSON string, and that can easily be several KB rather than a few bytes. A hedged way to see the effect of value size is sketched below.
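The sketch assumes the same master IP; the 5000-byte size is only an illustrative stand-in for a multi-kilobyte product-detail JSON blob, not a value taken from the article:

  # baseline: small values
  ./redis-benchmark -h 192.168.31.187 -t set,get -n 100000 -d 50
  # large values, roughly the size of a serialized product-detail JSON string
  ./redis-benchmark -h 192.168.31.187 -t set,get -n 100000 -d 5000

How far throughput drops depends on the machine and the network, but on most setups the large-value run is visibly slower, which is exactly the point being made here.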

2. Horizontally scaling Redis read nodes to increase read throughput

As covered in the previous lesson, set up additional Redis slave nodes on other servers. A single slave node can serve roughly 50,000 read QPS; with two slave nodes and all read requests spread across those two machines, the cluster as a whole can carry 100,000+ read QPS. A rough operational sketch follows.
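This is only a sketch: the slave IPs 192.168.31.191 and 192.168.31.192 are hypothetical placeholders, and in practice the replication setting usually lives in each replica's redis.conf (slaveof <master-ip> 6379 in Redis 3.2) rather than being issued at runtime:

  # point each new read node at the master (Redis 3.2 still uses SLAVEOF)
  redis-cli -h 192.168.31.191 slaveof 192.168.31.187 6379
  redis-cli -h 192.168.31.192 slaveof 192.168.31.187 6379

  # measure read QPS against each slave separately
  ./redis-benchmark -h 192.168.31.191 -t get -c 50 -n 100000
  ./redis-benchmark -h 192.168.31.192 -t get -c 50 -n 100000

If each slave sustains on the order of 50,000 read QPS, routing all application reads across the two of them gives the cluster the 100,000+ read QPS described above.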

 

 

That wraps up this article on QPS benchmarking of master-slave Redis; we hope it is helpful to fellow developers.



http://www.chinasem.cn/article/1062851
