Bcache-Accelerated 4K Random Write Performance Testing of Single-Node Ceph on LoongArch

2023-10-17 23:44

This article presents 4K random write performance tests of a single-node Ceph deployment on LoongArch, comparing plain HDD OSDs, several bcache-accelerated configurations, and pure SSD OSDs.


Two HDDs as OSDs
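
The post does not show how the cluster or the RBD test volume was prepared. For context, below is a minimal sketch of how two HDD OSDs and the test image are typically set up; the device names (/dev/sdb, /dev/sdc) and the pool/image names (rbdpool, testimg) are assumptions, not taken from the original test.

```bash
# Minimal sketch (assumed device/pool/image names), not the author's exact commands.
# One BlueStore OSD on each HDD.
ceph-volume lvm create --bluestore --data /dev/sdb
ceph-volume lvm create --bluestore --data /dev/sdc

# RBD pool plus a 100 GiB test image, mapped so fio can target /dev/rbd0.
ceph osd pool create rbdpool 128 128
rbd pool init rbdpool
rbd create rbdpool/testimg --size 102400
rbd map rbdpool/testimg        # appears as /dev/rbd0
```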

```
[root@ceph01 ~]# fio -direct=1 -iodepth=128 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=600 -group_reporting -name=mytest -filename=/dev/rbd0
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.22
Starting 1 thread
Jobs: 1 (f=1): [w(1)][100.0%][w=1605KiB/s][w=401 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=1): err= 0: pid=83763: Mon Oct 16 03:44:45 2023
  write: IOPS=404, BW=1620KiB/s (1659kB/s)(950MiB/600262msec); 0 zone resets
    slat (usec): min=3, max=116, avg= 5.76, stdev= 4.41
    clat (msec): min=36, max=947, avg=316.05, stdev=71.26
     lat (msec): min=36, max=947, avg=316.06, stdev=71.26
    clat percentiles (msec):
     |  1.00th=[  180],  5.00th=[  215], 10.00th=[  239], 20.00th=[  264],
     | 30.00th=[  279], 40.00th=[  296], 50.00th=[  309], 60.00th=[  326],
     | 70.00th=[  342], 80.00th=[  363], 90.00th=[  397], 95.00th=[  435],
     | 99.00th=[  542], 99.50th=[  609], 99.90th=[  793], 99.95th=[  810],
     | 99.99th=[  944]
   bw (  KiB/s): min=  232, max= 3072, per=100.00%, avg=1622.36, stdev=394.76, samples=1198
   iops        : min=   58, max=  768, avg=405.58, stdev=98.69, samples=1198
  lat (msec)   : 50=0.02%, 100=0.01%, 250=14.65%, 500=83.73%, 750=1.44%
  lat (msec)   : 1000=0.15%
  cpu          : usr=0.11%, sys=0.30%, ctx=16672, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,243095,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=1620KiB/s (1659kB/s), 1620KiB/s-1620KiB/s (1659kB/s-1659kB/s), io=950MiB (996MB), run=600262-600262msec

Disk stats (read/write):
  rbd0: ios=0/242996, merge=0/0, ticks=0/76720597, in_queue=76980152, util=100.00%
```

[Bcache] One SSD caching two HDDs (OSDs)

[Figure: Ceph with bcache, 4K random write test, one SSD caching two HDDs]
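
The figure above shows the topology only; the bcache setup commands are not included in the post. Below is a minimal sketch of how one SSD can cache two HDDs before the OSDs are deployed on the bcache devices. Device names (/dev/nvme0n1, /dev/sdb, /dev/sdc) are assumptions.

```bash
# Minimal sketch (assumed device names), not the author's exact commands.
# One SSD becomes the cache device, both HDDs become backing devices.
make-bcache -C /dev/nvme0n1
make-bcache -B /dev/sdb
make-bcache -B /dev/sdc

# Attach both backing devices to the SSD's cache set and switch to writeback mode.
CSET=$(bcache-super-show /dev/nvme0n1 | awk '/cset.uuid/ {print $2}')
echo "$CSET" > /sys/block/bcache0/bcache/attach
echo "$CSET" > /sys/block/bcache1/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo writeback > /sys/block/bcache1/bcache/cache_mode

# The OSDs are then created on the cached devices rather than the raw HDDs.
ceph-volume lvm create --bluestore --data /dev/bcache0
ceph-volume lvm create --bluestore --data /dev/bcache1
```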

```
[root@ceph01 ceph]# fio -direct=1 -iodepth=128 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=600 -group_reporting -name=mytest -filename=/dev/rbd0
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.22
Starting 1 thread
Jobs: 1 (f=1): [w(1)][10.7%][w=12.1MiB/s][w=3097 IOPS][eta 08m:56s]
mytest: (groupid=0, jobs=1): err= 0: pid=37245: Thu Oct 12 22:08:43 2023
  write: IOPS=4065, BW=15.9MiB/s (16.7MB/s)(1024MiB/64475msec); 0 zone resets
    slat (usec): min=3, max=173, avg= 5.80, stdev= 3.99
    clat (msec): min=9, max=336, avg=31.47, stdev=21.69
     lat (msec): min=9, max=336, avg=31.48, stdev=21.69
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   19], 10.00th=[   21], 20.00th=[   24],
     | 30.00th=[   26], 40.00th=[   27], 50.00th=[   28], 60.00th=[   29],
     | 70.00th=[   31], 80.00th=[   33], 90.00th=[   37], 95.00th=[   48],
     | 99.00th=[  146], 99.50th=[  180], 99.90th=[  268], 99.95th=[  284],
     | 99.99th=[  334]
   bw (  KiB/s): min= 4216, max=20288, per=100.00%, avg=16284.49, stdev=3300.36, samples=128
   iops        : min= 1054, max= 5072, avg=4071.00, stdev=825.06, samples=128
  lat (msec)   : 10=0.01%, 20=8.54%, 50=87.00%, 100=2.20%, 250=2.13%
  lat (msec)   : 500=0.14%
  cpu          : usr=0.95%, sys=3.21%, ctx=28915, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=15.9MiB/s (16.7MB/s), 15.9MiB/s-15.9MiB/s (16.7MB/s-16.7MB/s), io=1024MiB (1074MB), run=64475-64475msec

Disk stats (read/write):
  rbd0: ios=0/261860, merge=0/0, ticks=0/8185334, in_queue=8198080, util=100.00%
```

[Bcache] Two SSDs caching two HDDs (OSDs)

[Figure: bcache topology, two SSDs each caching one HDD (OSD)]
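
The setup commands are again not shown; a likely variant of the previous sketch is to give each HDD its own dedicated SSD cache set. Device names are assumptions.

```bash
# Minimal sketch (assumed device names). Each SSD caches exactly one HDD.
make-bcache -C /dev/nvme0n1 -B /dev/sdb    # SSD 1 -> HDD 1
make-bcache -C /dev/nvme1n1 -B /dev/sdc    # SSD 2 -> HDD 2

# When -C and -B are given in one invocation, the backing device is attached to the new
# cache set automatically; only the cache mode still needs to be switched to writeback.
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo writeback > /sys/block/bcache1/bcache/cache_mode

ceph-volume lvm create --bluestore --data /dev/bcache0
ceph-volume lvm create --bluestore --data /dev/bcache1
```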

```
[root@ceph01 ceph]# fio -direct=1 -iodepth=128 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=600 -group_reporting -name=mytest -filename=/dev/rbd0
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.22
Starting 1 thread
Jobs: 1 (f=1): [w(1)][10.5%][w=15.8MiB/s][w=4047 IOPS][eta 08m:57s]
mytest: (groupid=0, jobs=1): err= 0: pid=363441: Sat Oct 14 11:19:49 2023
  write: IOPS=4158, BW=16.2MiB/s (17.0MB/s)(1024MiB/63037msec); 0 zone resets
    slat (usec): min=3, max=624, avg= 5.76, stdev= 4.10
    clat (msec): min=10, max=359, avg=30.77, stdev=18.74
     lat (msec): min=10, max=359, avg=30.78, stdev=18.74
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   19], 10.00th=[   21], 20.00th=[   23],
     | 30.00th=[   25], 40.00th=[   27], 50.00th=[   28], 60.00th=[   29],
     | 70.00th=[   31], 80.00th=[   33], 90.00th=[   38], 95.00th=[   51],
     | 99.00th=[  111], 99.50th=[  153], 99.90th=[  262], 99.95th=[  266],
     | 99.99th=[  359]
   bw (  KiB/s): min= 3768, max=21016, per=100.00%, avg=16674.80, stdev=3007.06, samples=125
   iops        : min=  942, max= 5254, avg=4168.57, stdev=751.74, samples=125
  lat (msec)   : 20=9.54%, 50=85.45%, 100=3.49%, 250=1.38%, 500=0.13%
  cpu          : usr=0.99%, sys=3.24%, ctx=28037, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=16.2MiB/s (17.0MB/s), 16.2MiB/s-16.2MiB/s (17.0MB/s-17.0MB/s), io=1024MiB (1074MB), run=63037-63037msec

Disk stats (read/write):
  rbd0: ios=0/261835, merge=0/0, ticks=0/7989253, in_queue=7996052, util=100.00%
```

[Bcache] Two SSDs caching two HDDs (OSDs) + two SSDs for block.db and block.wal
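
This configuration keeps the two bcache-backed data devices and additionally places each OSD's RocksDB metadata (block.db) and write-ahead log (block.wal) on separate SSDs. The exact layout is not given in the post; a minimal sketch with assumed device and partition names:

```bash
# Minimal sketch (assumed device/partition names), not the author's exact commands.
# Data stays on the bcache devices; block.db and block.wal go to partitions on extra SSDs.
ceph-volume lvm create --bluestore --data /dev/bcache0 \
    --block.db /dev/nvme2n1p1 --block.wal /dev/nvme2n1p2
ceph-volume lvm create --bluestore --data /dev/bcache1 \
    --block.db /dev/nvme3n1p1 --block.wal /dev/nvme3n1p2
```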

```
[root@ceph01 ~]# fio -direct=1 -iodepth=128 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=600 -group_reporting -name=mytest -filename=/dev/rbd0
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.22
Starting 1 thread
Jobs: 1 (f=1): [w(1)][10.5%][w=16.3MiB/s][w=4162 IOPS][eta 08m:57s]
mytest: (groupid=0, jobs=1): err= 0: pid=73003: Mon Oct 16 02:34:47 2023
  write: IOPS=4124, BW=16.1MiB/s (16.9MB/s)(1024MiB/63559msec); 0 zone resets
    slat (usec): min=3, max=109, avg= 5.69, stdev= 3.75
    clat (msec): min=10, max=294, avg=31.03, stdev=17.27
     lat (msec): min=10, max=294, avg=31.03, stdev=17.27
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   19], 10.00th=[   21], 20.00th=[   23],
     | 30.00th=[   25], 40.00th=[   27], 50.00th=[   28], 60.00th=[   30],
     | 70.00th=[   32], 80.00th=[   34], 90.00th=[   40], 95.00th=[   52],
     | 99.00th=[  110], 99.50th=[  136], 99.90th=[  226], 99.95th=[  249],
     | 99.99th=[  284]
   bw (  KiB/s): min= 6200, max=20376, per=100.00%, avg=16508.00, stdev=2659.19, samples=126
   iops        : min= 1550, max= 5094, avg=4126.88, stdev=664.77, samples=126
  lat (msec)   : 20=9.65%, 50=85.11%, 100=3.87%, 250=1.33%, 500=0.04%
  cpu          : usr=1.00%, sys=3.13%, ctx=25141, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=16.1MiB/s (16.9MB/s), 16.1MiB/s-16.1MiB/s (16.9MB/s-16.9MB/s), io=1024MiB (1074MB), run=63559-63559msec

Disk stats (read/write):
  rbd0: ios=0/261407, merge=0/0, ticks=0/8062837, in_queue=8075472, util=100.00%
```

Two SSDs as OSDs

[Figure: two SSDs used directly as OSDs]
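
For this run the SSDs are used directly as BlueStore OSDs, with no bcache layer in between; a one-line-per-OSD sketch with assumed device names:

```bash
# Minimal sketch (assumed device names): BlueStore OSDs directly on the SSDs.
ceph-volume lvm create --bluestore --data /dev/nvme0n1
ceph-volume lvm create --bluestore --data /dev/nvme1n1
```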

```
[root@ceph01 ~]# fio -direct=1 -iodepth=128 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=600 -group_reporting -name=mytest -filename=/dev/rbd0
mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.22
Starting 1 thread
Jobs: 1 (f=1): [w(1)][4.3%][w=34.7MiB/s][w=8883 IOPS][eta 09m:35s]
mytest: (groupid=0, jobs=1): err= 0: pid=125901: Fri Oct 13 11:06:55 2023
  write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(1024MiB/25751msec); 0 zone resets
    slat (nsec): min=3310, max=78980, avg=5364.31, stdev=3425.80
    clat (usec): min=2965, max=33393, avg=12565.10, stdev=3428.92
     lat (usec): min=2970, max=33400, avg=12570.90, stdev=3428.60
    clat percentiles (usec):
     |  1.00th=[ 6652],  5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9765],
     | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[12780],
     | 70.00th=[13829], 80.00th=[15139], 90.00th=[17171], 95.00th=[19006],
     | 99.00th=[22676], 99.50th=[24511], 99.90th=[27657], 99.95th=[28705],
     | 99.99th=[31589]
   bw (  KiB/s): min=33628, max=44247, per=99.99%, avg=40717.55, stdev=2014.23, samples=51
   iops        : min= 8407, max=11061, avg=10179.18, stdev=503.57, samples=51
  lat (msec)   : 4=0.01%, 10=23.01%, 20=73.64%, 50=3.35%
  cpu          : usr=2.23%, sys=7.41%, ctx=20485, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=1024MiB (1074MB), run=25751-25751msec

Disk stats (read/write):
  rbd0: ios=0/260926, merge=0/0, ticks=0/3220267, in_queue=3224032, util=99.89%
```

Conclusion

Note: the test environment is a single node with a replica count of 2 (two-way replication).
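
The post does not show how the single-node, two-replica pool was configured. A common way to allow both replicas on one host is sketched below; the pool name (rbdpool) is an assumption, and osd_crush_chooseleaf_type has to be in place before the cluster is bootstrapped so that the default CRUSH rule distributes replicas across OSDs rather than hosts.

```bash
# Minimal sketch (assumed pool name). Place replicas on OSDs instead of distinct hosts,
# then run the pool with two replicas.
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
osd_crush_chooseleaf_type = 0
EOF

ceph osd pool set rbdpool size 2
ceph osd pool set rbdpool min_size 1
```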

| Configuration | IOPS | Bandwidth | Avg. latency |
| --- | --- | --- | --- |
| Two HDDs as OSDs | 404 | 1659 kB/s | 316.06 ms |
| [Bcache] One SSD caching two HDDs (OSDs) | 4065 | 16.7 MB/s | 31.48 ms |
| [Bcache] Two SSDs caching two HDDs (OSDs) | 4158 | 17.0 MB/s | 30.78 ms |
| [Bcache] Two SSDs caching two HDDs (OSDs) + two SSDs for block.db and block.wal | 4124 | 16.9 MB/s | 31.03 ms |
| Two SSDs as OSDs | 10.2k | 41.7 MB/s | 12.57 ms |
| Comparison data | 16405 | 67 MB/s | 7.80 ms |

In short, adding a bcache layer raises 4K random write performance from 404 IOPS to roughly 4,100 IOPS (about 10x) and cuts average latency from 316 ms to about 31 ms; a second cache SSD or dedicated block.db/block.wal SSDs bring little further gain, while using the SSDs directly as OSDs reaches about 10,200 IOPS at 12.6 ms.




http://www.chinasem.cn/article/228739
