xfs logdev: a clean fix for a problem under cgroup IOPS throttling that ext4 can only solve with data=writeback

This post shows how putting the xfs log on a separate device (logdev) cleanly solves a cross-instance interference problem under cgroup IOPS throttling that, on ext4, can only be worked around with data=writeback.

Background

On Linux, ext4 and xfs are both journaling filesystems: before metadata is written, the journal entry for that metadata must be written first.

(The journal is similar to a database REDO log and is used for crash recovery.)

Metadata covers the filesystem's inodes, directories, and indirect blocks. Creating a file (or directory), changing a file's size, and changing a file's modification time all involve metadata writes.

In ext4 and xfs, metadata journal operations are serialized, which again resembles redo logging.

The cgroup blkio controller can limit a process's read/write IOPS, throughput, and so on against a specified block device.

When we throttle IOPS, the fact that "metadata journal operations are serialized" can make instances interfere with each other.

For example:

Take a block device and find its major and minor numbers.

#ll /dev/mapper/aliflash-lv0*  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv01 -> ../dm-0  
#ll /dev/dm-0  
brw-rw---- 1 root disk 253, 0 Jan  7 11:22 /dev/dm-0  

Create an xfs or ext4 filesystem on this block device and mount it at /data01.
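
For example (a sketch; the mkfs options here are illustrative, the exact flags used in the tests appear later in this post):

#mkfs.xfs -f /dev/mapper/aliflash-lv01  
#mount /dev/mapper/aliflash-lv01 /data01  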

Initialize two PostgreSQL database instances, each in its own directory under /data01.
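
A minimal sketch (the 1921 instance's data directory name is an assumption; only /data01/digoal/pg_root, the 1922 instance, appears in the ps output below):

initdb -D /data01/digoal/pg_root1921  
initdb -D /data01/digoal/pg_root  
pg_ctl start -D /data01/digoal/pg_root1921 -o "-p 1921"  
pg_ctl start -D /data01/digoal/pg_root -o "-p 1922"  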

Throttle one of the PostgreSQL clusters to 100 write IOPS against this block device (253:0).

ps -ewf|grep postgres  
digoal 24259     1  0 12:58 pts/4    00:00:00 /home/digoal/pgsql9.5/bin/postgres  -- listens on 1921  
digoal 24260 24259  0 12:58 ?        00:00:00 postgres: logger process              
digoal 24262 24259  0 12:58 ?        00:00:00 postgres: checkpointer process        
digoal 24263 24259  0 12:58 ?        00:00:00 postgres: writer process              
digoal 24264 24259  0 12:58 ?        00:00:00 postgres: wal writer process          
digoal 24265 24259  0 12:58 ?        00:00:00 postgres: autovacuum launcher process     
digoal 24266 24259  0 12:58 ?        00:00:00 postgres: stats collector process     
digoal 24293     1  0 12:58 pts/4    00:00:00 /home/digoal/pgsql9.5/bin/postgres -D /data01/digoal/pg_root  -- listens on 1922  
digoal 24294 24293  0 12:58 ?        00:00:00 postgres: logger process                                          
digoal 24296 24293  0 12:58 ?        00:00:20 postgres: checkpointer process                                    
digoal 24297 24293  0 12:58 ?        00:00:00 postgres: writer process                                          
digoal 24298 24293  0 12:58 ?        00:00:00 postgres: wal writer process                                      
digoal 24299 24293  0 12:58 ?        00:00:00 postgres: autovacuum launcher process                             
digoal 24300 24293  0 12:58 ?        00:00:00 postgres: stats collector process   

Throttle the 1921 instance's IOPS:

cd /sys/fs/cgroup/blkio/  
mkdir cg1  
cd cg1  
echo "253:0 100" > blkio.throttle.write_iops_device  
echo 24259 > tasks  
echo 24260 > tasks  
echo 24262 > tasks  
echo 24263 > tasks  
echo 24264 > tasks  
echo 24265 > tasks  
echo 24266 > tasks 
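
Equivalently, the pids (taken from the ps output above) can be written in a loop; each write to tasks moves one pid into the cgroup:

for p in 24259 24260 24262 24263 24264 24265 24266; do echo $p > /sys/fs/cgroup/blkio/cg1/tasks; done  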

Now start a stress load that modifies metadata heavily; create database is enough.

(create database bulk-copies the template database's data files and calls fsync, which produces a large number of metadata modifications and hence metadata journal writes.)

vi test.sh  
#!/bin/bash  

# database names must not start with a digit, hence the db prefix  
for ((i=0;i<100;i++))  
do  
psql -h 127.0.0.1 -p 1921 -c "create database db$i"  
done  

. ./test.sh  

Watch the block device with iostat: write IOPS are capped at 100.

iostat -x 1  
avg-cpu:  %user   %nice %system %iowait  %steal   %idle  
           0.00    0.00    0.03    3.12    0.00   96.84  
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util  
dm-0              0.00     0.00    0.00  100.00     0.00  1600.00    16.00     0.00    0.00   0.00   0.00  

Now connect to the 1922 instance and benchmark it:

pgbench -i -s 100 -h 127.0.0.1 -p 1922  
pgbench -M prepared -n -r -P 1 -c 96 -j 96 -T 100 -h 127.0.0.1 -p 1922  
progress: 1.0 s, 33.0 tps, lat 2.841 ms stddev 1.746  
progress: 2.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 3.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 4.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 5.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 6.0 s, 197.2 tps, lat 2884.437 ms stddev 2944.982  
progress: 7.0 s, 556.6 tps, lat 33.527 ms stddev 34.798  
progress: 8.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 9.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 10.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 11.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 12.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 13.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 14.0 s, 0.0 tps, lat -nan ms stddev -nan  
progress: 15.0 s, 0.0 tps, lat -nan ms stddev -nan  

As you can see, 1922's performance suffers from 1921's throttle, even though the block device itself can deliver hundreds of thousands of IOPS.

Why?

Because metadata journal operations are serialized: once the throttled 1921 instance's metadata journal writes slow down, the 1922 instance's metadata journal operations on the same filesystem stall behind them.

Even a plain select 1; is affected, because every time a frontend process establishes a connection to PostgreSQL, the backend creates a temporary catalog cache file global/pg_internal.init.<pid>.

Trace the second instance's postmaster process:

[root@digoal ~]# strace -T -f -p 24293 >./conn 2>&1  

Connect to the second instance:

postgres@digoal-> strace -T psql -h 127.0.0.1 -p 1922  
execve("/opt/pgsql/bin/psql", ["psql", "-h", "127.0.0.1", "-p", "1922"], [/* 34 vars */]) = 0 <0.009976>  
brk(0)                                  = 0x1747000 <0.000007>  

… 
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) // hangs here 

At this point a startup process can be seen in the system, forked by the postmaster. Note its pid, and match it against the conn trace file later.

[root@digoal postgresql-9.4.4]# ps -efw|grep start  
postgres 46147 24293  0 19:43 ?        00:00:00 postgres: postgres postgres 127.0.0.1(17947) startup  

Excerpt from the output of strace -T psql -h 127.0.0.1 -p 1922:  
setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 <0.000008>  
connect(3, {sa_family=AF_INET, sin_port=htons(1922), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000943>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000011>  
getsockopt(3, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 <0.000124>  
getsockname(3, {sa_family=AF_INET, sin_port=htons(17947), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0 <0.000015>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000008>  
sendto(3, "\0\0\0\10\4\322\26/", 8, MSG_NOSIGNAL, NULL, 0) = 8 <0.000050>  
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}]) <0.000600>  
recvfrom(3, "N", 16384, 0, NULL, NULL)  = 1 <0.000010>  
poll([{fd=3, events=POLLOUT|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}]) <0.000007>  
sendto(3, "\0\0\0T\0\3\0\0user\0postgres\0database\0p"..., 84, MSG_NOSIGNAL, NULL, 0) = 84 <0.000020>  

One poll call took 67 seconds to return:

poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}]) <67.436925>  
recvfrom(3, "R\0\0\0\10\0\0\0\0S\0\0\0\32application_name\0p"..., 16384, 0, NULL, NULL) = 322 <0.000017>  

Once the connection completes, look at the postmaster trace. Startup process 46147 spent 66 seconds in a single write call, because that write triggered metadata modifications.

[root@digoal ~]# grep "pid 46147" conn|less  
[pid 46147] mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0f1403d000 <0.000012>  
[pid 46147] unlink("global/pg_internal.init.46147") = -1 ENOENT (No such file or directory) <0.000059>  
[pid 46147] open("global/pg_internal.init.46147", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 14 <0.000068>  
[pid 46147] fstat(14, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0 <0.000013>  
[pid 46147] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0f1403c000 <0.000020>  
[pid 46147] write(14, "f2W\0008\1\0\0\0\0\0\0\200\6\0\0\0\0\0\0U2\0\0\0\0\0\0\0\0\0\0"..., 4096 <unfinished ...>  
[pid 46147] <... write resumed> )       = 4096 <66.317440>  
[pid 46147] --- SIGALRM (Alarm clock) @ 0 (0) ---  

The corresponding code: write_relcache_init_file@src/backend/utils/cache/relcache.c

Now trace this C file with systemtap:

[root@digoal ~]# cat trc.stp   
global f_start[999999]  

probe process("/opt/pgsql/bin/postgres").function("*@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c").call {   
  f_start[execname(), pid(), tid(), cpu()] = gettimeofday_ms()  
}  

probe process("/opt/pgsql/bin/postgres").function("*@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c").return {   
  t=gettimeofday_ms()  
  a=execname()  
  b=cpu()  
  c=pid()  
  d=pp()  
  e=tid()  
  if (f_start[a,c,e,b] && t-f_start[a,c,e,b]>1) {  
#    printf("time:%d, execname:%s, pp:%s, par:%s\n", t - f_start[a,c,e,b], a, d, $$locals$$)  
    printf("time:%d, execname:%s, pp:%s\n", t - f_start[a,c,e,b], a, d)  
  }  
}  

Because the startup process is forked on demand, the probe has to be attached like this:

[root@digoal ~]# cat t.sh  
#!/bin/bash  

for ((i=0;i<1;i=0))  
do  
PID=`ps -ewf|grep start|grep -v grep|awk '{print $2}'`  
stap -vp 5 -DMAXSKIPPED=9999999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./trc.stp -x $PID  
done  

Trace again; the output:

postgres@digoal-> strace -T psql -h 127.0.0.1 -p 1922  

[root@digoal ~]# . ./t.sh  
Pass 1: parsed user script and 111 library script(s) using 209296virt/36828res/3172shr/34516data kb, in 180usr/20sys/199real ms.  
Pass 2: analyzed script: 102 probe(s), 7 function(s), 4 embed(s), 1 global(s) using 223800virt/51400res/4172shr/49020data kb, in 80usr/60sys/142real ms.  
Pass 3: translated to C into "/tmp/stapbw7MDq/stap_b17f8a3318ccf4b972f4b84491bbdc1e_54060_src.c" using 223800virt/51744res/4504shr/49020data kb, in 10usr/40sys/57real ms.  
Pass 4: compiled C into "stap_b17f8a3318ccf4b972f4b84491bbdc1e_54060.ko" in 1440usr/370sys/1640real ms.  
Pass 5: starting run.  
time:6134, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
time:3, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
time:6, execname:postgres, pp:process("/opt/pgsql9.4.4/bin/postgres").function("write_item@/opt/soft_bak/postgresql-9.4.4/src/backend/utils/cache/relcache.c:4979").return  
… 
How can this be fixed? How do we isolate instances' IOPS so they don't interfere with each other?

Solution 1

Give each instance its own filesystem.

For example:

mkfs.ext4 /dev/mapper/vgdata01-lv01  
mkfs.ext4 /dev/mapper/vgdata01-lv02  
mount /dev/mapper/vgdata01-lv01 /data01  
mount /dev/mapper/vgdata01-lv02 /data02 

Put the two database instances on /data01 and /data02 respectively.

Throttling the IOPS of /dev/mapper/vgdata01-lv01 then has no effect on the other filesystem.

The downside of this approach: with many instances you must carve out many small filesystems, which defeats elastic space management and sharing.
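
For example (a sketch; this assumes lv01 maps to dm-0, i.e. major:minor 253:0, as in the ll output at the top of this post):

echo "253:0 100" > /sys/fs/cgroup/blkio/cg1/blkio.throttle.write_iops_device  

The throttle rule is bound to that one device, so I/O against the second filesystem on lv02 (a different minor number) is never affected.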

Solution 2

For ext4.

In the normal (ordered) write sequence, modifying metadata requires that the data blocks it refers to have already been written out, so a metadata write may be forced to flush dirty data pages first.


If flushing dirty data pages is slow, metadata writes get stuck behind them; and because metadata journal writes are serialized, other processes' metadata journal modifications inevitably stall too.

Mount the ext4 filesystem with data=writeback.

The principle: metadata writes no longer wait for the corresponding data writes to finish. The metadata can therefore be newer than the data it describes (for example a fresh inode whose referenced blocks do not exist yet, or still hold old, deleted contents).

Because metadata writes don't wait for data writes, the serialized journal path can no longer be blocked behind stalled data.

       data={journal|ordered|writeback}  
              Specifies the journalling mode for file data.  Metadata is always journaled.  To use modes other than ordered on the root filesystem, pass the mode to the kernel as boot parameter, e.g. rootflags=data=journal.  

              journal  
                     All data is committed into the journal prior to being written into the main filesystem.  

              ordered  
                     This is the default mode.  All data is forced directly out to the main file system prior to its metadata being committed to the journal.  

              writeback  
                     Data ordering is not preserved - data may be written into the main filesystem after its metadata has been committed to the journal.  This is rumoured to be the highest-throughput option.  It guarantees internal filesystem integrity, however it can allow old data to appear in files after a crash and journal recovery.  
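
A minimal sketch of such a mount (reusing the LV from the example above; other mount options omitted):

mount -o data=writeback /dev/mapper/aliflash-lv01 /data01  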

The downside: after a filesystem or OS crash, metadata and data may be inconsistent, leaving corrupt blocks.

Solution 3

Put the journal on a separate block device. When throttling IOPS, don't throttle the journal device (metadata journal I/O is small and fast, so there is no need to limit it); throttle only the data device.

This approach only works with xfs.

On ext4 it did not have the desired effect. The steps for a separate ext4 journal device follow; they didn't help in my tests, but you can try them yourself.

Create two logical volumes, one for the data and one for the journal:

#pvcreate /dev/dfa  
#pvcreate /dev/dfb  
#pvcreate /dev/dfc  
#vgcreate aliflash /dev/dfa /dev/dfb /dev/dfc  
#lvcreate -i 3 -I 8 -L 1T -n lv01 aliflash  
#lvcreate -i 3 -I 8 -L 2G -n lv02 aliflash  

Create the journal block device:

#mkfs.ext4 -O journal_dev -b 4096 /dev/mapper/aliflash-lv02  
mke2fs 1.41.12 (17-May-2010)  
Discarding device blocks: done                              
Filesystem label=  
OS type: Linux  
Block size=4096 (log=2)  
Fragment size=4096 (log=2)  
Stride=2 blocks, Stripe width=6 blocks  
0 inodes, 525312 blocks  
0 blocks (0.00%) reserved for the super user  
First data block=0  
0 block group  
32768 blocks per group, 32768 fragments per group  
0 inodes per group  
Superblock backups stored on blocks:   

Zeroing journal device: done     

Create the ext4 filesystem:

#mkfs.ext4 -E stride=16,stripe-width=48 -J device=/dev/mapper/aliflash-lv02 /dev/mapper/aliflash-lv01  
mke2fs 1.41.12 (17-May-2010)  
Using journal device's blocksize: 4096  
Discarding device blocks: done                              
Filesystem label=  
OS type: Linux  
Block size=4096 (log=2)  
Fragment size=4096 (log=2)  
Stride=16 blocks, Stripe width=48 blocks  
67117056 inodes, 268437504 blocks  
13421875 blocks (5.00%) reserved for the super user  
First data block=0  
Maximum filesystem blocks=4294967296  
8193 block groups  
32768 blocks per group, 32768 fragments per group  
8192 inodes per group  
Superblock backups stored on blocks:   
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,   
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,   
        102400000, 214990848  

Writing inode tables: done                              
Adding journal to device /dev/mapper/aliflash-lv02: done  
Writing superblocks and filesystem accounting information: done  

This filesystem will be automatically checked every 31 mounts or  
180 days, whichever comes first.  Use tune2fs -c or -i to override.  

#ll /dev/mapper/aliflash-lv0*  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv01 -> ../dm-0  
lrwxrwxrwx 1 root root 7 Jan  7 11:12 /dev/mapper/aliflash-lv02 -> ../dm-1  
#ll /dev/dm-0  
brw-rw---- 1 root disk 253, 0 Jan  7 11:22 /dev/dm-0  
#ll /dev/dm-1  
brw-rw---- 1 root disk 253, 1 Jan  7 11:22 /dev/dm-1 

Mount the filesystem:

#mount -o nobarrier,noatime,nodiratime,discard,defaults,nodelalloc /dev/mapper/aliflash-lv01 /data01

Repeating the method from the beginning of this post and throttling only the IOPS of /dev/mapper/vgdata01-lv01: the test shows this does not solve the problem.

Using the journal (log) device approach with xfs:

# mkfs.xfs -f -b size=4096 -l logdev=/dev/mapper/vgdata01-lv02,size=2136997888,sunit=16 -d agcount=9000,sunit=16,swidth=48 /dev/mapper/vgdata01-lv01   
# mount -t xfs -o nobarrier,nolargeio,logbsize=262144,noatime,nodiratime,swalloc,logdev=/dev/mapper/vgdata01-lv02 /dev/mapper/vgdata01-lv01 /data01  

Repeating the method from the beginning of this post and throttling only the IOPS of /dev/mapper/vgdata01-lv01: the test shows the problem is solved.
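
Putting it together (a sketch; the device numbers follow the earlier ll output, where the data LV is 253:0 and the log LV is 253:1):

# throttle writes to the data device only  
echo "253:0 100" > /sys/fs/cgroup/blkio/cg1/blkio.throttle.write_iops_device  
# no rule is added for 253:1, so xfs metadata journal writes to the logdev are never throttled  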

(End)