Common RAID Levels on Linux and Software RAID Creation


RAID can greatly improve both disk performance and reliability, which makes it well worth mastering. This article introduces the common RAID levels and how to create each of them as software RAID on Linux.


mdadm

  • Create a software RAID array
mdadm -C -v /dev/<device-name> -l<level> -n<count> <member-disks> [-x<count> <hot-spare-disks>]

-C: create a new array (--create)
-v: show details (--verbose)
-l: set the RAID level (--level=)
-n: number of active devices in the array (--raid-devices=)
-x: number of spare devices in the initial array (--spare-devices=); a hot spare automatically takes over when a working disk fails

  • View detailed information
mdadm -D /dev/<device-name>

-D: print the details of one or more md devices (--detail)

  • Check the RAID status
cat /proc/mdstat
  • Simulate a disk failure
mdadm -f /dev/<device-name> <disk>

-f: mark a member device as faulty (--fail)

  • Remove a failed disk
mdadm -r /dev/<device-name> <disk>

-r: remove (--remove)

  • Add a new disk as a hot spare
mdadm -a /dev/<device-name> <disk>

-a: add (--add)
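
A few companion operations are worth knowing alongside the commands above. A minimal sketch using standard mdadm options (a supplement here, not part of the original walkthrough):

mdadm -S /dev/md0                   # stop an array (--stop) before dismantling it
mdadm --zero-superblock /dev/sdb1   # wipe the md superblock so a member disk can be reused
mdadm -Ds                           # print ARRAY lines (--detail --scan) suitable for /etc/mdadm.conf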


RAID0

RAID 0, commonly called striping, combines two or more disks into one logical disk whose capacity is the sum of all members. Because writes are spread across the member disks in parallel, write speed increases, but there is no redundancy and no fault tolerance: if one physical disk fails, all data is lost. RAID 0 therefore suits large volumes of data with low safety requirements, such as audio and video file storage.

(figure: RAID 0 layout)

Experiment: create a RAID 0 array, format it, and mount it for use.

1. Add two 20 GB disks, partition them, and set the partition type ID to fd (Linux raid autodetect).

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
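
The partitions above can be produced with fdisk. A minimal interactive sketch for one disk, assuming /dev/sdb is a freshly added disk (repeat for each member):

fdisk /dev/sdb   # then, at the prompts:
                 #   n -> new partition, accept the defaults for one full-size partition
                 #   t -> change the partition type, enter: fd (Linux raid autodetect)
                 #   w -> write the table and exit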

2. Create the RAID 0 array.

[root@localhost ~]# mdadm -C -v /dev/md0 -l0 -n2 /dev/sd{b,c}1
mdadm: chunk size defaults to 512K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks

unused devices: <none>

4. View the RAID 0 array's details.

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Aug 25 15:28:13 2019
        Raid Level : raid0
        Array Size : 41906176 (39.96 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:28:13 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:0  (local to host localhost)
              UUID : 7ff54c57:b99a59da:6b56c6d5:a4576ccf
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md0
[root@localhost ~]# mount /dev/md0 /mnt/md0/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1013M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md0                xfs        40G   33M   40G   1% /mnt/md0
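
Note that neither the array configuration nor the mount survives a reboot by itself. A hedged sketch of the usual way to persist both on CentOS 7 (paths assumed; adjust to your setup):

mdadm -Ds >> /etc/mdadm.conf                              # record the array so it reassembles as /dev/md0 at boot
echo '/dev/md0 /mnt/md0 xfs defaults 0 0' >> /etc/fstab   # remount automatically at boot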

RAID1

RAID 1, commonly called mirroring, needs at least two disks, each holding an identical copy of the data for redundancy. Read speed improves somewhat; write speed is theoretically the same as a single disk, but in practice drops slightly because the data must be written to every disk. Its fault tolerance is the best of all the levels: the array keeps working as long as one disk survives. Its capacity utilization, however, is the lowest at only 50%, which also makes it the most expensive level. RAID 1 suits data with very high safety requirements, such as database files.

(figure: RAID 1 layout)

Experiment: create a RAID 1 array, format it, mount it, simulate a disk failure, and re-add a hot spare.

1. Add three 20 GB disks, partition them, and set the partition type ID to fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID 1 array with one hot spare.

[root@localhost ~]# mdadm -C -v /dev/md1 -l1 -n2 /dev/sd{b,c}1 -x1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [========>............]  resync = 44.6% (9345792/20953088) finish=0.9min speed=203996K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

4. View the RAID 1 array's details.

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:39:24 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 40% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        -      spare   /dev/sdd1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238272, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md1
[root@localhost ~]# mount /dev/md1 /mnt/md1/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md1                xfs        20G   33M   20G   1% /mnt/md1

7. Create test files.

[root@localhost ~]# touch /mnt/md1/test{1..9}.txt
[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate a disk failure.

[root@localhost ~]# mdadm -f /dev/md1 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1

9. Check the test files.

[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status. The hot spare has taken over and is rebuilding.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]
      [=====>...............]  recovery = 26.7% (5600384/20953088) finish=1.2min speed=200013K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:47:57 2019
             State : active, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 17% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 22

    Number   Major   Minor   RaidDevice State
       2       8       49        0      spare rebuilding   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       0       8       17        -      faulty   /dev/sdb1
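
A convenient way to follow the rebuild in real time, assuming the watch utility is installed:

watch -n 1 cat /proc/mdstat   # refresh the status once a second; Ctrl-C to quit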

11. Check the status again once the rebuild finishes.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:49:28 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       0       8       17        -      faulty   /dev/sdb1

12. Remove the failed disk.

[root@localhost ~]# mdadm -r /dev/md1 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:52:57 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

13. Re-add the disk as a new hot spare.

[root@localhost ~]# mdadm -a /dev/md1 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:53:32 2019
             State : active
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       3       8       17        -      spare   /dev/sdb1

RAID5

RAID 5 needs at least three disks. Data is distributed across every disk in the array along with parity information; the data and the parity can verify each other through the parity algorithm, so when one of them is lost, the RAID controller can recompute the missing piece from the remaining two. RAID 5 therefore tolerates the loss of at most one disk. Compared with the other levels, it strikes a balance between fault tolerance and cost, which makes it popular with most users; ordinary disk arrays most commonly use RAID 5.
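
A quick worked example: with the three 20 GB active members used below, one disk's worth of space holds parity, so usable capacity is (3 - 1) × 20 GB = 40 GB, which matches the Array Size of 39.96 GiB reported in step 4.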

(figure: RAID 5 layout)

Experiment: create a RAID 5 array, format it, mount it, simulate a disk failure, and re-add a hot spare.

1. Add four 20 GB disks, partition them, and set the partition type ID to fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID 5 array with one hot spare.

[root@localhost ~]# mdadm -C -v /dev/md5 -l5 -n3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [====>................]  recovery = 24.1% (5057340/20953088) finish=1.3min speed=202293K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

4. View the RAID 5 array's details.

[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:15:29 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       3       8       65        -      spare   /dev/sde1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md5
[root@localhost ~]# mount /dev/md5 /mnt/md5/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md5                xfs        40G   33M   40G   1% /mnt/md5

7. Create test files.

[root@localhost ~]# touch /mnt/md5/test{1..9}.txt
[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate a disk failure.

[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5

9. Check the test files.

[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status. The hot spare has taken over and is rebuilding.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [====>................]  recovery = 21.0% (4411136/20953088) finish=1.3min speed=210054K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:21:31 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 23

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       0       8       17        -      faulty   /dev/sdb1

11. Check the status again once the rebuild finishes.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:23:09 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 39

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       0       8       17        -      faulty   /dev/sdb1

12. Remove the failed disk.

[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:01 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 40

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

13. Re-add the disk as a new hot spare.

[root@localhost ~]# mdadm -a /dev/md5 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:22 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 41

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       5       8       17        -      spare   /dev/sdb1

RAID6

RAID 6 is an improvement on RAID 5: it adds a second parity block, raising the number of tolerable disk failures from RAID 5's one to two. Since two disks in the same array rarely fail at the same time, RAID 6 buys higher data safety than RAID 5 at the cost of one extra disk.
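
By the same arithmetic as the RAID 5 example: with the four active 20 GB members used below, two disks' worth of space holds the double parity, so usable capacity is (4 - 2) × 20 GB = 40 GB, again matching the Array Size in step 4.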

(figure: RAID 6 layout)

Experiment: create a RAID 6 array, format it, mount it, simulate disk failures, and re-add hot spares.

1. Add six 20 GB disks, partition them, and set the partition type ID to fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdg1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID 6 array with two hot spares.

[root@localhost ~]# mdadm -C -v /dev/md6 -l6 -n4 /dev/sd[b-e]1 -x2 /dev/sd[f-g]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [===>.................]  resync = 18.9% (3962940/20953088) finish=1.3min speed=208575K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

4. View the RAID 6 array's details.

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:34:43 2019
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 10% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        -      spare   /dev/sdf1
       5       8       97        -      spare   /dev/sdg1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md6
[root@localhost ~]# mount /dev/md6 /mnt/md6/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md6                xfs        40G   33M   40G   1% /mnt/md6

7. Create test files.

[root@localhost ~]# touch /mnt/md6/test{1..9}.txt
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate failures on two disks.

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6

9. Check the test files.

[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status. Both hot spares have taken over and are rebuilding.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/2] [__UU]
      [====>................]  recovery = 23.8% (4993596/20953088) finish=1.2min speed=208066K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:41:09 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 13% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 27

    Number   Major   Minor   RaidDevice State
       5       8       97        0      spare rebuilding   /dev/sdg1
       4       8       81        1      spare rebuilding   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1

11. Check the status again once the rebuild finishes.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:42:42 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 46

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1

12. Remove the failed disks.

[root@localhost ~]# mdadm -r /dev/md6 /dev/sd{b,c}1
mdadm: hot removed /dev/sdb1 from /dev/md6
mdadm: hot removed /dev/sdc1 from /dev/md6
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:43:43 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 47

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

13. Re-add the disks as new hot spares.

[root@localhost ~]# mdadm -a /dev/md6 /dev/sd{b,c}1
mdadm: added /dev/sdb1
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:44:01 2019
             State : clean
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 49

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       6       8       17        -      spare   /dev/sdb1
       7       8       33        -      spare   /dev/sdc1

RAID10

RAID 10 first mirrors the data and then stripes it: the RAID 1 layer provides the redundant copies, while the RAID 0 layer handles the striped reads and writes. It needs at least four disks, paired off into RAID 1 sets which are then combined with RAID 0. Like RAID 1, its capacity utilization is only 50%, so half the raw disk space is sacrificed, but in exchange it delivers 200% of single-disk speed plus protection against disk failure: data stays safe as long as the simultaneously failed disks are not in the same RAID 1 pair. RAID 10 offers better performance than RAID 5, but this nested layout does not expand well and is comparatively expensive.
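
Concretely, with the four 20 GB disks used below, each RAID 1 pair contributes 20 GB of usable space, and striping the two pairs yields about 40 GB: 50% of the 80 GB raw capacity, as the df output in step 9 confirms.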

(figure: RAID 10 layout)

Experiment: create a RAID 10 array, format it, mount it, simulate disk failures, and re-add hot spares.

1. Add four 20 GB disks, partition them, and set the partition type ID to fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create two RAID 1 arrays, without hot spares.

[root@localhost ~]# mdadm -C -v /dev/md101 -l1 -n2 /dev/sd{b,c}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md101 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md101 started.
[root@localhost ~]# mdadm -C -v /dev/md102 -l1 -n2 /dev/sd{d,e}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md102 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md102 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 48.4% (10148224/20953088) finish=0.8min speed=200056K/sec

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=============>.......]  resync = 69.6% (14604672/20953088) finish=0.5min speed=200052K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

4. View the details of both RAID 1 arrays.

[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:53:58 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 62% complete

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 9

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:54:02 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 42% complete

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

5. Create the RAID 10: stripe a RAID 0 array across the two RAID 1 arrays.

[root@localhost ~]# mdadm -C -v /dev/md10 -l0 -n2 /dev/md10{1,2}
mdadm: chunk size defaults to 512K
mdadm: Fail create md10 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
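
One caveat with nested arrays (an operational note, not part of the original steps): for the stack to reassemble in the right order at boot, record all three arrays while they are active, e.g.:

mdadm -Ds >> /etc/mdadm.conf   # emits ARRAY lines for md101, md102 and md10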

6. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

7. View the RAID 10 details.

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102

8. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10              isize=512    agcount=16, agsize=654208 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10467328, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5112, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

9. Mount and use it.

[root@localhost ~]# mkdir /mnt/md10
[root@localhost ~]# mount /dev/md10 /mnt/md10/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md10               xfs        40G   33M   40G   1% /mnt/md10

10. Create test files.

[root@localhost ~]# touch /mnt/md10/test{1..9}.txt
[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

11. Simulate a failure in each mirror pair.

[root@localhost ~]# mdadm -f /dev/md101 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md101
[root@localhost ~]# mdadm -f /dev/md102 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md102

12. Check the test files.

[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

13. Check the status. Each RAID 1 is degraded, but the RAID 0 on top is unaffected.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sde1[1] sdd1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]

md101 : active raid1 sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:01:11 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 23

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
       0       8       17        -      faulty   /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:00:43 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1
       0       8       49        -      faulty   /dev/sdd1
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102

14. Remove the failed disks.

[root@localhost ~]# mdadm -r /dev/md101 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md101
[root@localhost ~]# mdadm -r /dev/md102 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md102
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:04:59 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 26

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:05:07 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 20

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1

15. Re-add the disks as new hot spares.

[root@localhost ~]# mdadm -a /dev/md101 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -a /dev/md102 /dev/sdd1
mdadm: added /dev/sdd1

16. Check the status again; the re-added disks rebuild into the mirrors.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 23.8% (5000704/20953088) finish=1.2min speed=208362K/sec

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [======>..............]  recovery = 32.0% (6712448/20953088) finish=1.1min speed=203407K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:28 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 45

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:36 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

Comparison of Common RAID Levels

| Level  | Disks           | Capacity / utilization | Read   | Write  | Redundancy                                       |
| ------ | --------------- | ---------------------- | ------ | ------ | ------------------------------------------------ |
| RAID0  | N               | sum of all N disks     | N×     | N×     | none; one failure loses all data                 |
| RAID1  | N (even)        | 50%                    | ↑      | ↓      | writes go to both devices; tolerates one failure |
| RAID5  | N ≥ 3           | (N-1)/N                | ↑↑     | ↓      | computed parity; tolerates one failure           |
| RAID6  | N ≥ 4           | (N-2)/N                | ↑↑     | ↓↓     | double parity; tolerates two failures            |
| RAID10 | N (even, N ≥ 4) | 50%                    | (N/2)× | (N/2)× | tolerates one failed disk per mirror pair        |

A Few Words

The operations covered here are simple, but the many status checks take up most of the space. Focus on the key points: the procedure is the same routine each time, just repeated.

Reprinted from: https://www.cnblogs.com/llife/p/11408941.html
