Learning Ceph Together 01: Installing Ceph Nautilus

2024-03-14 14:32

This article walks through installing a Ceph Nautilus cluster step by step; hopefully it is a useful reference for anyone setting one up.

ceph install


Environment preparation

  1. Two network interfaces per node
  2. Two extra disks per node

hosts

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
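Typing ten host entries by hand invites typos. As a sketch (assuming the addressing plan above, cluster IPs starting at 192.168.126.101 and public IPs at 192.168.48.11), the entries can be generated instead and appended to `/etc/hosts` on every node:

```shell
# Generate /etc/hosts entries for both networks from a single host list.
# IPs follow the addressing plan used in this article.
gen_hosts() {
    i=0
    for h in ceph01 ceph02 ceph03 ceph04 ceph-admin; do
        i=$((i + 1))
        printf '192.168.126.%s %s\n' "$((100 + i))" "$h"  # cluster network
        printf '192.168.48.%s %s\n'  "$((10 + i))"  "$h"  # public network
    done
}
gen_hosts   # review the output, then append it to /etc/hosts on every node
```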
### Upstream requires kernel 4.10 or newer on all nodes
uname -r
5.2.2-1.el7.elrepo.x86_64
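A quick way to verify the 4.10 minimum on each node, sketched here with GNU `sort -V` doing the version comparison (the helper name `kernel_ok` is made up for this example):

```shell
# Return success if a kernel release string is at least 4.10.
kernel_ok() {
    ver="${1%%-*}"   # "5.2.2-1.el7.elrepo.x86_64" -> "5.2.2"
    # sort -V orders versions; if 4.10 sorts first, $ver >= 4.10
    [ "$(printf '4.10\n%s\n' "$ver" | sort -V | head -n1)" = "4.10" ]
}

if kernel_ok "$(uname -r)"; then
    echo "kernel $(uname -r) OK (>= 4.10)"
else
    echo "kernel $(uname -r) too old, upgrade first" >&2
fi
```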

Time synchronization

 yum -y install  chrony

All nodes synchronize time with ceph-admin, which acts as the local NTP server.

[root@ceph-admin ~]# vim /etc/chrony.conf 
....
#allow 192.168.0.0/16
allow 192.168.48.0/24
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd

On ceph01, ceph02, ceph03, and ceph04, delete the other server entries so only one remains:

vim /etc/chrony.conf
...
server 192.168.48.15 iburst

systemctl enable chronyd
systemctl start chronyd

[root@ceph01 ~]# chronyc sources -v
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ceph-admin                    3   6    17    12   +100us[ +136us] +/-   52ms
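The same edit has to be made on all four OSD nodes. A hedged helper, assuming passwordless root ssh to the nodes (adapt to `sudo` via cephadm if you prefer); with `DRY_RUN=1` (the default here) it only prints the commands so you can review them first:

```shell
# Push the single chrony "server" line to each node and restart chronyd.
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

for node in ceph01 ceph02 ceph03 ceph04; do
    run ssh "root@$node" \
        "sed -i '/^server /d' /etc/chrony.conf &&
         echo 'server 192.168.48.15 iburst' >> /etc/chrony.conf &&
         systemctl enable chronyd && systemctl restart chronyd"
done
```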

Network layout

cluster network
192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
public network
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin

Disk layout

Each node has two extra 10 GB disks, sdb and sdc:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    2G  0 lvm  
  └─centos-home 253:2    0   47G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 

Prepare the Ceph yum repository

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md

Prepare the EPEL yum repository

cat > /etc/yum.repos.d/epel.repo << 'EOF'   # quote EOF so $basearch is written literally
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

Create a dedicated cephadm user account on every node

useradd cephadm
echo "ceph" | passwd --stdin cephadm

Passwordless sudo for cephadm

vim /etc/sudoers.d/cephadm

cephadm  ALL=(root)  NOPASSWD: ALL

Set up SSH key authentication

su - cephadm
ssh-keygen 
ssh-copy-id cephadm@ceph-admin
ssh-copy-id cephadm@ceph01
ssh-copy-id cephadm@ceph02
ssh-copy-id cephadm@ceph03
ssh-copy-id cephadm@ceph04

Install packages on the ceph-admin node

[root@ceph-admin ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common

On all nodes except ceph-admin, install ceph and ceph-radosgw:

yum -y install ceph ceph-radosgw

RADOS cluster

Run the following commands on the ceph-admin node as the cephadm user.

Create a working directory for ceph-deploy

[cephadm@ceph-admin ~]$ mkdir ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/

Install Ceph on the cluster nodes (--no-adjust-repos keeps the repositories we configured above):

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy install  ceph01 ceph02 ceph03 ceph04  --no-adjust-repos

Create the monitor (mon) cluster

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy new  --cluster-network 192.168.126.0/24 --public-network 192.168.48.0/24  ceph01 ceph02 ceph03
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ cat ceph.conf 
[global]
fsid = a384da5c-a9ae-464a-8a92-e23042e5d267
public_network = 192.168.48.0/24
cluster_network = 192.168.126.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.48.11,192.168.48.12,192.168.48.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Initialize the monitors

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon create-initial

To add more monitor nodes later:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add <hostname>

Distribute the keyring and configuration file to the nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03 ceph04 ceph-admin

On every node, grant the cephadm user access to the admin keyring:

setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring

Configure the Manager (mgr) node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph04

Add a standby Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph03

Check the cluster health

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 29m)
    mgr: ceph04(active, since 30s), standbys: ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
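Scripts that build on the cluster later (pool creation, client setup) often want to fail fast unless it is healthy. A minimal sketch that greps `ceph -s` output for HEALTH_OK (the helper name `health_ok` is made up for this example):

```shell
# Succeed only if the given `ceph -s` output reports HEALTH_OK.
health_ok() { printf '%s' "$1" | grep -q 'HEALTH_OK'; }

status="$(ceph -s 2>/dev/null || true)"   # empty if ceph isn't reachable
if health_ok "$status"; then
    echo "cluster is healthy, continuing"
else
    echo "cluster not healthy yet" >&2
fi
```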

Wipe the disks (this destroys any data on them)

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdb

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdc

Add the OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdb

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdc
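The sixteen near-identical ceph-deploy invocations above can be collapsed into two nested loops. A sketch, run from ~/ceph-cluster as cephadm; `build_cmds` (a made-up helper name) only prints the commands so you can review them first, then pipe the output to `sh` to execute:

```shell
# Print the disk-zap and osd-create commands for every node/disk pair.
build_cmds() {
    for node in ceph01 ceph02 ceph03 ceph04; do
        for disk in /dev/sdb /dev/sdc; do
            echo "ceph-deploy disk zap $node $disk"
            echo "ceph-deploy osd create $node --data $disk"
        done
    done
}
build_cmds          # review, then: build_cmds | sh
```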

You can then use the "ceph-deploy osd list" command to list the OSDs on a given node:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef80f60ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fef813b1de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       block uuid                sKPwP3-o1L3-xbBu-az3d-N0MB-XOq9-0psakY
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.4 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       block uuid                1vdMB5-bjal-IKY2-PBzw-S0c1-48kV-4Hfszq
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       osd id                    4
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdc

The administrator can also inspect the OSDs with the ceph command directly:

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 58s), 8 in (since 58s); epoch: e33

Or use the following commands for more detail:

[cephadm@ceph-admin ceph-cluster]$ ceph osd ls
0
1
2
3
4
5
6
7
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 37m)
    mgr: ceph04(active, since 7m), standbys: ceph03
    osd: 8 osds: 8 up (since 115s), 8 in (since 115s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 

That wraps up this walkthrough of installing Ceph Nautilus; hopefully it is helpful.


