Learning Ceph Together 01: Installing Ceph Nautilus

2024-03-14 14:32

ceph install


Environment preparation

  1. Two network interfaces per node
  2. Two data disks per node

hosts

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin

192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### The official documentation requires a kernel version of 4.10 or later on all nodes
uname -r
5.2.2-1.el7.elrepo.x86_64
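
If the running kernel is older than 4.10, one common approach is to install a mainline kernel from ELRepo and boot into it. The kernel shown above is itself an elrepo build, so this is likely how it was obtained; the following is a sketch, assuming network access to elrepo.org:

# install the ELRepo release package and a mainline kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel -y install kernel-ml
# make the new kernel the default boot entry, then reboot into it
grub2-set-default 0
reboot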

Time synchronization

 yum -y install  chrony

All nodes synchronize their time with ceph-admin.

[root@ceph-admin ~]# vim /etc/chrony.conf 
....
#allow 192.168.0.0/16
allow 192.168.48.0/24
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd

On ceph01, ceph02, ceph03, and ceph04, remove the other server entries so that only one server line remains:

vim /etc/chrony.conf
...
server 192.168.48.15 iburst
systemctl enable chronyd
systemctl start chronyd
[root@ceph01 ~]# chronyc sources -v
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ceph-admin                    3   6    17    12   +100us[ +136us] +/-   52ms
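
As an extra sanity check (not part of the original steps), chronyc tracking confirms that the node is actually locked to ceph-admin:

# show the current synchronization source, stratum, and offset
chronyc tracking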

Network layout

Cluster network
192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
Public network
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin

Disk layout

Each node has two 10 GB disks, sdb and sdc:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    2G  0 lvm  
  └─centos-home 253:2    0   47G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 

Prepare the ceph yum repository

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md

Prepare the epel yum repository

cat > /etc/yum.repos.d/epel.repo << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF
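
Note that the heredoc delimiter is quoted ('EOF') so the shell does not expand $basearch before yum sees it. Once both repo files are in place on every node, rebuilding the yum cache is an optional step that surfaces repo typos early:

# drop stale metadata and rebuild the cache for the new repos
yum clean all
yum makecache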

Create a dedicated ceph user account, cephadm

useradd cephadm
echo "ceph" | passwd --stdin cephadm

Passwordless sudo

vim /etc/sudoers.d/cephadm

cephadm  ALL=(root)  NOPASSWD: ALL
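
visudo can validate the drop-in file before it is relied on (a standard check):

# check the sudoers fragment for syntax errors (exit code 0 means OK)
visudo -cf /etc/sudoers.d/cephadm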

Set up SSH key authentication

su - cephadm
ssh-keygen 
ssh-copy-id cephadm@ceph-admin
ssh-copy-id cephadm@ceph01
ssh-copy-id cephadm@ceph02
ssh-copy-id cephadm@ceph03
ssh-copy-id cephadm@ceph04
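
So that ceph-deploy logs in as cephadm without passing --username on every invocation, a ~/.ssh/config for the cephadm user on ceph-admin can pin the remote user per host (a small optional sketch; adjust hostnames as needed):

# ~/.ssh/config on ceph-admin, owned by cephadm
Host ceph01 ceph02 ceph03 ceph04 ceph-admin
    User cephadm

# ssh refuses a world-readable config, so tighten the permissions
chmod 600 ~/.ssh/config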

Installation on the ceph-admin node

[root@ceph-admin ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common

Install ceph and ceph-radosgw on all nodes except ceph-admin

yum -y install ceph ceph-radosgw

RADOS cluster

Run the following commands on the ceph-admin node as the cephadm user.

Create a ceph working directory

[cephadm@ceph-admin ~]$ mkdir ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/

Install ceph on the cluster nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy install  ceph01 ceph02 ceph03 ceph04  --no-adjust-repos

Create the initial mon cluster configuration

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy new  --cluster-network 192.168.126.0/24 --public-network 192.168.48.0/24  ceph01 ceph02 ceph03
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ cat ceph.conf 
[global]
fsid = a384da5c-a9ae-464a-8a92-e23042e5d267
public_network = 192.168.48.0/24
cluster_network = 192.168.126.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.48.11,192.168.48.12,192.168.48.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Initialize the monitors

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon create-initial
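
When create-initial succeeds, it also gathers the bootstrap keyrings into the working directory; listing it should show something like the following (file names from a typical ceph-deploy 2.0.1 run, not captured from this cluster):

[cephadm@ceph-admin ceph-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring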

To add more monitor nodes later:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add <hostname>
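
For example, ceph04 (the one host in this layout without a mon) could be promoted like this; hypothetical, since the original cluster keeps three monitors:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add ceph04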

Distribute the keyring and configuration file to the nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03 ceph04 ceph-admin

On all nodes, grant cephadm access to the admin keyring

setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring
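
Since this has to run on every node, a small loop from ceph-admin can apply it everywhere (a sketch, relying on the SSH trust and passwordless sudo set up earlier):

# grant cephadm access to the admin keyring on all nodes
for host in ceph-admin ceph01 ceph02 ceph03 ceph04; do
  ssh cephadm@$host 'sudo setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring'
done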

Configure a Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph04

Add a standby Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph03

Check the cluster's health status

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 29m)
    mgr: ceph04(active, since 30s), standbys: ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Wipe the disks

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdb

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdc
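The eight zap invocations can be collapsed into a loop (an equivalent sketch; the same pattern works for the osd create commands in the next step):

# wipe both data disks on every node
for node in ceph01 ceph02 ceph03 ceph04; do
  for dev in /dev/sdb /dev/sdc; do
    ceph-deploy disk zap $node $dev
  done
done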

Add OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdb

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdc

Afterwards, the "ceph-deploy osd list" command can be used to list the OSDs on a given node:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef80f60ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fef813b1de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       block uuid                sKPwP3-o1L3-xbBu-az3d-N0MB-XOq9-0psakY
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.4 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       block uuid                1vdMB5-bjal-IKY2-PBzw-S0c1-48kV-4Hfszq
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       osd id                    4
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdc

In fact, administrators can also view OSD information with the ceph command:

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 58s), 8 in (since 58s); epoch: e33

Or use the following commands for related information:

[cephadm@ceph-admin ceph-cluster]$ ceph osd ls
0
1
2
3
4
5
6
7
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 37m)
    mgr: ceph04(active, since 7m), standbys: ceph03
    osd: 8 osds: 8 up (since 115s), 8 in (since 115s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 
