Learn Ceph Together 01: Installing Ceph Nautilus

2024-03-14 14:32

This article walks through installing a Ceph Nautilus cluster (part 01 of the "Learn Ceph Together" series); hopefully it serves as a useful reference for anyone doing the same setup.

ceph install


Environment preparation

  1. Two network interfaces per node (one public, one cluster)
  2. Two extra data disks per node

hosts

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
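
To keep /etc/hosts identical on every node, one option is to push the file out from ceph-admin. A minimal sketch, assuming root SSH access to each node:

for host in ceph01 ceph02 ceph03 ceph04; do
  scp /etc/hosts root@$host:/etc/hosts
done
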
### The official documentation requires a kernel version of 4.10 or later on all nodes
uname -r
5.2.2-1.el7.elrepo.x86_64
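
The kernel shown above comes from ELRepo. If your nodes still run the stock CentOS 7 3.10 kernel, one common approach is to install a mainline kernel from ELRepo and reboot (an illustration, not part of the original article):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0
reboot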

Time synchronization

yum -y install chrony

All nodes synchronize their clocks against ceph-admin

[root@ceph-admin ~]# vim /etc/chrony.conf 
....
#allow 192.168.0.0/16
allow 192.168.48.0/24
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd

On ceph01, ceph02, ceph03, and ceph04, delete the other server entries and keep a single server line pointing at ceph-admin

vim /etc/chrony.conf
...
server 192.168.48.15 iburst
systemctl enable chronyd
systemctl start chronyd
[root@ceph01 ~]# chronyc sources -v
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ceph-admin                    3   6    17    12   +100us[ +136us] +/-   52ms

Network layout

cluster network
192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin
public network
192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin

Disk layout

Prepare two 10 GB disks on each node, sdb and sdc:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    2G  0 lvm  
  └─centos-home 253:2    0   47G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 

Prepare the Ceph yum repository

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
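
After writing the repo file, rebuilding the yum cache confirms the repository resolves (a routine check, not in the original):

yum clean all
yum makecache
yum repolist | grep -i ceph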

Prepare the EPEL yum repository

# Quote the heredoc delimiter so $basearch is written literally rather than expanded by the shell
cat > /etc/yum.repos.d/epel.repo << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

Prepare a dedicated Ceph user account, cephadm

useradd cephadm
echo "ceph" | passwd --stdin cephadm

Passwordless sudo

vim /etc/sudoers.d/cephadm

cephadm  ALL=(root)  NOPASSWD: ALL
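
It is worth validating the new sudoers fragment before logging out (visudo -c checks syntax without applying anything):

visudo -cf /etc/sudoers.d/cephadm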

Set up SSH key authentication

su - cephadm
ssh-keygen 
ssh-copy-id cephadm@ceph-admin
ssh-copy-id cephadm@ceph01
ssh-copy-id cephadm@ceph02
ssh-copy-id cephadm@ceph03
ssh-copy-id cephadm@ceph04
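
A quick way to confirm key-based login works to every node (an optional check, not in the original):

for host in ceph-admin ceph01 ceph02 ceph03 ceph04; do
  ssh cephadm@$host hostname
done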

Install the deployment tools on the ceph-admin node

[root@ceph-admin ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common

On every node except ceph-admin, install ceph and ceph-radosgw

yum -y install ceph ceph-radosgw

RADOS cluster

Run the following commands on the ceph-admin node as the cephadm user.

Create a Ceph working directory

[cephadm@ceph-admin ~]$ mkdir ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/

Install Ceph on the cluster nodes (--no-adjust-repos tells ceph-deploy to use the repositories configured above instead of rewriting them)

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy install  ceph01 ceph02 ceph03 ceph04  --no-adjust-repos

Create the monitor cluster

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy new  --cluster-network 192.168.126.0/24 --public-network 192.168.48.0/24  ceph01 ceph02 ceph03
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ cat ceph.conf 
[global]
fsid = a384da5c-a9ae-464a-8a92-e23042e5d267
public_network = 192.168.48.0/24
cluster_network = 192.168.126.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.48.11,192.168.48.12,192.168.48.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Initialize the monitors

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon create-initial
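
When create-initial finishes, ceph-deploy gathers the bootstrap keyrings into the working directory; listing them is a quick sanity check (the names below are the standard files ceph-deploy produces):

[cephadm@ceph-admin ceph-cluster]$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring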

To add more monitor nodes later:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add <hostname>

Distribute the keyring and configuration file to the nodes

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03 ceph04 ceph-admin

On every node, grant the cephadm user access to the admin keyring:

setfacl -m u:cephadm:rw /etc/ceph/ceph.client.admin.keyring
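
To confirm the ACL took effect, the entry can be inspected (an optional check):

getfacl /etc/ceph/ceph.client.admin.keyring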

Configure the Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph04

Add a standby Manager node

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mgr create ceph03

Check the cluster's health

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 29m)
    mgr: ceph04(active, since 30s), standbys: ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Wipe the disks

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph01 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph02 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph03 /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph04 /dev/sdc

Add OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph01 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph02 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph03 --data /dev/sdc
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph04 --data /dev/sdc
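
The zap/create pairs above are repetitive; the same work can be scripted from ceph-admin (a sketch equivalent to the commands above):

for host in ceph01 ceph02 ceph03 ceph04; do
  for dev in /dev/sdb /dev/sdc; do
    ceph-deploy disk zap $host $dev
    ceph-deploy osd create $host --data $dev
  done
done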

Afterwards, you can list the OSDs on a given node with the "ceph-deploy osd list" command:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fef80f60ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fef813b1de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-25b4e0c5-0297-41c4-8c84-3166cf46e5a6/osd-block-d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       block uuid                sKPwP3-o1L3-xbBu-az3d-N0MB-XOq9-0psakY
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  d9349281-5ae9-49d7-8c8c-ca3774320fbd
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.4 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]       /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       block device              /dev/ceph-f8d33be2-c8c2-4e7f-97ed-892cbe14487c/osd-block-e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       block uuid                1vdMB5-bjal-IKY2-PBzw-S0c1-48kV-4Hfszq
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       cluster fsid              8a83b874-efa4-4655-b070-704e63553839
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       osd fsid                  e3102a32-dfb3-42c7-8d6f-617c030808f7
[ceph01][DEBUG ]       osd id                    4
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       devices                   /dev/sdc

In fact, administrators can also view OSD information with the ceph command itself:

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 58s), 8 in (since 58s); epoch: e33

Or use the following commands to get the same information:

[cephadm@ceph-admin ceph-cluster]$ ceph osd ls
0
1
2
3
4
5
6
7
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 37m)
    mgr: ceph04(active, since 7m), standbys: ceph03
    osd: 8 osds: 8 up (since 115s), 8 in (since 115s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 
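
With all eight OSDs up and in, a quick functional smoke test is to create a pool and store one object (an optional illustration; the pool name and PG count here are arbitrary choices, not from the original):

ceph osd pool create testpool 64
ceph osd pool application enable testpool rbd
echo hello > /tmp/obj.txt
rados -p testpool put obj1 /tmp/obj.txt
rados -p testpool ls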
