
Installing Ceph Pacific with Cephadm: A Full-Feature Walkthrough


In an article written a year ago I mentioned the cephadm installation tool. It had only just been released, and quite a few features could not yet be installed. After a year of waiting, I found about a month ago that Ceph 16 had most of the basic features in place, so I started studying cephadm in earnest. Since I wanted to write a fairly comprehensive article, rather than one covering only the basics like most existing posts, I ran into problems installing iscsi and ingress that dozens of install attempts did not solve. When Ceph recently released 16.2.5, everything finally worked, and that is how this article came to be.

PS: A personal takeaway: unless you really have to, don't chase brand-new open-source software; it is an exhausting grind, and new releases always have some bugs.


Cephadm is the installation tool introduced with the Ceph v15.2.0 (Octopus) release; it does not support older Ceph versions. Cephadm does not depend on external configuration tools such as Ansible, Rook, or Salt. Instead, the manager daemon connects to hosts over SSH, and through that connection it can add, remove, and update Ceph containers.

The latest releases of Red Hat Ceph 5 and SUSE Ceph 7 already use Cephadm, so Cephadm is the future of Ceph installation tooling, and it is now largely mature: the NFS and iSCSI services are officially declared stable, services such as CIFS will be added later, and the remaining features are being polished continuously. It is time to learn Cephadm systematically.

The figure below shows the deployment architecture of the new Ceph releases: a single orchestrator interface plugs into two orchestration backends, rook and cephadm, and is driven from above by the "ceph orch" command line or the Ceph Dashboard.

This orchestration is the same idea as container orchestration: you can think of Cephadm as a miniature Kubernetes. Rook is a little different; it is an adapter layer onto Kubernetes, which performs the actual orchestration, whereas Cephadm implements its own scheduling and orchestration logic.

The two orchestration backends differ somewhat in the features they support, as the following table shows:

Cephadm manages the entire lifecycle of a Ceph cluster. The lifecycle starts with the bootstrap process, in which cephadm creates a single-node Ceph cluster on one host, consisting of one MON and one MGR.

Once the single-node cluster exists, Cephadm expands it through the orchestration interface ("day 2" commands): adding hosts and deploying the required Ceph daemons and services. These actions can be performed from the Ceph command-line interface (CLI) or from the dashboard (GUI).

cephadm is still being developed and refined. Some features are thinly documented, for example RGW; others have not settled on a final design and may still change substantially, for example ingress (previously called rgw-ha) and cephfs-mirror.

Two terms appeared above: service and daemon. Both concepts matter a great deal in cephadm and run through the entire lifecycle management of Ceph. ceph-deploy had no such notion: to deploy three MONs, you simply deployed a MON process on each of three hosts. Because cephadm introduces orchestration, deploying a highly available MON setup means declaring a mon service, which in practice is still backed by three containers running mon. That raises another question: do those three MON containers run on one host or on three? That is decided by the "placement specification" (covered below). The relationship between service and daemon is illustrated in the following figure:

As mentioned above, the placement specification defines what deploying a service means concretely: how many daemons it gets and on which hosts they run; the orchestrator then schedules accordingly. cephadm supports five kinds of placement specifications, sketched below:
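Based on the upstream cephadm documentation, the five forms look roughly as follows (a sketch using the mon and node-exporter services as examples; adapt the service names to your own cluster):

# 1) Explicit host list
ceph orch apply mon --placement="ceph1 ceph2 ceph3"
# 2) Count: let the orchestrator pick the hosts
ceph orch apply mon --placement=3
# 3) Label: any host carrying the label qualifies
ceph orch apply mon --placement="label:mon"
# 4) Host pattern, glob-matched against hostnames
ceph orch apply mon --placement="ceph*"
# 5) All hosts
ceph orch apply node-exporter --placement="*"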

Let's look at how Cephadm displays services. In the PLACEMENT column below, * means the service is deployed on every host, count:1 means a count-based placement, and ceph1;ceph2;ceph3 means an explicit host list. RUNNING shows running daemons versus desired daemons: mon showing 3/5 means the mon service defaults to 5 daemons, but with only 3 hosts available only 3 are running (the desired count can be changed). PORTS shows the IP and port each service exposes.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094                    1/1  4m ago     3d   count:1
crash                                                     3/3  4m ago     3d   *
grafana                    ?:3000                         1/1  4m ago     3d   count:1
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  4m ago     6h   count:3
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  4m ago     6h   count:3
iscsi.gw                                                  3/3  4m ago     7h   ceph1;ceph2;ceph3
mds.cephfs                                                3/3  4m ago     7h   count:3
mgr                                                       2/2  4m ago     3d   count:2
mon                                                       3/5  4m ago     3d   count:5
nfs.nfs                                                   3/3  4m ago     7h   count:3
node-exporter              ?:9100                         3/3  4m ago     3d   *
osd.all-available-devices                                9/12  4m ago     3d   *
prometheus                 ?:9095                         1/1  4m ago     3d   count:1
rbd-mirror                                                3/3  4m ago     7h   count:3
rgw.rgw                    ?:80                           3/3  4m ago     7h   count:3

Note

To change the default number of mons, run "ceph orch apply mon 3". To pin mons to specific hosts and disable automatic placement, run "ceph orch apply mon --unmanaged" and then add the daemons manually, as sketched below.
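A minimal sketch of the manual flow, assuming the host/IP pairs used in this article (the host:ip form follows the upstream "ceph orch daemon add" documentation):

# ceph orch apply mon --unmanaged
# ceph orch daemon add mon ceph2:192.168.149.145
# ceph orch daemon add mon ceph3:192.168.149.146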

Cephadm requires the following software to be present on the system.

No.  Type    Version
1    OS      CentOS 8.3 (minimal)
2    Podman  3.0.2-dev
3    Ceph    Pacific (16.2.5)
4    Python  3.6.8

No.  Hostname  Disks (20 GB each)  Roles
1    ceph1     sdb, sdd, sdc       cephadm, mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress
2    ceph2     sdb, sdd, sdc       mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress
3    ceph3     sdb, sdd, sdc       mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress

lvm2 ships with the base system, so it does not need to be installed separately.

# dnf install epel-release -y
# dnf install python3 -y
# dnf install podman -y
# dnf install -y chrony
# systemctl start chronyd && systemctl enable chronyd

Note

The chrony time service is mandatory, for two reasons: first, without it, adding hosts later fails with an error; second, even when installation succeeds, ceph -s will keep warning that clocks are out of sync!

# systemctl start chronyd && systemctl enable chronyd
# systemctl disable firewalld && systemctl stop firewalld
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

(Optional) If you already set short hostnames during OS installation, skip this step.

# hostnamectl set-hostname ceph1
# hostnamectl set-hostname ceph2
# hostnamectl set-hostname ceph3

Note

cephadm requires short hostnames, not FQDNs; otherwise adding hosts fails with an error!

Add the hostname-to-IP mappings to the hosts file; the hostnames must match those set above.

# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.149.128 ceph1
192.168.149.145 ceph2
192.168.149.146 ceph3

Configure time synchronization; ceph1 acts as the NTP server.

# vi /etc/chrony.conf
allow 192.168.123.0/24
# systemctl restart chronyd

Configure the other nodes, ceph2 and ceph3, as NTP clients.

# vi /etc/chrony.conf
server ceph-admin iburst
# systemctl restart chronyd

On ceph2 and ceph3, confirm the configuration works.

# chronyc sources
210 Number of sources = 4
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^* ceph-admin                      3    9     0    68m    +39us[  +45us] +/-   18ms
^- time.cloudflare.com             3   10    35   249m  +5653us[+5653us] +/-   71ms
^? de-user.deepinid.deepin.>       3   10     2   121m  -2286us[-2286us] +/-   97ms
^? 2402:f000:1:416:101:6:6:>       0    6     0      -     +0ns[   +0ns] +/-    0ns

5. Deploy Ceph

5.1 Download the cephadm script

Download the cephadm script and add the package repository for the corresponding release.

# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
# chmod +x cephadm
# ./cephadm add-repo --release pacific
# ./cephadm install
# ./cephadm install ceph-common

Note

The official documentation also mentions installing cephadm with "dnf install -y cephadm". In practice it is better to avoid that route: the packaged cephadm may not be the latest version, while the container images cephadm pulls are the latest, so the two versions can end up mismatched!
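One way to spot such a mismatch (once the cluster is bootstrapped) is to compare the version reported by the cephadm script with the version the daemons are actually running:

# ./cephadm version
# ceph version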

Check that the orchestrator backend is cephadm; if rook were in use, the backend shown here would be rook.

[root@ceph1 ~]# ceph orch status
Backend: cephadm
Available: Yes
Paused: No

5.2 Bootstrap a single-node Ceph cluster

Now bootstrap a single-node Ceph cluster with cephadm; the process is illustrated below:

[root@ceph1 ~]# cephadm bootstrap --mon-ip 192.168.149.128
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 36e7a21c-e3f7-11eb-8960-000c299df6ef
Verifying IP 192.168.149.128 port 3300 ...
Verifying IP 192.168.149.128 port 6789 ...
Mon IP `192.168.149.128` is in CIDR network `192.168.149.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image docker.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.149.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 13...
mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

        URL: https://ceph1:8443/
        User: admin
        Password: dqhiov5x4v

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 36e7a21c-e3f7-11eb-8960-000c299df6ef -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.

Bootstrapping the single-node cluster performs the following steps:

When it completes, record the URL, user, and password printed above. Open the Ceph Dashboard, change the password when prompted, and you will then be asked to activate the telemetry module.

(Optional) If you forgot to record the password, you can reset it as follows (write the new password into a file named password and import it with the command below):

# ceph dashboard ac-user-set-password admin -i password
{"username": "admin", "password": "$2b$12$6oFrEpssXCzLnKTWQy5fM.YZwlHjn8CuQRdeSSJR9hBGgVuwGCxoa", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1620495653, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}


If you skipped enabling telemetry in the Ceph Dashboard, you can still enable it from the command line with "ceph telemetry on --license sharing-1-0".

5.3 Add hosts

As mentioned in the bootstrap output, after a successful bootstrap the program writes a copy of the cluster's public key to /etc/ceph/ceph.pub. Before adding nodes, distribute this key to every host that will join the cluster, as shown below:

[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph2 (192.168.149.145)' can't be established.
ECDSA key fingerprint is SHA256:1fioQmugbtBCiRuwNNKr/aa3Z/hm5zeqUrIfOZi2nS8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
root@ceph2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph2'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph3 (192.168.149.146)' can't be established.
ECDSA key fingerprint is SHA256:eBmb4q2ptVYS55njTzmQYCNo4p3yguNi85nHyAuR4XU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
root@ceph3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph3'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph1 ~]# ceph orch host add ceph2 192.168.149.145
Added host 'ceph2' with addr '192.168.149.145'
[root@ceph1 ~]# ceph orch host add ceph3 192.168.149.146
Added host 'ceph3' with addr '192.168.149.146'

Note that adding a host sometimes fails when the IP address is omitted. Some articles online leave the IP out, but in some setups that raises an error, so include it; the official documentation also includes the IP.

[root@ceph1 ~]# ceph orch host ls
HOST   ADDR             LABELS  STATUS
ceph1  192.168.149.128  _admin
ceph2  192.168.149.145
ceph3  192.168.149.146

5.4 Add OSDs

A storage device is considered available for an OSD only if it meets all of the following conditions:

There are two ways to add OSDs. The first is to automatically consume every eligible device:

# ceph orch apply osd --all-available-devices

The second is to specify devices by hand:

# ceph orch daemon add osd ceph1:/dev/sdb
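For finer control there is also a declarative OSD service specification (drive groups). A minimal sketch, assuming you want only /dev/sdb consumed on every host; the service_id name is arbitrary:

# cat osd-spec.yaml
service_type: osd
service_id: osd_sdb_only
placement:
  host_pattern: '*'
spec:
  data_devices:
    paths:
      - /dev/sdb
# ceph orch apply -i osd-spec.yaml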


This article uses the first, automatic method. After deployment, list the devices again: once Available shows No, the device has been consumed as an OSD.

[root@ceph1 ~]# ceph orch device ls
Hostname  Path      Type  Serial  Size   Health   Ident  Fault  Available
ceph1     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No
ceph1     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No
ceph1     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No
ceph2     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No
ceph2     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No
ceph2     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No
ceph3     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No
ceph3     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No
ceph3     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No

5.5 View the deployed Ceph services

The command line shows the Ceph cluster is healthy.

[root@ceph1 ~]# ceph -s
  cluster:
    id:     36e7a21c-e3f7-11eb-8960-000c299df6ef
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 8s)
    mgr: ceph1.nwbihh (active, since 3d), standbys: ceph2.ednijf
    osd: 9 osds: 9 up (since 3s), 9 in (since 3d)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   48 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

Now open the dashboard: the monitoring graphs do not display.

The cause is that Grafana serves HTTPS without a trusted certificate, so the browser blocks the embedded graphs.

The fix is simply to open the Grafana address directly in the browser, click "Accept the Risk and Continue", then return to the Ceph Dashboard; the graphs now render normally.

Notice that the alerting module is integrated automatically as well; none of the convoluted manual integration steps required by the old ceph-deploy installs.

Next we deploy and integrate the remaining services one by one, starting with RGW.

# ceph orch apply rgw rgw --placement=3


Check the service state with the service-level command, ceph orch ls.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094      1/1  10m ago    3d   count:1
crash                                       3/3  10m ago    3d   *
grafana                    ?:3000           1/1  10m ago    3d   count:1
mgr                                         2/2  10m ago    3d   count:2
mon                                         3/5  10m ago    3d   count:5
node-exporter              ?:9100           3/3  10m ago    3d   *
osd.all-available-devices                  9/12  10m ago    3d   *
prometheus                 ?:9095           1/1  10m ago    3d   count:1
rgw.rgw                    ?:80             3/3  1s ago     12s  count:3

Check the daemon state with the daemon-level command, ceph orch ps.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (3d)  30s ago  3d  25.4M  -  0.20.0  0881eb8f169f  32812d14049d
crash.ceph1  ceph1  running (3d)  30s ago  3d  2675k  -  16.2.5  6933c2a0b7dd  4e28e82a2c92
crash.ceph2  ceph2  running (3d)  34s ago  3d  12.3M  -  16.2.5  6933c2a0b7dd  45e02925199a
crash.ceph3  ceph3  running (3d)  34s ago  3d  12.9M  -  16.2.5  6933c2a0b7dd  e98dc6157ba2
grafana.ceph1  ceph1  *:3000  running (3d)  30s ago  3d  54.0M  -  6.7.4  ae5c36c3d3cd  df3a2a73271c
mgr.ceph1.nwbihh  ceph1  *:9283  running (3d)  30s ago  3d  428M  -  16.2.5  6933c2a0b7dd  8210247b8cef
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (3d)  34s ago  3d  399M  -  16.2.5  6933c2a0b7dd  cf556f5d2527
mon.ceph1  ceph1  running (3d)  30s ago  3d  475M  2048M  16.2.5  6933c2a0b7dd  eef3a01cca72
mon.ceph2  ceph2  running (3d)  34s ago  3d  293M  2048M  16.2.5  6933c2a0b7dd  4130307d82f2
mon.ceph3  ceph3  running (3d)  34s ago  3d  261M  2048M  16.2.5  6933c2a0b7dd  fa0cdd7af8a9
node-exporter.ceph1  ceph1  *:9100  running (3d)  30s ago  3d  14.9M  -  0.18.1  e5a616e4b9cf  3fc396d01969
node-exporter.ceph2  ceph2  *:9100  running (3d)  34s ago  3d  11.3M  -  0.18.1  e5a616e4b9cf  2b0081864a94
node-exporter.ceph3  ceph3  *:9100  running (3d)  34s ago  3d  11.2M  -  0.18.1  e5a616e4b9cf  73a7bbe0831c
osd.0  ceph2  running (3d)  34s ago  3d  41.2M  4096M  16.2.5  6933c2a0b7dd  2046f2cc358e
osd.1  ceph3  running (3d)  34s ago  3d  45.4M  4096M  16.2.5  6933c2a0b7dd  09058507fe6e
osd.2  ceph1  running (3d)  30s ago  3d  33.0M  4096M  16.2.5  6933c2a0b7dd  80d58366f0dc
osd.3  ceph2  running (3d)  34s ago  3d  42.5M  4096M  16.2.5  6933c2a0b7dd  63654f9d8082
osd.4  ceph3  running (3d)  34s ago  3d  43.3M  4096M  16.2.5  6933c2a0b7dd  d3a82429878d
osd.5  ceph1  running (3d)  30s ago  3d  31.3M  4096M  16.2.5  6933c2a0b7dd  ebfc2a71bc3c
osd.6  ceph2  running (3d)  34s ago  3d  38.8M  4096M  16.2.5  6933c2a0b7dd  235189a4bd54
osd.7  ceph3  running (3d)  34s ago  3d  44.3M  4096M  16.2.5  6933c2a0b7dd  b7f8a457a3b1
osd.8  ceph1  running (3d)  30s ago  3d  33.2M  4096M  16.2.5  6933c2a0b7dd  eb1bf3e567fd
prometheus.ceph1  ceph1  *:9095  running (3d)  30s ago  3d  81.5M  -  2.18.1  de242295e225  4da5a8d98259
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (43s)  30s ago  42s  48.0M  -  16.2.5  6933c2a0b7dd  20fb488d35ad
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (40s)  34s ago  40s  40.1M  -  16.2.5  6933c2a0b7dd  e9c9538b064e
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (37s)  34s ago  37s  41.8M  -  16.2.5  6933c2a0b7dd  75e93ef56bb7

Integrating RGW into the dashboard.

# radosgw-admin user create --uid=rgw --display-name=rgw --system
    "keys": [
        {
            "user": "rgw",
            "access_key": "M0XRR80H4AGGE4PP0A5B",
            "secret_key": "Tbln48sfIceDGNill5muCrX0oMCHrQcl2oC9OURe"
        }
    ],
......

Record the access_key and secret_key values, save them to access_key.txt and secret_key.txt, and feed them to the dashboard with the following commands.

# ceph dashboard set-rgw-api-access-key -i access_key.txt
Option RGW_API_ACCESS_KEY updated
# ceph dashboard set-rgw-api-secret-key -i secret_key.txt
Option RGW_API_SECRET_KEY updated

The Ceph Dashboard now shows RGW integrated successfully. (Per the official docs, future Cephadm releases will wire RGW into the Ceph Dashboard automatically instead of requiring this manual step.)
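As a quick smoke test of the gateways themselves, an anonymous HTTP request to any RGW node should come back with an XML ListAllMyBucketsResult document (a sketch; the host and port follow the deployment above):

# curl http://ceph1:80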

Deploy the cephfs service and create a CephFS. There are two ways to create one: the ceph fs volume command, which creates the required pools automatically, or creating the pools by hand and then the service. Use either of the two blocks below.

# ceph fs volume create cephfs --placement=3

# ceph osd pool create cephfs_data 32
# ceph osd pool create cephfs_metadata 32
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph orch apply mds cephfs --placement=3


[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094      1/1  1s ago     3d   count:1
crash                                       3/3  5s ago     3d   *
grafana                    ?:3000           1/1  1s ago     3d   count:1
mds.cephfs                                  3/3  5s ago     15s  count:3
mgr                                         2/2  5s ago     3d   count:2
mon                                         3/5  5s ago     3d   count:5
node-exporter              ?:9100           3/3  5s ago     3d   *
osd.all-available-devices                  9/12  5s ago     3d   *
prometheus                 ?:9095           1/1  1s ago     3d   count:1
rgw.rgw                    ?:80             3/3  5s ago     21m  count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (7m)  32s ago  3d  31.9M  -  0.20.0  0881eb8f169f  66918f633189
crash.ceph1  ceph1  running (7m)  32s ago  3d  6287k  -  16.2.5  6933c2a0b7dd  dbe00b18ef37
crash.ceph2  ceph2  running (7m)  36s ago  3d  7889k  -  16.2.5  6933c2a0b7dd  c24dcd762121
crash.ceph3  ceph3  running (7m)  36s ago  3d  10.1M  -  16.2.5  6933c2a0b7dd  eed1c74352f0
grafana.ceph1  ceph1  *:3000  running (7m)  32s ago  3d  70.5M  -  6.7.4  ae5c36c3d3cd  cbbfae0f90e4
mds.cephfs.ceph1.ooraiz  ceph1  running (44s)  32s ago  43s  24.0M  -  16.2.5  6933c2a0b7dd  f668ab1ad9bb
mds.cephfs.ceph2.qfmprj  ceph2  running (41s)  36s ago  41s  19.5M  -  16.2.5  6933c2a0b7dd  d8bb9979aca2
mds.cephfs.ceph3.ifskba  ceph3  running (39s)  36s ago  39s  19.8M  -  16.2.5  6933c2a0b7dd  d0fd9a78c1d8
mgr.ceph1.nwbihh  ceph1  *:9283  running (7m)  32s ago  3d  480M  -  16.2.5  6933c2a0b7dd  90bd2d0704b3
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (7m)  36s ago  3d  468M  -  16.2.5  6933c2a0b7dd  9b69c446355f
mon.ceph1  ceph1  running (7m)  32s ago  3d  139M  2048M  16.2.5  6933c2a0b7dd  736fd1630a06
mon.ceph2  ceph2  running (7m)  36s ago  3d  108M  2048M  16.2.5  6933c2a0b7dd  629bc53c8992
mon.ceph3  ceph3  running (7m)  36s ago  3d  125M  2048M  16.2.5  6933c2a0b7dd  1b56b8a0d9ac
node-exporter.ceph1  ceph1  *:9100  running (7m)  32s ago  3d  23.3M  -  0.18.1  e5a616e4b9cf  3664470b1641
node-exporter.ceph2  ceph2  *:9100  running (7m)  36s ago  3d  24.9M  -  0.18.1  e5a616e4b9cf  140f7e84ba38
node-exporter.ceph3  ceph3  *:9100  running (7m)  36s ago  3d  24.8M  -  0.18.1  e5a616e4b9cf  de98304ceaea
osd.0  ceph2  running (7m)  36s ago  3d  59.2M  4096M  16.2.5  6933c2a0b7dd  6ebce33417c2
osd.1  ceph3  running (7m)  36s ago  3d  82.7M  4096M  16.2.5  6933c2a0b7dd  6d43f1b7bfde
osd.2  ceph1  running (7m)  32s ago  3d  56.0M  4096M  16.2.5  6933c2a0b7dd  07d1183306d1
osd.3  ceph2  running (7m)  36s ago  3d  67.5M  4096M  16.2.5  6933c2a0b7dd  a77d771d3e6d
osd.4  ceph3  running (7m)  36s ago  3d  48.7M  4096M  16.2.5  6933c2a0b7dd  3d82752a8fb1
osd.5  ceph1  running (7m)  32s ago  3d  61.6M  4096M  16.2.5  6933c2a0b7dd  6f7b5df090d5
osd.6  ceph2  running (7m)  36s ago  3d  59.2M  4096M  16.2.5  6933c2a0b7dd  7f769655adc7
osd.7  ceph3  running (7m)  36s ago  3d  51.9M  4096M  16.2.5  6933c2a0b7dd  24d63cd18dde
osd.8  ceph1  running (7m)  32s ago  3d  44.8M  4096M  16.2.5  6933c2a0b7dd  667fb8831e53
prometheus.ceph1  ceph1  *:9095  running (7m)  32s ago  3d  97.0M  -  2.18.1  de242295e225  f67fbc035cba
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (7m)  32s ago  22m  71.1M  -  16.2.5  6933c2a0b7dd  c8e1c9701010
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (7m)  36s ago  22m  82.9M  -  16.2.5  6933c2a0b7dd  d8ba326c22b8
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (7m)  36s ago  22m  81.4M  -  16.2.5  6933c2a0b7dd  d89c2b87e8e4

Check the state in the Ceph Dashboard.
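To verify the filesystem from a client, here is a kernel-mount sketch, assuming the client.admin key and the kernel ceph module are available on the client machine:

# mkdir -p /mnt/cephfs
# mount -t ceph 192.168.149.128:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)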

Next, deploy NFS. First create a data pool for ganesha and enable the nfs application on it:

# ceph osd pool create ganesha_data 32
# ceph osd pool application enable ganesha_data nfs


# ceph orch apply nfs nfs ganesha_data --placement=3

Check the service state.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094      1/1  3s ago     3d   count:1
crash                                       3/3  8s ago     3d   *
grafana                    ?:3000           1/1  3s ago     3d   count:1
mds.cephfs                                  3/3  8s ago     3m   count:3
mgr                                         2/2  8s ago     3d   count:2
mon                                         3/5  8s ago     3d   count:5
nfs.nfs                                     3/3  8s ago     28s  count:3
node-exporter              ?:9100           3/3  8s ago     3d   *
osd.all-available-devices                  9/12  8s ago     3d   *
prometheus                 ?:9095           1/1  3s ago     3d   count:1
rgw.rgw                    ?:80             3/3  8s ago     25m  count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (11m)  27s ago  3d  28.9M  -  0.20.0  0881eb8f169f  66918f633189
crash.ceph1  ceph1  running (11m)  27s ago  3d  5628k  -  16.2.5  6933c2a0b7dd  dbe00b18ef37
crash.ceph2  ceph2  running (10m)  33s ago  3d  7889k  -  16.2.5  6933c2a0b7dd  c24dcd762121
crash.ceph3  ceph3  running (10m)  33s ago  3d  10.1M  -  16.2.5  6933c2a0b7dd  eed1c74352f0
grafana.ceph1  ceph1  *:3000  running (11m)  27s ago  3d  70.6M  -  6.7.4  ae5c36c3d3cd  cbbfae0f90e4
mds.cephfs.ceph1.ooraiz  ceph1  running (3m)  27s ago  3m  26.6M  -  16.2.5  6933c2a0b7dd  f668ab1ad9bb
mds.cephfs.ceph2.qfmprj  ceph2  running (3m)  33s ago  3m  21.4M  -  16.2.5  6933c2a0b7dd  d8bb9979aca2
mds.cephfs.ceph3.ifskba  ceph3  running (3m)  33s ago  3m  21.7M  -  16.2.5  6933c2a0b7dd  d0fd9a78c1d8
mgr.ceph1.nwbihh  ceph1  *:9283  running (11m)  27s ago  3d  489M  -  16.2.5  6933c2a0b7dd  90bd2d0704b3
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (10m)  33s ago  3d  468M  -  16.2.5  6933c2a0b7dd  9b69c446355f
mon.ceph1  ceph1  running (11m)  27s ago  3d  126M  2048M  16.2.5  6933c2a0b7dd  736fd1630a06
mon.ceph2  ceph2  running (10m)  33s ago  3d  119M  2048M  16.2.5  6933c2a0b7dd  629bc53c8992
mon.ceph3  ceph3  running (10m)  33s ago  3d  135M  2048M  16.2.5  6933c2a0b7dd  1b56b8a0d9ac
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (47s)  27s ago  47s  50.5M  -  3.5  6933c2a0b7dd  8af0f7d10e2b
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (42s)  33s ago  41s  27.4M  -  3.5  6933c2a0b7dd  d2e2238859e8
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (36s)  33s ago  36s  27.2M  -  3.5  6933c2a0b7dd  31e148631f9f
node-exporter.ceph1  ceph1  *:9100  running (11m)  27s ago  3d  22.6M  -  0.18.1  e5a616e4b9cf  3664470b1641
node-exporter.ceph2  ceph2  *:9100  running (11m)  33s ago  3d  24.9M  -  0.18.1  e5a616e4b9cf  140f7e84ba38
node-exporter.ceph3  ceph3  *:9100  running (10m)  33s ago  3d  25.0M  -  0.18.1  e5a616e4b9cf  de98304ceaea
osd.0  ceph2  running (10m)  33s ago  3d  62.4M  4096M  16.2.5  6933c2a0b7dd  6ebce33417c2
osd.1  ceph3  running (10m)  33s ago  3d  85.9M  4096M  16.2.5  6933c2a0b7dd  6d43f1b7bfde
osd.2  ceph1  running (10m)  27s ago  3d  52.4M  4096M  16.2.5  6933c2a0b7dd  07d1183306d1
osd.3  ceph2  running (10m)  33s ago  3d  71.3M  4096M  16.2.5  6933c2a0b7dd  a77d771d3e6d
osd.4  ceph3  running (10m)  33s ago  3d  52.1M  4096M  16.2.5  6933c2a0b7dd  3d82752a8fb1
osd.5  ceph1  running (10m)  27s ago  3d  56.3M  4096M  16.2.5  6933c2a0b7dd  6f7b5df090d5
osd.6  ceph2  running (10m)  33s ago  3d  61.5M  4096M  16.2.5  6933c2a0b7dd  7f769655adc7
osd.7  ceph3  running (10m)  33s ago  3d  54.7M  4096M  16.2.5  6933c2a0b7dd  24d63cd18dde
osd.8  ceph1  running (10m)  27s ago  3d  46.7M  4096M  16.2.5  6933c2a0b7dd  667fb8831e53
prometheus.ceph1  ceph1  *:9095  running (11m)  27s ago  3d  97.2M  -  2.18.1  de242295e225  f67fbc035cba
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (11m)  27s ago  25m  62.2M  -  16.2.5  6933c2a0b7dd  c8e1c9701010
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (11m)  33s ago  25m  83.3M  -  16.2.5  6933c2a0b7dd  d8ba326c22b8
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (10m)  33s ago  25m  81.9M  -  16.2.5  6933c2a0b7dd  d89c2b87e8e4

Check the state in the Ceph Dashboard.
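The NFS cluster itself does not export anything yet. A rough sketch of creating an export for the CephFS created earlier and mounting it from a client; note that the ceph nfs export syntax changed between Pacific point releases, so check "ceph nfs export create -h" on your version before copying this:

# ceph nfs export create cephfs cephfs nfs /cephfs
# mount -t nfs -o nfsvers=4.1,port=2049 ceph1:/cephfs /mnt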

Next, deploy iSCSI. First create its pool:

# ceph osd pool create iscsi_pool 32 32
# ceph osd pool application enable iscsi_pool iscsi

The services so far passed their placement specs on the command line. Because iSCSI takes quite a few parameters, we switch to a YAML spec here (iSCSI can of course also be deployed from the command line).

# vi iscsi.yaml
service_type: iscsi
service_id: gw
placement:
  hosts:
    - ceph1
    - ceph2
    - ceph3
spec:
  pool: iscsi_pool
  trusted_ip_list: "192.168.149.128,192.168.149.145,192.168.149.146"
  api_user: admin
  api_password: admin
  api_secure: false

Deploy it with the apply command. Anyone familiar with Kubernetes will recognize apply: cephadm is declarative in the same way, so to change a parameter you just edit the YAML file and apply it again.

[root@ceph1 ~]# ceph orch apply -i iscsi.yaml
Scheduled iscsi.gw update...


[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094      1/1  5s ago     3d   count:1
crash                                       3/3  9s ago     3d   *
grafana                    ?:3000           1/1  5s ago     3d   count:1
iscsi.gw                                    3/3  9s ago     21s  ceph1;ceph2;ceph3
mds.cephfs                                  3/3  9s ago     8m   count:3
mgr                                         2/2  9s ago     3d   count:2
mon                                         3/5  9s ago     3d   count:5
nfs.nfs                                     3/3  9s ago     5m   count:3
node-exporter              ?:9100           3/3  9s ago     3d   *
osd.all-available-devices                  9/12  9s ago     3d   *
prometheus                 ?:9095           1/1  5s ago     3d   count:1
rgw.rgw                    ?:80             3/3  9s ago     30m  count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (15m)  28s ago  3d  23.6M  -  0.20.0  0881eb8f169f  66918f633189
crash.ceph1  ceph1  running (15m)  28s ago  3d  4072k  -  16.2.5  6933c2a0b7dd  dbe00b18ef37
crash.ceph2  ceph2  running (15m)  32s ago  3d  7470k  -  16.2.5  6933c2a0b7dd  c24dcd762121
crash.ceph3  ceph3  running (15m)  32s ago  3d  10.1M  -  16.2.5  6933c2a0b7dd  eed1c74352f0
grafana.ceph1  ceph1  *:3000  running (15m)  28s ago  3d  47.3M  -  6.7.4  ae5c36c3d3cd  cbbfae0f90e4
iscsi.gw.ceph1.esngze  ceph1  running (42s)  28s ago  41s  64.2M  -  3.5  6933c2a0b7dd  96f1250ba7f1
iscsi.gw.ceph2.ypjkrx  ceph2  running (39s)  32s ago  39s  61.9M  -  3.5  6933c2a0b7dd  9cc090aa85ec
iscsi.gw.ceph3.hntxjs  ceph3  running (36s)  32s ago  36s  28.2M  -  3.5  6933c2a0b7dd  d9a6906671a4
mds.cephfs.ceph1.ooraiz  ceph1  running (8m)  28s ago  8m  22.1M  -  16.2.5  6933c2a0b7dd  f668ab1ad9bb
mds.cephfs.ceph2.qfmprj  ceph2  running (8m)  32s ago  8m  20.3M  -  16.2.5  6933c2a0b7dd  d8bb9979aca2
mds.cephfs.ceph3.ifskba  ceph3  running (8m)  32s ago  8m  22.9M  -  16.2.5  6933c2a0b7dd  d0fd9a78c1d8
mgr.ceph1.nwbihh  ceph1  *:9283  running (15m)  28s ago  3d  465M  -  16.2.5  6933c2a0b7dd  90bd2d0704b3
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (15m)  32s ago  3d  434M  -  16.2.5  6933c2a0b7dd  9b69c446355f
mon.ceph1  ceph1  running (15m)  28s ago  3d  128M  2048M  16.2.5  6933c2a0b7dd  736fd1630a06
mon.ceph2  ceph2  running (15m)  32s ago  3d  115M  2048M  16.2.5  6933c2a0b7dd  629bc53c8992
mon.ceph3  ceph3  running (15m)  32s ago  3d  161M  2048M  16.2.5  6933c2a0b7dd  1b56b8a0d9ac
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (5m)  28s ago  5m  70.3M  -  3.5  6933c2a0b7dd  8af0f7d10e2b
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (5m)  32s ago  5m  77.7M  -  3.5  6933c2a0b7dd  d2e2238859e8
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (5m)  32s ago  5m  80.7M  -  3.5  6933c2a0b7dd  31e148631f9f
node-exporter.ceph1  ceph1  *:9100  running (15m)  28s ago  3d  16.0M  -  0.18.1  e5a616e4b9cf  3664470b1641
node-exporter.ceph2  ceph2  *:9100  running (15m)  32s ago  3d  23.5M  -  0.18.1  e5a616e4b9cf  140f7e84ba38
node-exporter.ceph3  ceph3  *:9100  running (15m)  32s ago  3d  27.5M  -  0.18.1  e5a616e4b9cf  de98304ceaea
osd.0  ceph2  running (15m)  32s ago  3d  60.7M  4096M  16.2.5  6933c2a0b7dd  6ebce33417c2
osd.1  ceph3  running (15m)  32s ago  3d  90.3M  4096M  16.2.5  6933c2a0b7dd  6d43f1b7bfde
osd.2  ceph1  running (15m)  28s ago  3d  49.1M  4096M  16.2.5  6933c2a0b7dd  07d1183306d1
osd.3  ceph2  running (15m)  32s ago  3d  71.8M  4096M  16.2.5  6933c2a0b7dd  a77d771d3e6d
osd.4  ceph3  running (15m)  32s ago  3d  55.8M  4096M  16.2.5  6933c2a0b7dd  3d82752a8fb1
osd.5  ceph1  running (15m)  28s ago  3d  51.5M  4096M  16.2.5  6933c2a0b7dd  6f7b5df090d5
osd.6  ceph2  running (15m)  32s ago  3d  62.1M  4096M  16.2.5  6933c2a0b7dd  7f769655adc7
osd.7  ceph3  running (15m)  32s ago  3d  57.4M  4096M  16.2.5  6933c2a0b7dd  24d63cd18dde
osd.8  ceph1  running (15m)  28s ago  3d  49.2M  4096M  16.2.5  6933c2a0b7dd  667fb8831e53
prometheus.ceph1  ceph1  *:9095  running (15m)  28s ago  3d  80.4M  -  2.18.1  de242295e225  f67fbc035cba
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (15m)  28s ago  30m  55.8M  -  16.2.5  6933c2a0b7dd  c8e1c9701010
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (15m)  32s ago  30m  74.9M  -  16.2.5  6933c2a0b7dd  d8ba326c22b8
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (15m)  32s ago  30m  83.8M  -  16.2.5  6933c2a0b7dd  d89c2b87e8e4

Check the state in the Ceph Dashboard.
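To poke at the gateways directly, you can exec into one of the iscsi containers and run gwcli (a sketch; the name filter is an assumption, so check podman ps for the actual container name first):

# podman ps -f name=iscsi --format "{{.Names}}"
# podman exec -it <iscsi-container-name> gwcli ls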

Next, deploy rbd-mirror:

[root@ceph-node1 ~]# ceph orch apply rbd-mirror --placement=3
Scheduled rbd-mirror update...


[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094      1/1  9s ago     3d   count:1
crash                                       3/3  12s ago    3d   *
grafana                    ?:3000           1/1  9s ago     3d   count:1
iscsi.gw                                    3/3  12s ago    3m   ceph1;ceph2;ceph3
mds.cephfs                                  3/3  12s ago    11m  count:3
mgr                                         2/2  12s ago    3d   count:2
mon                                         3/5  12s ago    3d   count:5
nfs.nfs                                     3/3  12s ago    8m   count:3
node-exporter              ?:9100           3/3  12s ago    3d   *
osd.all-available-devices                  9/12  12s ago    3d   *
prometheus                 ?:9095           1/1  9s ago     3d   count:1
rbd-mirror                                  3/3  12s ago    23s  count:3
rgw.rgw                    ?:80             3/3  12s ago    32m  count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (18m)  34s ago  3d  24.5M  -  0.20.0  0881eb8f169f  66918f633189
crash.ceph1  ceph1  running (18m)  34s ago  3d  3309k  -  16.2.5  6933c2a0b7dd  dbe00b18ef37
crash.ceph2  ceph2  running (18m)  37s ago  3d  11.6M  -  16.2.5  6933c2a0b7dd  c24dcd762121
crash.ceph3  ceph3  running (18m)  37s ago  3d  10.1M  -  16.2.5  6933c2a0b7dd  eed1c74352f0
grafana.ceph1  ceph1  *:3000  running (18m)  34s ago  3d  60.2M  -  6.7.4  ae5c36c3d3cd  cbbfae0f90e4
iscsi.gw.ceph1.esngze  ceph1  running (3m)  34s ago  3m  53.1M  -  3.5  6933c2a0b7dd  96f1250ba7f1
iscsi.gw.ceph2.ypjkrx  ceph2  running (3m)  37s ago  3m  66.1M  -  3.5  6933c2a0b7dd  9cc090aa85ec
iscsi.gw.ceph3.hntxjs  ceph3  running (3m)  37s ago  3m  59.7M  -  3.5  6933c2a0b7dd  d9a6906671a4
mds.cephfs.ceph1.ooraiz  ceph1  running (11m)  34s ago  11m  18.4M  -  16.2.5  6933c2a0b7dd  f668ab1ad9bb
mds.cephfs.ceph2.qfmprj  ceph2  running (11m)  37s ago  11m  20.1M  -  16.2.5  6933c2a0b7dd  d8bb9979aca2
mds.cephfs.ceph3.ifskba  ceph3  running (11m)  37s ago  11m  23.4M  -  16.2.5  6933c2a0b7dd  d0fd9a78c1d8
mgr.ceph1.nwbihh  ceph1  *:9283  running (18m)  34s ago  3d  456M  -  16.2.5  6933c2a0b7dd  90bd2d0704b3
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (18m)  37s ago  3d  423M  -  16.2.5  6933c2a0b7dd  9b69c446355f
mon.ceph1  ceph1  running (18m)  34s ago  3d  129M  2048M  16.2.5  6933c2a0b7dd  736fd1630a06
mon.ceph2  ceph2  running (18m)  37s ago  3d  133M  2048M  16.2.5  6933c2a0b7dd  629bc53c8992
mon.ceph3  ceph3  running (18m)  37s ago  3d  158M  2048M  16.2.5  6933c2a0b7dd  1b56b8a0d9ac
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (8m)  34s ago  8m  71.9M  -  3.5  6933c2a0b7dd  8af0f7d10e2b
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (8m)  37s ago  8m  86.8M  -  3.5  6933c2a0b7dd  d2e2238859e8
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (8m)  37s ago  8m  80.9M  -  3.5  6933c2a0b7dd  31e148631f9f
node-exporter.ceph1  ceph1  *:9100  running (18m)  34s ago  3d  22.2M  -  0.18.1  e5a616e4b9cf  3664470b1641
node-exporter.ceph2  ceph2  *:9100  running (18m)  37s ago  3d  22.0M  -  0.18.1  e5a616e4b9cf  140f7e84ba38
node-exporter.ceph3  ceph3  *:9100  running (18m)  37s ago  3d  27.6M  -  0.18.1  e5a616e4b9cf  de98304ceaea
osd.0  ceph2  running (18m)  37s ago  3d  58.4M  4096M  16.2.5  6933c2a0b7dd  6ebce33417c2
osd.1  ceph3  running (18m)  37s ago  3d  91.3M  4096M  16.2.5  6933c2a0b7dd  6d43f1b7bfde
osd.2  ceph1  running (18m)  34s ago  3d  42.7M  4096M  16.2.5  6933c2a0b7dd  07d1183306d1
osd.3  ceph2  running (18m)  37s ago  3d  67.0M  4096M  16.2.5  6933c2a0b7dd  a77d771d3e6d
osd.4  ceph3  running (18m)  37s ago  3d  56.9M  4096M  16.2.5  6933c2a0b7dd  3d82752a8fb1
osd.5  ceph1  running (18m)  34s ago  3d  58.5M  4096M  16.2.5  6933c2a0b7dd  6f7b5df090d5
osd.6  ceph2  running (18m)  37s ago  3d  60.2M  4096M  16.2.5  6933c2a0b7dd  7f769655adc7
osd.7  ceph3  running (18m)  37s ago  3d  58.0M  4096M  16.2.5  6933c2a0b7dd  24d63cd18dde
osd.8  ceph1  running (18m)  34s ago  3d  47.8M  4096M  16.2.5  6933c2a0b7dd  667fb8831e53
prometheus.ceph1  ceph1  *:9095  running (18m)  34s ago  3d  95.1M  -  2.18.1  de242295e225  f67fbc035cba
rbd-mirror.ceph1.rkchvq  ceph1  running (46s)  34s ago  46s  32.6M  -  16.2.5  6933c2a0b7dd  46c56c1528f0
rbd-mirror.ceph2.zdhnvt  ceph2  running (44s)  37s ago  44s  33.7M  -  16.2.5  6933c2a0b7dd  de9df26682c7
rbd-mirror.ceph3.mssyuu  ceph3  running (42s)  37s ago  41s  29.9M  -  16.2.5  6933c2a0b7dd  679eabd8dd5c
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (18m)  34s ago  33m  50.1M  -  16.2.5  6933c2a0b7dd  c8e1c9701010
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (18m)  37s ago  33m  73.9M  -  16.2.5  6933c2a0b7dd  d8ba326c22b8
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (18m)  37s ago  33m  84.4M  -  16.2.5  6933c2a0b7dd  d89c2b87e8e4

Check the state in the Ceph Dashboard.
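Deploying rbd-mirror daemons only provides the machinery; mirroring still has to be enabled per pool and peered with a second cluster. A minimal sketch of the pool-side commands, assuming a pool named rbd and a second site to exchange the bootstrap token with:

# rbd mirror pool enable rbd pool
# rbd mirror pool peer bootstrap create --site-name site-a rbd > /root/bootstrap_token_site-a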

Next, deploy cephfs-mirror:

# ceph orch apply cephfs-mirror --placement=3
Scheduled cephfs-mirror update...


[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094                    1/1  22s ago    4d   count:1
cephfs-mirror                                             3/3  25s ago    52s  count:3
crash                                                     3/3  25s ago    4d   *
grafana                    ?:3000                         1/1  22s ago    4d   count:1
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  25s ago    7h   count:3
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  25s ago    7h   count:3
iscsi.gw                                                  3/3  25s ago    8h   ceph1;ceph2;ceph3
mds.cephfs                                                3/3  25s ago    8h   count:3
mgr                                                       2/2  25s ago    4d   count:2
mon                                                       3/5  25s ago    4d   count:5
nfs.nfs                                                   3/3  25s ago    8h   count:3
node-exporter              ?:9100                         3/3  25s ago    4d   *
osd.all-available-devices                                9/12  25s ago    3d   *
prometheus                 ?:9095                         1/1  22s ago    4d   count:1
rbd-mirror                                                3/3  25s ago    8h   count:3
rgw.rgw                    ?:80                           3/3  25s ago    8h   count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (7h)  61s ago  4d  29.5M  -  0.20.0  0881eb8f169f  b9640288c91f
cephfs-mirror.ceph1.cclxku  ceph1  running (86s)  61s ago  85s  26.1M  -  16.2.5  6933c2a0b7dd  88b6f8918aa8
cephfs-mirror.ceph2.ozwgqg  ceph2  running (116s)  64s ago  116s  30.8M  -  16.2.5  6933c2a0b7dd  dcd8c6dcc4f0
cephfs-mirror.ceph3.iefmko  ceph3  running (2m)  64s ago  2m  30.5M  -  16.2.5  6933c2a0b7dd  581ba61bad28
crash.ceph1  ceph1  running (7h)  61s ago  4d  1975k  -  16.2.5  6933c2a0b7dd  116ff5ce3646
crash.ceph2  ceph2  running (7h)  64s ago  3d  5448k  -  16.2.5  6933c2a0b7dd  354b8903892b
crash.ceph3  ceph3  running (7h)  64s ago  3d  5737k  -  16.2.5  6933c2a0b7dd  a5c223e5362c
grafana.ceph1  ceph1  *:3000  running (7h)  61s ago  4d  50.8M  -  6.7.4  ae5c36c3d3cd  d284162e0da4
haproxy.nfs.nfs.ceph1.bdmrwh  ceph1  *:2050,1968  running (7h)  61s ago  7h  1899k  -  2.3.12-b99e499  b2284eda2221  3d9c640d828e
haproxy.nfs.nfs.ceph2.ryaaaq  ceph2  *:2050,1968  running (7h)  64s ago  7h  1879k  -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a
haproxy.nfs.nfs.ceph3.ysfamh  ceph3  *:2050,1968  running (7h)  64s ago  7h  1644k  -  2.3.12-b99e499  b2284eda2221  0e48492c6233
haproxy.rgw.rgw.ceph1.nmvtig  ceph1  *:8080,1967  running (7h)  61s ago  7h  7574k  -  2.3.12-b99e499  b2284eda2221  5e521b875eba
haproxy.rgw.rgw.ceph2.bbcnso  ceph2  *:8080,1967  running (7h)  64s ago  7h  4512k  -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd
haproxy.rgw.rgw.ceph3.bcvvxl  ceph3  *:8080,1967  running (7h)  64s ago  7h  5775k  -  2.3.12-b99e499  b2284eda2221  535a37749437
iscsi.gw.ceph1.esngze  ceph1  running (7h)  61s ago  8h  23.8M  -  3.5  6933c2a0b7dd  04d1bfd141a9
iscsi.gw.ceph2.ypjkrx  ceph2  running (7h)  64s ago  8h  21.2M  -  3.5  6933c2a0b7dd  7f78b9c0deb0
iscsi.gw.ceph3.hntxjs  ceph3  running (7h)  64s ago  8h  28.5M  -  3.5  6933c2a0b7dd  2fe9bb0b7bfe
keepalived.nfs.nfs.ceph1.jprcvw  ceph1  running (7h)  61s ago  7h  3862k  -  2.0.5  073e0c3cd1b9  8e337e634255
keepalived.nfs.nfs.ceph2.oiynik  ceph2  running (7h)  64s ago  7h  3493k  -  2.0.5  073e0c3cd1b9  a6d10588190e
keepalived.nfs.nfs.ceph3.guulfc  ceph3  running (7h)  64s ago  7h  1853k  -  2.0.5  073e0c3cd1b9  23680bf04f55
keepalived.rgw.rgw.ceph1.keriqf  ceph1  running (7h)  61s ago  7h  2902k  -  2.0.5  073e0c3cd1b9  a11d221ae1af
keepalived.rgw.rgw.ceph2.gxshhg  ceph2  running (7h)  64s ago  7h  2461k  -  2.0.5  073e0c3cd1b9  59026981ef10
keepalived.rgw.rgw.ceph3.tqaixq  ceph3  running (7h)  64s ago  7h  4294k  -  2.0.5  073e0c3cd1b9  25cfdab23bfe
mds.cephfs.ceph1.ooraiz  ceph1  running (7h)  61s ago  8h  14.6M  -  16.2.5  6933c2a0b7dd  64246a825e21
mds.cephfs.ceph2.qfmprj  ceph2  running (7h)  64s ago  8h  9323k  -  16.2.5  6933c2a0b7dd  f0faa5b9a7d5
mds.cephfs.ceph3.ifskba  ceph3  running (7h)  64s ago  8h  9072k  -  16.2.5  6933c2a0b7dd  7f294af518f7
mgr.ceph1.nwbihh  ceph1  *:9283  running (7h)  61s ago  4d  161M  -  16.2.5  6933c2a0b7dd  f7b2470c5797
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (7h)  64s ago  3d  28.2M  -  16.2.5  6933c2a0b7dd  e24b40d99e6d
mon.ceph1  ceph1  running (7h)  61s ago  4d  483M  2048M  16.2.5  6933c2a0b7dd  9936a8b6587b
mon.ceph2  ceph2  running (7h)  64s ago  3d  878M  2048M  16.2.5  6933c2a0b7dd  abe7eed2d100
mon.ceph3  ceph3  running (7h)  64s ago  3d  881M  2048M  16.2.5  6933c2a0b7dd  52d747e3d011
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (7h)  61s ago  8h  25.4M  -  3.5  6933c2a0b7dd  fff5327b8415
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (7h)  64s ago  8h  33.0M  -  3.5  6933c2a0b7dd  ea09e2429950
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (7h)  64s ago  8h  41.8M  -  3.5  6933c2a0b7dd  dcc6fd5aa85f
node-exporter.ceph1  ceph1  *:9100  running (7h)  61s ago  4d  13.8M  -  0.18.1  e5a616e4b9cf  69e5d1f3e310
node-exporter.ceph2  ceph2  *:9100  running (7h)  64s ago  3d  15.8M  -  0.18.1  e5a616e4b9cf  cd41893212ca
node-exporter.ceph3  ceph3  *:9100  running (7h)  64s ago  3d  15.3M  -  0.18.1  e5a616e4b9cf  9ab2b99dabd6
osd.0  ceph2  running (7h)  64s ago  3d  31.6M  4096M  16.2.5  6933c2a0b7dd  0c9dc80a1d74
osd.1  ceph3  running (7h)  64s ago  3d  47.2M  4096M  16.2.5  6933c2a0b7dd  c698fa4c50c0
osd.2  ceph1  running (7h)  61s ago  3d  39.3M  4096M  16.2.5  6933c2a0b7dd  69501861396e
osd.3  ceph2  running (7h)  64s ago  3d  50.0M  4096M  16.2.5  6933c2a0b7dd  fa9549c63716
osd.4  ceph3  running (7h)  64s ago  3d  37.8M  4096M  16.2.5  6933c2a0b7dd  d1fc644bd8f6
osd.5  ceph1  running (7h)  61s ago  3d  38.5M  4096M  16.2.5  6933c2a0b7dd  ed81fef0dd1c
osd.6  ceph2  running (7h)  64s ago  3d  29.7M  4096M  16.2.5  6933c2a0b7dd  68d0dcad316b
osd.7  ceph3  running (7h)  64s ago  3d  28.4M  4096M  16.2.5  6933c2a0b7dd  d3dc04b1d1ff
osd.8  ceph1  running (7h)  61s ago  3d  38.4M  4096M  16.2.5  6933c2a0b7dd  13f0d2b109ff
prometheus.ceph1  ceph1  *:9095  running (7h)  61s ago  4d  131M  -  2.18.1  de242295e225  e5fb2f5703d5
rbd-mirror.ceph1.rkchvq  ceph1  running (7h)  61s ago  8h  7453k  -  16.2.5  6933c2a0b7dd  e5ca469f9184
rbd-mirror.ceph2.zdhnvt  ceph2  running (7h)  64s ago  8h  6505k  -  16.2.5  6933c2a0b7dd  2933e22fd669
rbd-mirror.ceph3.mssyuu  ceph3  running (7h)  64s ago  8h  10.5M  -  16.2.5  6933c2a0b7dd  862ef07ae291
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (7h)  61s ago  8h  66.9M  -  16.2.5  6933c2a0b7dd  9002673bd55a
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (7h)  64s ago  8h  67.9M  -  16.2.5  6933c2a0b7dd  dbdddd0f10cd
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (7h)  64s ago  8h  69.0M  -  16.2.5  6933c2a0b7dd  b896b40ef1d8

Review the deployed services from the command line.

[root@ceph1 ~]# ceph -s
  cluster:
    id:     36e7a21c-e3f7-11eb-8960-000c299df6ef
    health: HEALTH_WARN
            clock skew detected on mon.ceph2, mon.ceph3

  services:
    mon:           3 daemons, quorum ceph1,ceph2,ceph3 (age 58m)
    mgr:           ceph1.nwbihh (active, since 7h), standbys: ceph2.ednijf
    mds:           1/1 daemons up, 2 standby
    osd:           9 osds: 9 up (since 7h), 9 in (since 3d)
    cephfs-mirror: 3 daemons active (3 hosts)
    rbd-mirror:    3 daemons active (3 hosts)
    rgw:           3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   10 pools, 241 pgs
    objects: 233 objects, 9.4 KiB
    usage:   323 MiB used, 180 GiB / 180 GiB avail
    pgs:     241 active+clean

  io:
    client: 4.1 KiB/s rd, 4 op/s rd, 0 op/s wr

The Ceph Dashboard likewise shows which services are deployed.

In the earlier steps we deployed RGW and NFS with 3 daemons each, but those daemons still serve independently; there is no load balancer in front to provide a single entry point. Cephadm wraps haproxy and keepalived together into an ingress service. The rgw-ingress architecture is sketched below:

Write the rgw-ingress placement spec and parameters, then deploy.

[root@ceph1 ~]# vi rgw-ingress.yaml
service_type: ingress
service_id: rgw.rgw
placement:
  count: 3
spec:
  backend_service: rgw.rgw
  virtual_ip: 192.168.149.200/24
  frontend_port: 8080
  monitor_port: 1967
# ceph orch apply -i rgw-ingress.yaml

Note

backend_service must exactly match the service name shown by ceph orch ls; frontend_port is the port exposed on the VIP; monitor_port serves the haproxy status page.

To see what monitor_port really is, enter a haproxy container and inspect its config file: the frontend stats section carries port 1967 along with a username and password.

# podman exec -it 5e521b875eba bash
# cat /var/lib/haproxy/haproxy.cfg
root@ceph1:/var/lib/haproxy# cat haproxy.cfg
# This file is generated by cephadm.
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/lib/haproxy/haproxy.pid
    maxconn     8000
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout queue           20s
    timeout connect         5s
    timeout http-request    1s
    timeout http-keep-alive 5s
    timeout client          1s
    timeout server          1s
    timeout check           5s
    maxconn                 8000

frontend stats
    mode http
    bind *:1967
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:vpxpoqmhxsetlmoerbck
    http-request use-service prometheus-exporter if { path /metrics }
    monitor-uri /health

frontend frontend
    bind *:8080
    default_backend backend

backend backend
    option forwardfor
    balance static-rr
    option httpchk HEAD / HTTP/1.0
    server rgw.rgw.ceph1.lgcvfw 192.168.149.128:80 check weight 100
    server rgw.rgw.ceph2.eqykjt 192.168.149.145:80 check weight 100
    server rgw.rgw.ceph3.eybvwe 192.168.149.146:80 check weight 100

Putting together the VIP, monitor_port, and uri gives the address http://192.168.149.200:1967/stats. Open it in a browser and enter the username and password, and the haproxy status page appears.
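The data path through the VIP can be checked the same way; a request to the frontend port should be load-balanced to one of the RGW daemons (a sketch using the VIP and port defined above):

# curl http://192.168.149.200:8080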

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE   PLACEMENT
alertmanager               ?:9093,9094                    1/1  67s ago    3d    count:1
crash                                                     3/3  2m ago     3d    *
grafana                    ?:3000                         1/1  67s ago    3d    count:1
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  2m ago     17m   count:3
iscsi.gw                                                  3/3  2m ago     76m   ceph1;ceph2;ceph3
mds.cephfs                                                3/3  2m ago     84m   count:3
mgr                                                       2/2  2m ago     3d    count:2
mon                                                       3/5  2m ago     3d    count:5
nfs.nfs                                                   3/3  2m ago     81m   count:3
node-exporter              ?:9100                         3/3  2m ago     3d    *
osd.all-available-devices                                9/12  2m ago     3d    *
prometheus                 ?:9095                         1/1  67s ago    3d    count:1
rbd-mirror                                                3/3  2m ago     73m   count:3
rgw.rgw                    ?:80                           3/3  2m ago     105m  count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (7h)  7m ago  4d  31.5M  -  0.20.0  0881eb8f169f  b9640288c91f
cephfs-mirror.ceph1.cclxku  ceph1  running (28m)  7m ago  28m  16.5M  -  16.2.5  6933c2a0b7dd  88b6f8918aa8
cephfs-mirror.ceph2.ozwgqg  ceph2  running (29m)  7m ago  29m  13.6M  -  16.2.5  6933c2a0b7dd  dcd8c6dcc4f0
cephfs-mirror.ceph3.iefmko  ceph3  running (30m)  7m ago  30m  14.2M  -  16.2.5  6933c2a0b7dd  581ba61bad28
crash.ceph1  ceph1  running (7h)  7m ago  4d  1929k  -  16.2.5  6933c2a0b7dd  116ff5ce3646
crash.ceph2  ceph2  running (7h)  7m ago  3d  3451k  -  16.2.5  6933c2a0b7dd  354b8903892b
crash.ceph3  ceph3  running (7h)  7m ago  3d  4047k  -  16.2.5  6933c2a0b7dd  a5c223e5362c
grafana.ceph1  ceph1  *:3000  running (7h)  7m ago  4d  44.0M  -  6.7.4  ae5c36c3d3cd  d284162e0da4
haproxy.nfs.nfs.ceph1.bdmrwh  ceph1  *:2050,1968  running (7h)  7m ago  7h  1870k  -  2.3.12-b99e499  b2284eda2221  3d9c640d828e
haproxy.nfs.nfs.ceph2.ryaaaq  ceph2  *:2050,1968  running (7h)  7m ago  7h  1862k  -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a
haproxy.nfs.nfs.ceph3.ysfamh  ceph3  *:2050,1968  running (7h)  7m ago  7h  1904k  -  2.3.12-b99e499  b2284eda2221  0e48492c6233
haproxy.rgw.rgw.ceph1.nmvtig  ceph1  *:8080,1967  running (7h)  7m ago  7h  6089k  -  2.3.12-b99e499  b2284eda2221  5e521b875eba
haproxy.rgw.rgw.ceph2.bbcnso  ceph2  *:8080,1967  running (7h)  7m ago  7h  2877k  -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd
haproxy.rgw.rgw.ceph3.bcvvxl  ceph3  *:8080,1967  running (7h)  7m ago  7h  4265k  -  2.3.12-b99e499  b2284eda2221  535a37749437
iscsi.gw.ceph1.esngze  ceph1  running (7h)  7m ago  8h  25.5M  -  3.5  6933c2a0b7dd  04d1bfd141a9
iscsi.gw.ceph2.ypjkrx  ceph2  running (7h)  7m ago  8h  22.3M  -  3.5  6933c2a0b7dd  7f78b9c0deb0
iscsi.gw.ceph3.hntxjs  ceph3  running (7h)  7m ago  8h  29.7M  -  3.5  6933c2a0b7dd  2fe9bb0b7bfe
keepalived.nfs.nfs.ceph1.jprcvw  ceph1  running (7h)  7m ago  7h  3837k  -  2.0.5  073e0c3cd1b9  8e337e634255
keepalived.nfs.nfs.ceph2.oiynik  ceph2  running (7h)  7m ago  7h  3468k  -  2.0.5  073e0c3cd1b9  a6d10588190e
keepalived.nfs.nfs.ceph3.guulfc  ceph3  running (7h)  7m ago  7h  1732k  -  2.0.5  073e0c3cd1b9  23680bf04f55
keepalived.rgw.rgw.ceph1.keriqf  ceph1  running (7h)  7m ago  7h  2667k  -  2.0.5  073e0c3cd1b9  a11d221ae1af
keepalived.rgw.rgw.ceph2.gxshhg  ceph2  running (7h)  7m ago  7h  2969k  -  2.0.5  073e0c3cd1b9  59026981ef10
keepalived.rgw.rgw.ceph3.tqaixq  ceph3  running (7h)  7m ago  7h  4600k  -  2.0.5  073e0c3cd1b9  25cfdab23bfe
mds.cephfs.ceph1.ooraiz  ceph1  running (7h)  7m ago  8h  19.3M  -  16.2.5  6933c2a0b7dd  64246a825e21
mds.cephfs.ceph2.qfmprj  ceph2  running (7h)  7m ago  8h  6790k  -  16.2.5  6933c2a0b7dd  f0faa5b9a7d5
mds.cephfs.ceph3.ifskba  ceph3  running (7h)  7m ago  8h  10.0M  -  16.2.5  6933c2a0b7dd  7f294af518f7
mgr.ceph1.nwbihh  ceph1  *:9283  running (7h)  7m ago  4d  119M  -  16.2.5  6933c2a0b7dd  f7b2470c5797
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (7h)  7m ago  3d  30.2M  -  16.2.5  6933c2a0b7dd  e24b40d99e6d
mon.ceph1  ceph1  running (7h)  7m ago  4d  526M  2048M  16.2.5  6933c2a0b7dd  9936a8b6587b
mon.ceph2  ceph2  running (7h)  7m ago  3d  898M  2048M  16.2.5  6933c2a0b7dd  abe7eed2d100
mon.ceph3  ceph3  running (7h)  7m ago  3d  894M  2048M  16.2.5  6933c2a0b7dd  52d747e3d011
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (7h)  7m ago  8h  23.0M  -  3.5  6933c2a0b7dd  fff5327b8415
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (7h)  7m ago  8h  31.3M  -  3.5  6933c2a0b7dd  ea09e2429950
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (7h)  7m ago  8h  38.3M  -  3.5  6933c2a0b7dd  dcc6fd5aa85f
node-exporter.ceph1  ceph1  *:9100  running (7h)  7m ago  4d  17.8M  -  0.18.1  e5a616e4b9cf  69e5d1f3e310
node-exporter.ceph2  ceph2  *:9100  running (7h)  7m ago  3d  18.3M  -  0.18.1  e5a616e4b9cf  cd41893212ca
node-exporter.ceph3  ceph3  *:9100  running (7h)  7m ago  3d  16.9M  -  0.18.1  e5a616e4b9cf  9ab2b99dabd6
osd.0  ceph2  running (7h)  7m ago  3d  34.2M  4096M  16.2.5  6933c2a0b7dd  0c9dc80a1d74
osd.1  ceph3  running (7h)  7m ago  3d  56.3M  4096M  16.2.5  6933c2a0b7dd  c698fa4c50c0
osd.2  ceph1  running (7h)  7m ago  3d  44.1M  4096M  16.2.5  6933c2a0b7dd  69501861396e
osd.3  ceph2  running (7h)  7m ago  3d  49.1M  4096M  16.2.5  6933c2a0b7dd  fa9549c63716
osd.4  ceph3  running (7h)  7m ago  3d  36.3M  4096M  16.2.5  6933c2a0b7dd  d1fc644bd8f6
osd.5  ceph1  running (7h)  7m ago  3d  42.7M  4096M  16.2.5  6933c2a0b7dd  ed81fef0dd1c
osd.6  ceph2  running (7h)  7m ago  3d  31.6M  4096M  16.2.5  6933c2a0b7dd  68d0dcad316b
osd.7  ceph3  running (7h)  7m ago  3d  29.7M  4096M  16.2.5  6933c2a0b7dd  d3dc04b1d1ff
osd.8  ceph1  running (7h)  7m ago  3d  40.7M  4096M  16.2.5  6933c2a0b7dd  13f0d2b109ff
prometheus.ceph1  ceph1  *:9095  running (7h)  7m ago  4d  107M  -  2.18.1  de242295e225  e5fb2f5703d5
rbd-mirror.ceph1.rkchvq  ceph1  running (7h)  7m ago  8h  9223k  -  16.2.5  6933c2a0b7dd  e5ca469f9184
rbd-mirror.ceph2.zdhnvt  ceph2  running (7h)  7m ago  8h  8652k  -  16.2.5  6933c2a0b7dd  2933e22fd669
rbd-mirror.ceph3.mssyuu  ceph3  running (7h)  7m ago  8h  12.6M  -  16.2.5  6933c2a0b7dd  862ef07ae291
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (7h)  7m ago  9h  73.1M  -  16.2.5  6933c2a0b7dd  9002673bd55a
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (7h)  7m ago  9h  67.5M  -  16.2.5  6933c2a0b7dd  dbdddd0f10cd
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (7h)  7m ago  9h  67.5M  -  16.2.5  6933c2a0b7dd  b896b40ef1d8

Each service gets its own ingress: we deployed the rgw ingress above, nfs has a corresponding one, and in the future there will also be an ingress for the Ceph Dashboard; for now only the rgw and nfs ingresses are available. The nfs ingress is sketched below:

Edit the nfs ingress spec file and deploy it.

[root@ceph1 ~]# cat nfs-ingress.yaml
service_type: ingress
service_id: nfs.nfs
placement:
  count: 3
spec:
  backend_service: nfs.nfs
  virtual_ip: 192.168.149.201/24
  frontend_port: 2050
  monitor_port: 1968
[root@ceph1 ~]# ceph orch apply -i nfs-ingress.yaml

Check the service state.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094                    1/1  91s ago    4d   count:1
cephfs-mirror                                             3/3  94s ago    32m  count:3
crash                                                     3/3  94s ago    4d   *
grafana                    ?:3000                         1/1  91s ago    4d   count:1
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  94s ago    7h   count:3
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  94s ago    7h   count:3
iscsi.gw                                                  3/3  94s ago    8h   ceph1;ceph2;ceph3
mds.cephfs                                                3/3  94s ago    8h   count:3
mgr                                                       2/2  94s ago    4d   count:2
mon                                                       3/5  94s ago    4d   count:5
nfs.nfs                                                   3/3  94s ago    8h   count:3
node-exporter              ?:9100                         3/3  94s ago    4d   *
osd.all-available-devices                                9/12  94s ago    3d   *
prometheus                 ?:9095                         1/1  91s ago    4d   count:1
rbd-mirror                                                3/3  94s ago    8h   count:3
rgw.rgw                    ?:80                           3/3  94s ago    9h   count:3

Check the daemon state.

[root@ceph1 ~]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  *:9093,9094  running (4m)  13s ago  3d  22.7M  -  0.20.0  0881eb8f169f  b9640288c91f
crash.ceph1  ceph1  running (4m)  13s ago  3d  2721k  -  16.2.5  6933c2a0b7dd  116ff5ce3646
crash.ceph2  ceph2  running (4m)  29s ago  3d  17.1M  -  16.2.5  6933c2a0b7dd  354b8903892b
crash.ceph3  ceph3  running (4m)  29s ago  3d  7792k  -  16.2.5  6933c2a0b7dd  a5c223e5362c
grafana.ceph1  ceph1  *:3000  running (4m)  13s ago  3d  52.5M  -  6.7.4  ae5c36c3d3cd  d284162e0da4
haproxy.nfs.nfs.ceph1.bdmrwh  ceph1  *:2050,1968  running (64s)  13s ago  63s  5603k  -  2.3.12-b99e499  b2284eda2221  3d9c640d828e
haproxy.nfs.nfs.ceph2.ryaaaq  ceph2  *:2050,1968  running (61s)  29s ago  61s  6341k  -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a
haproxy.nfs.nfs.ceph3.ysfamh  ceph3  *:2050,1968  running (59s)  29s ago  59s  6353k  -  2.3.12-b99e499  b2284eda2221  0e48492c6233
haproxy.rgw.rgw.ceph1.nmvtig  ceph1  *:8080,1967  running (2m)  13s ago  2m  13.9M  -  2.3.12-b99e499  b2284eda2221  5e521b875eba
haproxy.rgw.rgw.ceph2.bbcnso  ceph2  *:8080,1967  running (2m)  29s ago  2m  4916k  -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd
haproxy.rgw.rgw.ceph3.bcvvxl  ceph3  *:8080,1967  running (2m)  29s ago  2m  4362k  -  2.3.12-b99e499  b2284eda2221  535a37749437
iscsi.gw.ceph1.esngze  ceph1  running (4m)  13s ago  61m  49.2M  -  3.5  6933c2a0b7dd  04d1bfd141a9
iscsi.gw.ceph2.ypjkrx  ceph2  running (4m)  29s ago  61m  62.3M  -  3.5  6933c2a0b7dd  7f78b9c0deb0
iscsi.gw.ceph3.hntxjs  ceph3  running (4m)  29s ago  61m  64.5M  -  3.5  6933c2a0b7dd  2fe9bb0b7bfe
keepalived.nfs.nfs.ceph1.jprcvw  ceph1  running (53s)  13s ago  52s  2679k  -  2.0.5  073e0c3cd1b9  8e337e634255
keepalived.nfs.nfs.ceph2.oiynik  ceph2  running (57s)  29s ago  56s  1581k  -  2.0.5  073e0c3cd1b9  a6d10588190e
keepalived.nfs.nfs.ceph3.guulfc  ceph3  running (50s)  29s ago  50s  1623k  -  2.0.5  073e0c3cd1b9  23680bf04f55
keepalived.rgw.rgw.ceph1.keriqf  ceph1  running (84s)  13s ago  83s  1979k  -  2.0.5  073e0c3cd1b9  a11d221ae1af
keepalived.rgw.rgw.ceph2.gxshhg  ceph2  running (105s)  29s ago  105s  6455k  -  2.0.5  073e0c3cd1b9  59026981ef10
keepalived.rgw.rgw.ceph3.tqaixq  ceph3  running (67s)  29s ago  66s  1581k  -  2.0.5  073e0c3cd1b9  25cfdab23bfe
mds.cephfs.ceph1.ooraiz  ceph1  running (4m)  13s ago  69m  16.5M  -  16.2.5  6933c2a0b7dd  64246a825e21
mds.cephfs.ceph2.qfmprj  ceph2  running (4m)  29s ago  69m  19.6M  -  16.2.5  6933c2a0b7dd  f0faa5b9a7d5
mds.cephfs.ceph3.ifskba  ceph3  running (4m)  29s ago  69m  22.0M  -  16.2.5  6933c2a0b7dd  7f294af518f7
mgr.ceph1.nwbihh  ceph1  *:9283  running (4m)  13s ago  3d  439M  -  16.2.5  6933c2a0b7dd  f7b2470c5797
mgr.ceph2.ednijf  ceph2  *:8443,9283  running (4m)  29s ago  3d  412M  -  16.2.5  6933c2a0b7dd  e24b40d99e6d
mon.ceph1  ceph1  running (4m)  13s ago  3d  86.0M  2048M  16.2.5  6933c2a0b7dd  9936a8b6587b
mon.ceph2  ceph2  running (4m)  29s ago  3d  113M  2048M  16.2.5  6933c2a0b7dd  abe7eed2d100
mon.ceph3  ceph3  running (4m)  29s ago  3d  127M  2048M  16.2.5  6933c2a0b7dd  52d747e3d011
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049  running (4m)  13s ago  66m  64.5M  -  3.5  6933c2a0b7dd  fff5327b8415
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049  running (4m)  29s ago  66m  72.0M  -  3.5  6933c2a0b7dd  ea09e2429950
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049  running (4m)  29s ago  66m  58.7M  -  3.5  6933c2a0b7dd  dcc6fd5aa85f
node-exporter.ceph1  ceph1  *:9100  running (4m)  13s ago  3d  15.0M  -  0.18.1  e5a616e4b9cf  69e5d1f3e310
node-exporter.ceph2  ceph2  *:9100  running (4m)  29s ago  3d  24.0M  -  0.18.1  e5a616e4b9cf  cd41893212ca
node-exporter.ceph3  ceph3  *:9100  running (4m)  29s ago  3d  24.7M  -  0.18.1  e5a616e4b9cf  9ab2b99dabd6
osd.0  ceph2  running (4m)  29s ago  3d  66.2M  4096M  16.2.5  6933c2a0b7dd  0c9dc80a1d74
osd.1  ceph3  running (4m)  29s ago  3d  70.5M  4096M  16.2.5  6933c2a0b7dd  c698fa4c50c0
osd.2  ceph1  running (4m)  13s ago  3d  42.4M  4096M  16.2.5  6933c2a0b7dd  69501861396e
osd.3  ceph2  running (4m)  29s ago  3d  54.2M  4096M  16.2.5  6933c2a0b7dd  fa9549c63716
osd.4  ceph3  running (4m)  29s ago  3d  84.9M  4096M  16.2.5  6933c2a0b7dd  d1fc644bd8f6
osd.5  ceph1  running (4m)  13s ago  3d  36.8M  4096M  16.2.5  6933c2a0b7dd  ed81fef0dd1c
osd.6  ceph2  running (4m)  29s ago  3d  58.6M  4096M  16.2.5  6933c2a0b7dd  68d0dcad316b
osd.7  ceph3  running (4m)  29s ago  3d  65.3M  4096M  16.2.5  6933c2a0b7dd  d3dc04b1d1ff
osd.8  ceph1  running (4m)  13s ago  3d  42.9M  4096M  16.2.5  6933c2a0b7dd  13f0d2b109ff
prometheus.ceph1  ceph1  *:9095  running (35s)  13s ago  3d  110M  -  2.18.1  de242295e225  e5fb2f5703d5
rbd-mirror.ceph1.rkchvq  ceph1  running (4m)  13s ago  58m  15.3M  -  16.2.5  6933c2a0b7dd  e5ca469f9184
rbd-mirror.ceph2.zdhnvt  ceph2  running (4m)  29s ago  58m  24.7M  -  16.2.5  6933c2a0b7dd  2933e22fd669
rbd-mirror.ceph3.mssyuu  ceph3  running (4m)  29s ago  58m  33.4M  -  16.2.5  6933c2a0b7dd  862ef07ae291
rgw.rgw.ceph1.lgcvfw  ceph1  *:80  running (4m)  13s ago  91m  58.0M  -  16.2.5  6933c2a0b7dd  9002673bd55a
rgw.rgw.ceph2.eqykjt  ceph2  *:80  running (4m)  29s ago  91m  75.6M  -  16.2.5  6933c2a0b7dd  dbdddd0f10cd
rgw.rgw.ceph3.eybvwe  ceph3  *:80  running (4m)  29s ago  91m  75.7M  -  16.2.5  6933c2a0b7dd  b896b40ef1d8

6. Troubleshooting

When a deployment misbehaves, the following command shows the detailed cephadm log.

# ceph log last cephadm
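You can also follow the cephadm log channel live, which is often quicker when iterating on a spec (per the upstream cephadm troubleshooting docs):

# ceph -W cephadm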

You can also query logs at the service or daemon level directly.

# ceph orch ls --service_name=alertmanager --format yaml
# ceph orch ps --service-name <service-name> --daemon-id <daemon-id> --format yaml

When a daemon is in the error or stopped state, restart it with:

# ceph orch daemon restart rgw.rgw.ceph3.sfepof

When several daemons are unhealthy at once, you can restart the whole service instead; all of its daemons are restarted automatically.

[root@ceph1 ~]# ceph orch start mds.cephfs
Scheduled to start mds.cephfs.ceph1.znbbqq on host 'ceph1'
Scheduled to start mds.cephfs.ceph2.iazuaf on host 'ceph2'
Scheduled to start mds.cephfs.ceph3.hjuvue on host 'ceph3'

If deleting a service hangs in the deleting state, rebooting the host can clear it.

OK, that wraps up this article; I hope it helps.

