Ceph Maintenance Commands (continuously updated)
Author: eddy

I. Cluster

1. Start a single ceph daemon
Start the mon daemon:
service ceph start mon.node1
Start the mds daemon:
service ceph start mds.node1
Start the osd daemon:
service ceph start osd.0
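The service commands above assume the sysvinit scripts this document was written against. As a hedged aside for hosts managed by systemd (an assumption about your packaging; adjust the instance names to your own hostnames and osd ids), the per-daemon units are typically addressed like this:

systemctl start ceph-mon@node1    # monitor daemon on node1
systemctl start ceph-mds@node1    # metadata server daemon on node1
systemctl start ceph-osd@0        # osd.0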

2. Check the cluster's health
[root@client ~]# ceph health
HEALTH_OK

3. Watch the cluster's running status in real time
[root@client ~]# ceph -w
    cluster be1756f2-54f7-4d8f-8790-820c82721f17
     health HEALTH_OK
     monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3
     mdsmap e95: 1/1/1 up {0=node2=up:active}, 1 up:standby
     osdmap e88: 3 osds: 3 up, 3 in
      pgmap v1164: 448 pgs, 4 pools, 10003 MB data, 2520 objects
            23617 MB used, 37792 MB / 61410 MB avail
                 448 active+clean
2014-06-30 00:48:28.756948 mon.0 [INF] pgmap v1163: 448 pgs: 448 active+clean; 10003 MB data, 23617 MB used, 37792 MB / 61410 MB avail

4. Check the cluster's status summary
[root@client ~]# ceph -s
    cluster be1756f2-54f7-4d8f-8790-820c82721f17
     health HEALTH_OK
     monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3
     mdsmap e95: 1/1/1 up {0=node2=up:active}, 1 up:standby
     osdmap e88: 3 osds: 3 up, 3 in
      pgmap v1164: 448 pgs, 4 pools, 10003 MB data, 2520 objects
            23617 MB used, 37792 MB / 61410 MB avail
                 448 active+clean

5. Check the cluster's storage usage
[root@client ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    61410M     37792M     23617M       38.46
POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      10000M     16.28     2500
    metadata     1      3354k      0         20
    rbd          2      0          0         0
    jiayuan      3      0          0         0
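A per-pool breakdown that also lists object counts and cumulative read/write totals is available from the rados CLI (a hedged pointer; rados is installed by the same ceph packages used above):

rados df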

6. Remove all of a node's ceph packages and data
[root@node1 ~]# ceph-deploy purge node1
[root@node1 ~]# ceph-deploy purgedata node1

7. Create an admin user for ceph, generate a key for it, and save the key under /etc/ceph:
ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/ceph.client.admin.keyring
or
ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' -o /etc/ceph/ceph.client.admin.keyring

8. Create a user and a key for osd.0
ceph auth get-or-create osd.0 mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-0/keyring

9. Create a user and a key for mds.node1
ceph auth get-or-create mds.node1 mon 'allow rwx' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mds/ceph-node1/keyring

10. List the cluster's authentication users and their keys
ceph auth list

11. Delete an authentication user from the cluster
ceph auth del osd.0
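To inspect a single entity instead of listing every key or deleting one, two related ceph auth subcommands are handy (a hedged sketch; the entity names reuse the ones created above):

ceph auth get client.admin    # print one keyring entry together with its caps
ceph auth print-key osd.0     # print only the base64 key, convenient for scripting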

12. View the cluster's detailed configuration
[root@node1 ~]# ceph daemon mon.node1 config show | more

13. View the cluster's health status in detail
[root@admin ~]# ceph health detail
HEALTH_WARN 12 pgs down; 12 pgs peering; 12 pgs stuck inactive; 12 pgs stuck unclean
pg 3.3b is stuck inactive since forever, current state down+peering, last acting [1,2]
pg 3.36 is stuck inactive since forever, current state down+peering, last acting [1,2]
pg 3.79 is stuck inactive since forever, current state down+peering, last acting [1,0]
pg 3.5 is stuck inactive since forever, current state down+peering, last acting [1,2]
pg 3.30 is stuck inactive since forever, current state down+peering, last acting [1,2]
pg 3.1a is stuck inactive since forever, current state down+peering, last acting [1,0]
pg 3.2d is stuck inactive since forever, current state down+peering, last acting [1,0]
pg 3.16 is stuck inactive since forever, current state down+peering, last acting [1,2]
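When health detail reports stuck placement groups like the above, two follow-up queries are commonly used (a hedged sketch; pg 3.3b is taken from the output above):

ceph pg dump_stuck inactive    # list pgs stuck inactive (also accepts unclean or stale)
ceph pg 3.3b query             # dump the full peering/recovery state of one pg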

14. View the directory where the ceph logs are written
[root@node1 ~]# ceph-conf --name mon.node1 --show-config-value log_file
/var/log/ceph/ceph-mon.node1.log

II. mon

1. View the mon status
[root@client ~]# ceph mon stat
e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3

2. View the mon election (quorum) status
[root@client ~]# ceph quorum_status
{"election_epoch":294,"quorum":[0,1,2],"quorum_names":["node1","node2","node3"],"quorum_leader_name":"node1","monmap":{"epoch":2,"fsid":"be1756f2-54f7-4d8f-8790-820c82721f17","modified":"2014-06-26 18:43:51.671106","created":"0.000000","mons":[{"rank":0,"name":"node1","addr":"10.240.240.211:6789/0"},{"rank":1,"name":"node2","addr":"10.240.240.212:6789/0"},{"rank":2,"name":"node3","addr":"10.240.240.213:6789/0"}]}}

3. View the mon map
[root@client ~]# ceph mon dump
dumped monmap epoch 2
epoch 2
fsid be1756f2-54f7-4d8f-8790-820c82721f17
last_changed 2014-06-26 18:43:51.671106
created 0.000000
0: 10.240.240.211:6789/0 mon.node1
1: 10.240.240.212:6789/0 mon.node2
2: 10.240.240.213:6789/0 mon.node3

4. Remove a mon node
[root@node1 ~]# ceph mon remove node1
removed mon.node1 at 10.39.101.1:6789/0, there are now 3 monitors
2014-07-07 18:11:04.974188 7f4d16bfd700  0 monclient: hunting for new mon

5. Get the mon map of a running cluster and save it to the file 1.txt
[root@node3 ~]# ceph mon getmap -o 1.txt
got monmap epoch 6

6. View the map obtained above
[root@node3 ~]# monmaptool --print 1.txt
monmaptool: monmap file 1.txt
epoch 6
fsid 92552333-a0a8-41b8-8b45-c93a8730525e
last_changed 2014-07-07 18:22:51.927205
created 0.000000
0: 10.39.101.1:6789/0 mon.node1
1: 10.39.101.2:6789/0 mon.node2
2: 10.39.101.3:6789/0 mon.node3
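If the monitors have lost quorum, ceph mon getmap may hang; as a hedged alternative, the map can be extracted from a stopped monitor's own store with ceph-mon's --extract-monmap option (the node name follows the examples above):

service ceph stop mon.node1
ceph-mon -i node1 --extract-monmap 1.txt
service ceph start mon.node1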

7. Inject the mon map obtained above into a newly added node
ceph-mon -i node4 --inject-monmap 1.txt
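--inject-monmap rewrites the monitor's local copy of the map, so the target daemon must be stopped while the map is injected. A hedged sketch of the full sequence on node4 (using the same sysvinit scripts as elsewhere in this document):

service ceph stop mon.node4
ceph-mon -i node4 --inject-monmap 1.txt
service ceph start mon.node4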

8. View the mon admin socket
[root@node1 ~]# ceph-conf --name mon.node1 --show-config-value admin_socket
/var/run/ceph/ceph-mon.node1.asok

9. View the detailed status of a mon
[root@node1 ~]# ceph daemon mon.node1 mon_status
{ "name": "node1",
  "rank": 0,
  "state": "leader",
  "election_epoch": 96,
  "quorum": [0, 1, 2],
  "outside_quorum": [],
  "extra_probe_peers": ["10.39.101.4:6789/0"],
  "sync_provider": [],
  "monmap": { "epoch": 6,
      "fsid": "92552333-a0a8-41b8-8b45-c93a8730525e",
      "modified": "2014-07-07 18:22:51.927205",
      "created": "0.000000",
      "mons": [
            { "rank": 0, "name": "node1", "addr": "10.39.101.1:6789/0"},
            { "rank": 1, "name": "node2", "addr": "10.39.101.2:6789/0"},
            { "rank": 2, "name": "node3", "addr": "10.39.101.3:6789/0"}]}}
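The same mon_status output can also be fetched straight through the admin socket from item 8, which keeps working even when the cluster as a whole is unreachable (a hedged sketch using the socket path shown above):

ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status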

10. Remove a mon node
[root@os-node1 ~]# ceph mon remove os-node1
removed mon.os-node1 at 10.40.10.64:6789/0, there are now 3 monitors

III. mds

1. View the mds status
[root@client ~]# ceph mds stat
e95: 1/1/1 up {0=node2=up:active}, 1 up:standby

2. View the mds map
[root@client ~]# ceph mds dump
dumped mdsmap epoch 95
epoch 95
flags 0
created 2014-06-26 18:41:57.686801
modified 2014-06-30 00:24:11.749967
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 84
last_failure_osd_epoch 81
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap}
max_mds 1
in 0
up {0=5015}
failed
stopped
data_pools 0
metadata_pool 1
inline_data disabled
5015: 10.240.240.212:6808/3032 node2 mds.0.12 up:active seq 30
5012: 10.240.240.211:6807/3459 node1 mds.-1.0 up:standby seq 38

3. Remove an mds node
[root@node1 ~]# ceph mds rm 0 mds.node1
mds gid 0 dne

IV. osd

1. View the osd running status
[root@client ~]# ceph osd stat
 osdmap e88: 3 osds: 3 up, 3 in

2. View the osd map
[root@client ~]# ceph osd dump
epoch 88
fsid be1756f2-54f7-4d8f-8790-820c82721f17
created 2014-06-26 18:41:57.687442
modified 2014-06-30 00:46:27.179793
flags
pool 0 data replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 metadata replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0
pool 2 rbd replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0
pool 3 jiayuan replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 73 owner 0 flags hashpspool stripe_width 0
max_osd 3
osd.0 up   in  weight 1 up_from 65 up_thru 75 down_at 64 last_clean_interval [53,55) 10.240.240.211:6800/3089 10.240.240.211:6801/3089 10.240.240.211:6802/3089 10.240.240.211:6803/3089 exists,up 8a24ad16-a483-4bac-a56a-6ed44ab74ff0
osd.1 up   in  weight 1 up_from 59 up_thru 74 down_at 58 last_clean_interval [31,55) 10.240.240.212:6800/2696 10.240.240.212:6801/2696 10.240.240.212:6802/2696 10.240.240.212:6803/2696 exists,up 8619c083-0273-4203-ba57-4b1dabb89339
osd.2 up   in  weight 1 up_from 62 up_thru 74 down_at 61 last_clean_interval [39,55) 10.240.240.213:6800/2662 10.240.240.213:6801/2662 10.240.240.213:6802/2662 10.240.240.213:6803/2662 exists,up f8107c04-35d7-4fb8-8c82-09eb885f0e58

3. View the osd tree
[root@client ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      1               host node1
0       1                       osd.0   up      1
-3      1               host node2
1       1                       osd.1   up      1
-4      1               host node3
2       1                       osd.2   up      1

4. Mark an osd (disk) down
[root@node1 ~]# ceph osd down 0    # mark osd.0 down

5. Remove an osd (disk) from the cluster
[root@node4 ~]# ceph osd rm 0
removed osd.0

6. Remove an osd (disk) from the cluster's crush map
[root@node1 ~]# ceph osd crush rm osd.0

7. Remove an osd host node from the crush map
[root@node1 ~]# ceph osd crush rm node1
removed item id -2 name 'node1' from crush map

View the maximum number of osds:
[root@node1 ~]# ceph osd getmaxosd
max_osd = 4 in epoch 514    # the default maximum is 4 osd nodes

8. Set the maximum number of osds (this value must be raised when adding more osd nodes)
[root@node1 ~]# ceph osd setmaxosd 10

9. Set an osd's crush weight
ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]
For example:
[root@admin ~]# ceph osd crush set 3 3.0 host=node
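The removal commands in items 4 to 7, together with ceph auth del from section I, are normally run as one sequence when an osd is retired for good. A hedged sketch (ceph osd out is not shown elsewhere in this document; it drains data off the osd before it is deleted):

ceph osd out 0             # mark osd.0 out so data rebalances away from it
service ceph stop osd.0    # stop the daemon on its host
ceph osd crush rm osd.0    # remove it from the crush map
ceph auth del osd.0        # delete its authentication key
ceph osd rm 0              # finally remove it from the osd map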
