Rook-Deployed Scalable NFS Clusters Exporting CephFS (slide deck, .pptx)
What is Ceph(FS)?
- CephFS is a POSIX distributed file system.
- Clients and the MDS cooperatively maintain a distributed cache of metadata, including inodes and directories.
- The MDS hands out capabilities (aka caps) to clients, to allow them delegated access to parts of inode metadata.
- Clients directly perform file I/O on RADOS (see the libcephfs sketch below).
- [Diagram: clients issue open/mkdir/listdir calls and exchange metadata with the active MDS daemons (standby MDS available); each active MDS journals metadata mutations to the metadata pool in RADOS; clients read and write file data directly in the data pool.]
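That metadata/data split is visible from a client's point of view through libcephfs, the userspace library that Ganesha's FSAL_CEPH (later in this deck) also builds on. Below is a minimal sketch using the Python cephfs bindings; the config path and the /demo paths are illustrative assumptions, not part of the original slides.

```python
# Minimal libcephfs client sketch (Python "cephfs" bindings, python3-cephfs).
# Assumes /etc/ceph/ceph.conf plus a valid client keyring, and that /demo
# does not already exist; all paths here are illustrative.
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()                                   # opens a metadata session with an MDS

fs.mkdir(b'/demo', 0o755)                    # metadata operation, handled by the MDS
fd = fs.open(b'/demo/hello.txt', 'w', 0o644)
fs.write(fd, b'hello from libcephfs\n', 0)   # data I/O goes to objects in the data pool
fs.fsync(fd, False)
fs.close(fd)

fd = fs.open(b'/demo/hello.txt', 'r')
print(fs.read(fd, 0, 64))                    # read(fd, offset, length)
fs.close(fd)

fs.unmount()
fs.shutdown()
```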
Ceph, an integral component of hybrid clouds
- Manila: a file share service for VMs. Uses CephFS to provision shared volumes.
- Cinder: a block device provisioner for VMs. Uses Ceph's RADOS Block Device (RBD).

CephFS usage in community Openstack
- Most Openstack users are also running a Ceph cluster already.
- Open source storage solution.
- CephFS metadata scalability is ideally suited to cloud environments.
- https://www.openstack.org/user-survey/survey-2017
So why Kubernetes?
- Lightweight Containers! Trivial to spin up services in response to changing application needs.
- Extensible service infrastructure!
- Parallelism: containers are lightweight enough for lazy and optimal parallelism!
- Fast/Cheap Failover: service failover only requires a new pod.
- Fast IP Failover/Management.

Why Would You Need a Ceph/NFS Gateway?
- Clients that can't speak Ceph properly: old, questionable, or unknown Ceph drivers (old kernel); 3rd-party OSs.
- Security: partition the Ceph cluster from less trusted clients; GSSAPI (Kerberos).
- Openstack Manila: filesystem shared between multiple nodes... but also tenant-aware... and self-managed by tenant admins.
- [Diagram: kernel and ceph-fuse gateway clients send metadata RPCs to the MDS and file I/O to the OSDs of the Ceph RADOS cluster.]

Active/Passive Deployments
- One active server at a time that “floats” between physical hosts.
- Traditional “failover” NFS server, running under pacemaker/corosync.
- Scales poorly + requires idle resources.
- Available since Ceph Luminous (Aug 2017).

Goal: Active/Active Deployment
- [Diagram: a set of NFS clients spread across an active/active cluster of NFS servers.]
Goals and Requirements
- Scale Out: active/active cluster of mostly independent servers. Keep coordination between them to a bare minimum.
- Containerizable: leverage container orchestration technologies to simplify deployment and handle networking. No failover of resources; just rebuild containers from scratch when they fail.
- NFSv4.1+: avoid legacy NFS protocol versions, allowing us to rely on new protocol features for better performance, and the possibility of pNFS later.
- Ceph/RADOS for Communication: avoid the need for any 3rd-party clustering or communication between cluster nodes. Use Ceph and RADOS to coordinate.

Ganesha NFS Server
- Open-source NFS server that runs in userspace (LGPLv3).
- Plugin interface for exports and client recovery databases, well suited for exporting userland filesystems:
  - FSAL_CEPH uses libcephfs to interact with the Ceph cluster.
  - Can use librados to store client recovery records and configuration files in RADOS objects (sketched after this slide).
- Amenable to containerization:
  - Store configuration and recovery info in RADOS.
  - No need for writeable local fs storage.
  - Can run in an unprivileged container.
  - Rebuild the server from a read-only image if it fails.
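The "configuration in RADOS" point is easy to picture with the librados Python bindings: write the Ganesha export block into a RADOS object and have ganesha.conf pull it in via a rados:// %url include. The sketch below is illustrative only; the pool, namespace, object name, and export options are assumptions rather than a recommended configuration.

```python
# Sketch: store a Ganesha export block as a RADOS object (python3-rados).
# Pool, namespace, object name and export options are illustrative assumptions.
import rados

EXPORT_BLOCK = b"""
EXPORT {
    Export_ID = 100;
    Path = "/";
    Pseudo = "/cephfs";
    Protocols = 4;
    Access_Type = RW;
    Squash = None;
    FSAL {
        Name = CEPH;           # FSAL_CEPH -> libcephfs
        User_Id = "nfs.demo";  # cephx user the gateway authenticates as
    }
}
"""

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('nfs-ganesha')     # pool holding config/recovery objects
    ioctx.set_namespace('demo')                   # optional RADOS namespace
    ioctx.write_full('export-100', EXPORT_BLOCK)  # atomically replace the whole object
    ioctx.close()
finally:
    cluster.shutdown()

# ganesha.conf can then include the object with something like:
#   %url "rados://nfs-ganesha/demo/export-100"
# so the container itself needs no writeable configuration storage.
```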
NFS Protocol
- Based on ONC-RPC (aka SunRPC).
- Early versions (NFSv2 and v3) were stateless, with sidecar protocols to handle file locking (NLM and NSM).
- NFSv4 was designed to be stateful, and state is leased to the client: the client must contact the server at least once every lease period to renew (45-60s is typical).
- NFSv4.1 revamped the protocol:
  - uses a sessions layer to provide exactly-once semantics (see the sketch below);
  - added the RECLAIM_COMPLETE operation (allows lifting the grace period early);
  - more clustering and migration support.
- NFSv4.2: mostly new features on top of v4.1.
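The exactly-once semantics of the v4.1 sessions layer come from a per-session slot table with a reply cache: each slot executes its next sequence number once and replays the cached reply on retransmission. The toy Python model below illustrates the idea only; it is not Ganesha's implementation, and all names are invented.

```python
# Toy model of an NFSv4.1 session slot table providing exactly-once semantics.
# Not Ganesha code; class and error handling are simplified for illustration.

class SlotTable:
    def __init__(self, num_slots):
        # per slot: (last seen sequence id, cached reply)
        self.slots = {i: (0, None) for i in range(num_slots)}

    def handle(self, slot_id, seqid, execute):
        last_seq, cached = self.slots[slot_id]
        if seqid == last_seq:
            # retransmission of the last request on this slot: replay, do not re-execute
            return cached
        if seqid == last_seq + 1:
            # new request: execute exactly once and cache the reply for future retries
            reply = execute()
            self.slots[slot_id] = (seqid, reply)
            return reply
        # anything else is misordered or a false retry
        raise ValueError("NFS4ERR_SEQ_MISORDERED")


table = SlotTable(num_slots=8)
print(table.handle(0, 1, lambda: "OPEN ok"))  # executed
print(table.handle(0, 1, lambda: "OPEN ok"))  # replayed from the cache, not re-executed
```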
NFS Client Recovery
- After a restart, NFS servers come up with no ephemeral state: opens, locks, delegations and layouts are lost.
- Allow clients to reclaim state they held earlier for 2 lease periods.
- Detailed state tracking on stable storage is quite expensive.
- Ganesha had support for storing this in RADOS for single-server configurations.
- During the grace period:
  - No new state can be established; clients may only reclaim old state.
  - Allow reclaim only from clients present at the time of the crash.
  - Necessary to handle certain cases involving network partitions.
- Must keep stable-storage records of which clients are allowed to reclaim after a reboot (see the sketch below):
  - Prior to a client doing its first OPEN, set a stable-storage record for the client if there isn't one.
  - Remove it after the last file is closed, or when the client's state expires.
  - Atomically replace the old client db with the new one just prior to ending the grace period.
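One natural way to keep such records in RADOS is as omap key/value pairs on a recovery object, which is roughly what Ganesha's RADOS recovery backends do. The sketch below condenses the three bullet points into librados calls; the object name ("rec-..."), key format, and pool are assumptions for illustration rather than Ganesha's exact layout.

```python
# Sketch: stable-storage client recovery records as omap entries on a RADOS object.
# Object/key names and the pool are illustrative, not Ganesha's exact on-disk format.
import rados

def open_recovery_pool():
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    return cluster, cluster.open_ioctx('nfs-ganesha')

def add_client_record(ioctx, db_object, client_id):
    """Before a client's first OPEN: record that it may reclaim after a reboot."""
    with rados.WriteOpCtx() as op:
        ioctx.set_omap(op, (client_id,), (b'',))
        ioctx.operate_write_op(op, db_object)

def remove_client_record(ioctx, db_object, client_id):
    """After the client's last close (or lease expiry): drop its record."""
    with rados.WriteOpCtx() as op:
        ioctx.remove_omap_keys(op, (client_id,))
        ioctx.operate_write_op(op, db_object)

def finish_grace(ioctx, old_db):
    """Just prior to ending grace: the db rebuilt from successful reclaims takes
    over, so the old database can simply be removed."""
    ioctx.remove_object(old_db)

cluster, ioctx = open_recovery_pool()
add_client_record(ioctx, 'rec-0000000000000002', 'nfs-client-203.0.113.7')
ioctx.close()
cluster.shutdown()
```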
NFS Server Reboot Epochs
- [Diagram: timeline of Epoch 1, Epoch 2 and Epoch 3, each beginning with a grace period followed by normal ops.]
- Consider each reboot the start of a new epoch.
- As clients perform their first open (reclaim or regular), set a record for them in a database associated with the current epoch (see the sketch below).
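The epoch scheme can be summarized in a few lines of Python: during the grace period, reclaims are checked against the previous epoch's database, while every first open is recorded into the current epoch's database, which becomes the reference for the next reboot. The class below is a toy model of that behaviour only; the names and error strings are chosen for illustration.

```python
# Toy model of reboot epochs with per-epoch client recovery databases.
# Illustrative only; not Ganesha code.

class EpochRecovery:
    def __init__(self):
        self.epoch = 0
        self.dbs = {0: set()}      # epoch -> clients recorded during that epoch
        self.in_grace = False

    def server_restart(self):
        """Each reboot starts a new epoch and a new grace period."""
        self.epoch += 1
        self.dbs[self.epoch] = set()
        self.in_grace = True

    def first_open(self, client, reclaim=False):
        if self.in_grace:
            if not reclaim:
                return "NFS4ERR_GRACE"          # no new state during grace
            if client not in self.dbs.get(self.epoch - 1, set()):
                return "NFS4ERR_RECLAIM_BAD"    # client wasn't present before the crash
        # record the client in the database associated with the current epoch
        self.dbs[self.epoch].add(client)
        return "OK"

    def end_grace(self):
        """Once grace ends, the previous epoch's database is no longer needed."""
        self.dbs.pop(self.epoch - 1, None)
        self.in_grace = False
```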