Ceph v10.2.0 Jewel Released: Distributed File System

oschina
Published on 2016-04-22

Ceph v10.2.0 Jewel has been released. Ceph is a next-generation free-software distributed file system created by Sage Weil (a co-founder of DreamHost) for his doctoral dissertation at the University of California, Santa Cruz. After graduating in 2007, Sage began working on Ceph full time to make it suitable for production use. Ceph's main design goal is a POSIX-based distributed file system with no single point of failure, in which data is replicated fault-tolerantly and seamlessly. In March 2010, Linus Torvalds merged the Ceph client into Linux kernel 2.6.34. An article on IBM developerWorks examines Ceph's architecture, its fault-tolerance implementation, and the features that simplify managing massive amounts of data.

Changelog:

  • CephFS:

    • This is the first release in which CephFS is declared stable and production ready! Several features are disabled by default, including snapshots and multiple active MDS servers. (A minimal client sketch follows this section.)

    • The repair and disaster recovery tools are now feature-complete.

    • A new cephfs-volume-manager module is included that provides a high-level interface for creating “shares” for OpenStack Manila and similar projects.

    • There is now experimental support for multiple CephFS file systems within a single cluster.
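
A minimal client sketch against the now-stable file system, using the cephfs Python bindings. The config path, directory, and file names here are illustrative assumptions, not anything mandated by the release:

```python
import os
import cephfs

# Minimal CephFS round trip via the python-cephfs bindings.
# Assumes a reachable cluster and the default config/keyring paths.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()  # mount the default file system
try:
    fs.mkdir('/demo', 0o755)
    fd = fs.open('/demo/hello.txt', os.O_CREAT | os.O_WRONLY, 0o644)
    fs.write(fd, b'hello cephfs', 0)
    fs.close(fd)

    fd = fs.open('/demo/hello.txt', os.O_RDONLY, 0o644)
    print(fs.read(fd, 0, 128))  # -> b'hello cephfs'
    fs.close(fd)
finally:
    fs.unmount()
    fs.shutdown()
```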

  • RGW:

    • The multisite feature has been almost completely rearchitected and rewritten to support any number of clusters/sites, bidirectional fail-over, and active/active configurations.

    • You can now access radosgw buckets via NFS (experimental).

    • The AWS4 authentication protocol is now supported (see the client sketch after this section).

    • There is now support for S3 request payer buckets.

    • The new multitenancy infrastructure improves compatibility with Swift, which provides a separate container namespace for each user/tenant.

    • The OpenStack Keystone v3 API is now supported. There are a range of other small Swift API features and compatibility improvements as well, including bulk delete and SLO (static large objects).
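
For the new AWS4 (Signature Version 4) support, a client-side sketch using boto3 might look like the following. The endpoint, region, credentials, and bucket name are placeholders, not values from the release notes:

```python
import boto3
from botocore.client import Config

# Talk to radosgw's S3 API with AWS4 (SigV4) request signing.
# Endpoint, region, and credentials below are placeholder assumptions.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',
    region_name='us-east-1',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    config=Config(signature_version='s3v4'),  # force AWS4 signing
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello rgw')
obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
print(obj['Body'].read())  # -> b'hello rgw'
```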

  • RBD:

    • There is new support for mirroring (asynchronous replication) of RBD images across clusters. This is implemented as a per-RBD image journal that can be streamed across a WAN to another site, and a new rbd-mirror daemon that performs the cross-cluster replication.

    • The exclusive-lock, object-map, fast-diff, and journaling features can be enabled or disabled dynamically. The deep-flatten feature can be disabled dynamically but not re-enabled. (See the Python sketch after this section.)

    • The RBD CLI has been rewritten to provide command-specific help and full bash completion support.

    • RBD snapshots can now be renamed.
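
The dynamic feature toggles and the snapshot rename are also exposed through the rbd Python bindings. A rough sketch, where the pool and image names are assumptions:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # pool name is an assumption
try:
    # Create a 1 GiB format-2 image with only the layering feature on.
    rbd.RBD().create(ioctx, 'demo-img', 1 << 30, old_format=False,
                     features=rbd.RBD_FEATURE_LAYERING)
    image = rbd.Image(ioctx, 'demo-img')
    try:
        # Toggle features at runtime; exclusive-lock must be enabled
        # before object-map, and object-map before fast-diff.
        image.update_features(rbd.RBD_FEATURE_EXCLUSIVE_LOCK, True)
        image.update_features(rbd.RBD_FEATURE_OBJECT_MAP, True)
        image.update_features(rbd.RBD_FEATURE_FAST_DIFF, True)

        image.create_snap('before')
        image.rename_snap('before', 'baseline')  # snapshot rename
    finally:
        image.close()
finally:
    ioctx.close()
    cluster.shutdown()
```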

  • RADOS:

    • BlueStore, a new OSD backend, is included as an experimental feature. The plan is for it to become the default backend in the K or L release.

    • The OSD now persists scrub results and provides a librados API to query the results in detail. (A small query sketch follows this section.)

    • We have revised our documentation to recommend against using ext4 as the underlying filesystem for Ceph OSD daemons due to problems supporting our long object name handling.
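
The detailed scrub-result queries ship in the librados C/C++ API; as a lightweight stand-in, the sketch below uses the generic mon_command channel of the Python rados bindings, which is the same programmatic route for querying the cluster. The config path is an assumption:

```python
import json
import rados

# Query the cluster programmatically through librados.
# (The detailed scrub-result API itself is in the C/C++ interface;
# this only demonstrates the generic command channel from Python.)
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({'prefix': 'health', 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    if ret == 0:
        print(json.loads(out))
    else:
        print('mon_command failed:', ret, errs)
finally:
    cluster.shutdown()
```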

The full release notes can be viewed here.


Latest comments (8)

istuary_denali
Which performs better, GlusterFS or CephFS? Has anyone benchmarked them?
G_Young
Just leaving a mark.
AkataMoKa
Haha, is this the file system that gitOSC uses?
Yashin
CephFS is finally production ready.
alexleft
In a few days v10.2.1 will be out, and the notes will read "This is a fast bugfix release..." *facepalm*
扁豆焖面先生

Quoting purple_grape's comment:

    CephFS:
    This is the first release in which CephFS is declared stable and production ready!

    This is the best news I've seen all morning.

nice
惊鸟
I had honestly started to think this would take a lifetime.
purple_grape
CephFS:
This is the first release in which CephFS is declared stable and production ready!

This is the best news I've seen all morning.