Ceph 11.1.0 Released, a Distributed Storage System

Published December 13, 2016

Ceph 11.1.0 has been released. Ceph is a next-generation free-software distributed file system originally designed by Sage Weil (co-founder of DreamHost) for his doctoral dissertation at the University of California, Santa Cruz. After graduating in 2007, Sage began working on Ceph full-time to make it suitable for production use. Ceph's main goal is a POSIX-compatible distributed file system with no single point of failure, in which data is fault-tolerant and seamlessly replicated.


  • RADOS:

    • The new BlueStore backend now has a stable disk format and is passing our failure and stress testing. Although the backend is still flagged as experimental, we encourage users to try it out for non-production clusters and non-critical data sets.

    • RADOS now has experimental support for overwrites on erasure-coded pools. Because the disk format and implementation are not yet finalized, there is a special pool option that must be enabled to test the new feature.  Enabling this option on a cluster will permanently bar that cluster from being upgraded to future versions.
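As a sketch of how this might be tried on a throwaway test cluster (the pool option name `debug_white_box_testing_ec_overwrites` below is the development-era name and is an assumption; check the documentation for your exact build):

```shell
# Create an erasure-coded pool on a *test* cluster (2+1 profile as an example).
ceph osd erasure-code-profile set testprofile k=2 m=1
ceph osd pool create ec_test 64 64 erasure testprofile

# Enable the experimental overwrite support. The option name is an assumption
# based on the development branch; per the note above, enabling it permanently
# bars this cluster from upgrading to future versions.
ceph osd pool set ec_test debug_white_box_testing_ec_overwrites true
```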

    • We now default to the AsyncMessenger (ms type = async) instead of the legacy SimpleMessenger.  The most noticeable difference is that we now use a fixed sized thread pool for network connections (instead of two threads per socket with SimpleMessenger).
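In ceph.conf terms, the messenger selection looks like the following fragment. Since async is now the default, this is only needed to experiment or to revert:

```ini
[global]
# New default in this release:
ms type = async
# To revert to the legacy messenger:
# ms type = simple
```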

    • Some OSD failures are now detected almost immediately, whereas previously the heartbeat timeout (which defaults to 20 seconds) had to expire.  This prevents IO from blocking for an extended period for failures where the host remains up but the ceph-osd process is no longer running.

    • There is a new ceph-mgr daemon.  It is currently collocated with the monitors by default, and is not yet used for much, but the basic infrastructure is now in place.

    • The size of encoded OSDMaps has been reduced.

    • The OSDs now quiesce scrubbing when recovery or rebalancing is in progress.

  • RGW:

    • RGW now supports a new zone type that can be used for metadata indexing via Elasticsearch.

    • RGW now supports the S3 multipart object copy-part API.

    • It is possible now to reshard an existing bucket. Note that bucket resharding currently requires that all IO (especially writes) to the specific bucket is quiesced.
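A resharding invocation might look like the following sketch (the bucket name and shard count are placeholders, and the flag names are assumptions; check `radosgw-admin --help` on your build):

```shell
# Quiesce all client IO (especially writes) to the bucket first, then reshard.
radosgw-admin bucket reshard --bucket=mybucket --num-shards=64
```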

    • RGW now supports data compression for objects.

    • The Civetweb version has been upgraded to 1.8.

    • The Swift static website API is now supported (S3 support was added previously).

    • S3 bucket lifecycle API has been added. Note that currently it only supports object expiration.
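Since only expiration is currently supported, a lifecycle configuration submitted to RGW would be limited to rules like this standard S3 XML (the rule ID, prefix, and day count are placeholders):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-logs</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```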

    • Support for custom search filters has been added to the LDAP auth implementation.

    • Support for NFS version 3 has been added to the RGW NFS gateway.

    • A Python binding has been created for librgw.

  • RBD:

    • RBD now supports images stored in an erasure-coded RADOS pool using the new (experimental) overwrite support. Images must be created using the new rbd CLI "--data-pool <ec pool>" option to specify the EC pool where the backing data objects are stored. Attempting to create an image directly on an EC pool will not be successful, since the image's backing metadata is only supported on a replicated pool.
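Per the note above, image creation with an EC data pool would look like this (the pool and image names are placeholders):

```shell
# Image metadata lives in the replicated pool given in the image spec;
# the data objects land in the EC pool named by --data-pool.
rbd create --size 10G --data-pool ec_pool replicated_pool/myimage
```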

    • The rbd-mirror daemon now supports replicating dynamic image feature updates and image metadata key/value pairs from the primary image to the non-primary image.

    • The number of image snapshots can be optionally restricted to a configurable maximum.

    • The rbd Python API now supports asynchronous IO operations.

  • CephFS:

    • libcephfs function definitions have been changed to enable proper uid/gid control. The library version has been increased to reflect the interface change.

    • Standby replay MDS daemons now consume less memory on workloads doing deletions.

    • Scrub now repairs backtrace, and populates damage ls with discovered errors.

    • A new pg_files subcommand to cephfs-data-scan can identify files affected by a damaged or lost RADOS PG.
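Given the ID of a damaged or lost PG, the new subcommand can be invoked along these lines (the path and PG ID are placeholders):

```shell
# List files under the given CephFS path whose data falls in PG 2.4.
cephfs-data-scan pg_files /home/myuser 2.4
```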

    • The false-positive “failing to respond to cache pressure” warnings have been fixed.


For more details, see the full changelog.


Source: OSCHINA community (http://www.oschina.net). Please credit the source when reprinting.