GlusterFS 3.8 has been released. Gluster is a clustered file system that scales to petabytes of data; GlusterFS aggregates storage distributed across servers into a single large parallel network file system over RDMA or TCP/IP.
Highlights of this release include:
- containers, with the inclusion of Heketi
- protocol improvements with NFS-Ganesha
Automatic conflict resolution, self-healing improvements (Facebook) – replication
Synchronous replication receives a major boost from features contributed by Facebook. Multi-threaded self-healing makes heals complete faster than before, and automatic conflict resolution ensures that conflicts caused by network partitions are handled without administrative intervention.
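Assuming a replicated volume named gv0 (a hypothetical name), these behaviors are tuned through volume options along the following lines; exact option names and defaults should be checked against the installed build:

```shell
# Multi-threaded self-heal: raise the number of self-heal daemon
# threads and the length of the per-thread wait queue.
gluster volume set gv0 cluster.shd-max-threads 4
gluster volume set gv0 cluster.shd-wait-qlength 2048

# Automatic conflict resolution: choose a policy for resolving
# split-brain files without manual intervention (here, prefer the
# copy with the most recent modification time).
gluster volume set gv0 cluster.favorite-child-policy mtime

# Inspect files still awaiting heal.
gluster volume heal gv0 info
```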
NFSv4.1 (Ganesha) – protocol
Gluster’s native NFSv3 server is disabled by default in this release. Gluster’s integration with NFS-Ganesha provides NFSv3, v4, and v4.1 access to data stored in Gluster volumes.
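As a sketch, exporting a volume through NFS-Ganesha might look roughly like this, assuming a hypothetical volume gv0 and that the ganesha-ha shared-storage setup is already in place:

```shell
# Enable the cluster-wide NFS-Ganesha service (assumes the
# ganesha-ha configuration has been prepared beforehand).
gluster nfs-ganesha enable

# Export an individual volume through Ganesha.
gluster volume set gv0 ganesha.enable on

# Clients can then mount over NFSv4.1, e.g.:
# mount -t nfs -o vers=4.1 server:/gv0 /mnt/gv0
```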
BareOS – backup / data protection
Gluster 3.8 is ready for integration with BareOS 16.2, which leverages glusterfind to intelligently back up objects stored in a Gluster volume.
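glusterfind works on named sessions that track changes between backup runs. A minimal sketch, assuming a volume gv0 and a session name chosen for illustration:

```shell
# Create a change-tracking session for the volume.
glusterfind create backup-session gv0

# List files changed since the last run; the paths are written to
# the output file, which a backup tool such as BareOS can consume.
glusterfind pre backup-session gv0 /tmp/changed-files.txt

# After the backup completes, advance the session checkpoint.
glusterfind post backup-session gv0
```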
“Next generation” tiering and sharding – VM images
Sharding is now stable for VM image storage. Geo-replication has been enhanced to integrate with sharding for offsite backup and disaster recovery of VM images. Self-healing and data tiering with sharding make it an excellent candidate for hyperconverged virtual machine image storage.
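Sharding is enabled per volume. A sketch, again with a hypothetical volume gv0 (the block size shown is a common choice for VM image workloads, not a mandated value):

```shell
# Split large files (VM images) into fixed-size shards so that
# self-heal and geo-replication operate on individual shards
# rather than whole multi-gigabyte images.
gluster volume set gv0 features.shard on
gluster volume set gv0 features.shard-block-size 64MB
```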
block device & iSCSI with LIO – containers
File-backed block devices are usable from Gluster through iSCSI. This release of Gluster integrates with tcmu-runner [https://github.com/agrover/tcmu-runner] to access block devices natively through libgfapi.
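A rough sketch of wiring this up with targetcli, assuming tcmu-runner's glfs handler is installed; the backstore name, IQN, and cfgstring format (volume@host/file here) are assumptions to verify against the tcmu-runner documentation:

```shell
# Create a user-space backstore backed by a file on the Gluster
# volume gv0, served by tcmu-runner's glfs handler over libgfapi.
targetcli /backstores/user:glfs create disk1 10G gv0@127.0.0.1/disk1.img

# Export it as an iSCSI target (IQN chosen for illustration).
targetcli /iscsi create iqn.2016-06.org.gluster:disk1
targetcli /iscsi/iqn.2016-06.org.gluster:disk1/tpg1/luns create /backstores/user:glfs/disk1
```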
Heketi – containers, dynamic provisioning
Heketi provides the ability to dynamically provision Gluster volumes without administrative intervention. Heketi can manage multiple Gluster clusters and will be the cornerstone for integration with container and storage-as-a-service management ecosystems.
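For illustration, a typical heketi-cli interaction might look like the following; the server URL, topology file name, and sizes are placeholders:

```shell
# Point heketi-cli at the Heketi service.
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# Load the cluster topology (nodes and devices) from a JSON file.
heketi-cli topology load --json=topology.json

# Dynamically provision a 10 GB replica-3 volume; Heketi selects
# the bricks itself and returns the volume's mount information.
heketi-cli volume create --size=10 --durability=replicate --replica=3
```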
glusterfs-coreutils (Facebook) – containers
Native coreutils for Gluster, developed by Facebook, that use libgfapi to interact with Gluster volumes. Useful for systems and containers that do not have FUSE.
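The utilities address Gluster volumes with glfs:// URLs over libgfapi, so no FUSE mount is needed. For example (host and volume names are placeholders):

```shell
# Copy a local file into the volume and read it back, all via
# libgfapi -- no FUSE mount required.
gfcp ./report.txt glfs://server1/gv0/backups/report.txt
gfcat glfs://server1/gv0/backups/report.txt

# List directory contents on the volume.
gfls glfs://server1/gv0/backups
```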