Changelog for libgfapi0-4.1.7-100.3.i586.rpm:
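libgfapi is the userspace client library this package ships: applications link against it to talk to a GlusterFS volume directly, without going through a FUSE mount. As a minimal sketch of the basic call sequence (the volume name "testvol" and the host "gluster-server" are placeholders, and error handling is abbreviated):

/* Minimal libgfapi sketch: connect to a volume and write one file.
 * Build: gcc demo.c -lgfapi
 * "testvol" and "gluster-server" below are placeholder values. */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");          /* volume name */
    if (!fs)
        return 1;

    /* Ask this server for the volume file (24007 is the management port). */
    glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);

    if (glfs_init(fs) != 0) {                  /* connect and build the client graph */
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_WRONLY, 0644);
    if (fd) {
        const char msg[] = "hello from gfapi\n";
        glfs_write(fd, msg, strlen(msg), 0);   /* last argument is flags */
        glfs_close(fd);
    }

    glfs_fini(fs);                             /* tear down the connection */
    return 0;
}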

* Thu Jan 17 2019 kkeithle at redhat.com- GlusterFS 4.1.7 GA
* Wed Nov 14 2018 kkeithle at redhat.com- GlusterFS 4.1.6 GA
* Wed Sep 19 2018 kkeithle at redhat.com- GlusterFS 4.1.5 GA
* Thu Sep 06 2018 kkeithle at redhat.com- GlusterFS 4.1.4 GA
* Tue Sep 04 2018 kkeithle at redhat.com- GlusterFS 4.1.3 create /var/run/gluster/metrics
* Mon Aug 27 2018 kkeithle at redhat.com- GlusterFS 4.1.3 GA
* Wed Jul 25 2018 kkeithle at redhat.com- GlusterFS 4.1.2 GA
* Fri Jun 29 2018 kkeithle at redhat.com- GlusterFS 4.1.1 GA
* Wed Jun 13 2018 kkeithle at redhat.com- GlusterFS 4.1.0 GA
* Thu Apr 26 2018 kkeithle at redhat.com- GlusterFS 4.0.2 GA
* Wed Mar 21 2018 kkeithle at redhat.com- GlusterFS 4.0.1 GA
* Mon Mar 12 2018 kkeithle at redhat.com- GlusterFS 4.0.0 GA (v4.0.0-2 respin)
* Tue Mar 06 2018 kkeithle at redhat.com- GlusterFS 4.0.0 GA
* Sat Jan 20 2018 kkeithle at redhat.com- GlusterFS 3.13.2 GA
* Thu Dec 21 2017 kkeithle at redhat.com- GlusterFS 3.13.1 GA
* Sat Dec 02 2017 kkeithle at redhat.com- GlusterFS 3.13.0 GA
* Fri Dec 01 2017 kkeithle at redhat.com- GlusterFS 3.12.3, python-requests, w/o python-jwt
* Mon Nov 13 2017 kkeithle at redhat.com- GlusterFS 3.12.3 GA
* Fri Oct 13 2017 kkeithle at redhat.com- GlusterFS 3.12.2 GA
* Fri Sep 29 2017 kkeithle at redhat.com- GlusterFS 3.12.1 w/ BZ 1495858
* Mon Sep 11 2017 kkeithle at redhat.com- GlusterFS 3.12.1 GA
* Wed Aug 30 2017 kkeithle at redhat.com- GlusterFS 3.12.0 GA
* Mon Aug 21 2017 kkeithle at redhat.com- GlusterFS 3.11.3 GA
* Fri Jul 21 2017 kkeithle at redhat.com- GlusterFS 3.11.2 GA
* Wed Jun 28 2017 kkeithle at redhat.com- GlusterFS 3.11.1 GA
* Tue May 30 2017 kkeithle at redhat.com- GlusterFS 3.11.0 GA
* Mon May 15 2017 kkeithle at redhat.com- GlusterFS 3.10.2 GA
* Fri Mar 31 2017 kkeithle at redhat.com- GlusterFS 3.10.1 GA
* Fri Feb 24 2017 kkeithle at redhat.com- GlusterFS 3.10.0 GA
* Wed Feb 22 2017 kkeithle at redhat.com- GlusterFS 3.10.0 RC1
* Wed Feb 08 2017 kkeithle at redhat.com- GlusterFS 3.10.0 RC0
* Wed Jan 18 2017 kkeithle at redhat.com- GlusterFS 3.9.1 GA
* Wed Nov 16 2016 kkeithle at redhat.com- GlusterFS 3.9.0 GA
* Thu Oct 20 2016 kkeithle at redhat.com- GlusterFS 3.8.5 GA
* Mon Aug 22 2016 kkeithle at redhat.com- GlusterFS 3.8.3 GA
* Wed Aug 10 2016 kkeithle at redhat.com- GlusterFS 3.8.2 GA
* Mon Jul 11 2016 kkeithle at redhat.com- GlusterFS 3.8.1 GA
* Thu Jun 16 2016 kkeithle at redhat.com- GlusterFS 3.8.0 GA
* Mon Apr 18 2016 kkeithle at redhat.com- GlusterFS 3.7.11 GA
* Mon Mar 21 2016 kkeithle at redhat.com- GlusterFS 3.7.9 GA
* Fri Feb 12 2016 kkeithle at redhat.com- GlusterFS 3.7.8 GA
* Mon Nov 09 2015 kkeithle at redhat.com- GlusterFS 3.7.6 GA
* Wed Oct 07 2015 kkeithle at redhat.com- GlusterFS 3.7.5 GA
* Fri Feb 27 2015 jengelh at inai.de- CVE-2014-3619: add multifrag.diff [bnc#919879]
* Mon Aug 04 2014 scorot at free.fr- Update to new upstream release 3.5.2
* NFS server crashes in _socket_read_vectored_request
* Can't write to quota enable folder
* nfs: reset command does not alter the result for nfs options earlier set
* features/gfid-access: stat on .gfid virtual directory returns EINVAL
* creating symlinks generates errors on stripe volume
* Self-heal errors with "afr crawl failed for child 0 with ret -1" while performing rolling upgrade
* [AFR] I/O fails when one of the replica nodes goes down
* Fix inode leaks in gfid-access xlator
* NFS subdir authentication doesn't correctly handle multi-(homed,protocol,etc) network addresses
* nfs-utils should be installed as a dependency while installing glusterfs-server
* Excessive logging in quotad.log of the kind 'null client'
* client_t clienttable cliententries are never expanded when all entries are used
* AFR : self-heal of a few files not happening when an AWS EC2 instance is back online after a restart
* Dist-geo-rep : deletion of files on master, geo-rep fails to propagate to slaves.
* Allow the usage of the wildcard character '*' in the options "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject"
* glfsheal: Improve the way in which we check the presence of replica volumes
* Resource cleanup doesn't happen for clients on servers after disconnect
* mounting a volume over NFS (TCP) with MOUNT over UDP fails
* backport \'gluster volume status --xml\' issues
* Glustershd memory usage too high
* Tue Jul 29 2014 scorot at free.fr- Update to new upstream release 3.5.1
* A new volume option server.manage-gids has been added. This option should be used when users of a volume are in more than approximately 93 groups (Bug 1096425).
* Duplicate Request Cache for NFS has now been disabled by default, this may reduce performance for certain workloads, but improves the overall stability and memory footprint for most users.
* Other changes are mostly bug fixes.
* Disable systemd pre- and post-install scriptlets for the old product, then fix the build on SLE 11
* Mon May 05 2014 jengelh at inai.de- Update to new upstream release 3.5.0
* AFR_CLI_enhancements: Improved logging with more clarity and statistical information. It allows visibility into why a self-heal process was initiated and which files are affected, for example. Prior to this enhancement, clearly identifying split-brain issues from the logs was often difficult, and there was no facility to identify which files were affected by a split brain issue automatically. Remediating split brain without quorum will still require some manual effort, but with the tools provided, this will become much simpler.
* Exposing Volume Capabilities: Provides client-side insight into whether a volume is using the BD translator and, if so, which capabilities are being utilized.
* File Snapshot: Provides a mechanism for snapshotting individual files. The most prevalent use case for this feature will be to snapshot running VMs, allowing for point-in-time capture. This also allows a mechanism to revert VMs to a previous state directly from Gluster, without needing to use external tools.
* GFID Access: A new method for accessing data directly by GFID. With it, the changelog translator, which logs GFIDs internally, can consume data directly and very efficiently (see the first sketch after this list).
* On-Wire Compression + Decompression: Use of this feature reduces the overall network overhead for Gluster operations from a client.
* Prevent NFS restart on Volume change (Part 1): Previously, any volume change (volume option, volume start, volume stop, volume delete, brick add, etc.) would restart the NFS server, which led to service disruptions. This feature allows modifying certain NFS-based volume options without such interruptions occurring. Part 1 covers anything not requiring a graph change.
* Quota Scalability: Massively increases the number of quota configurations from a few hundred to 65536 per volume.
* readdir_ahead: Gluster now provides read-ahead support for directories to improve sequential directory read performance.
* zerofill: Enhancement to allow zeroing out of VM disk images, which is useful in first-time provisioning or for overwriting an existing disk (see the second sketch after this list).
* Brick Failure Detection: Detecting failures on the filesystem that a brick uses makes it possible to handle errors that are caused from outside of the Gluster environment.
* Disk encryption: Implement the previous work done in HekaFS into Gluster. This allows a volume (or per-tenant part of a volume) to be encrypted “at rest” on the server using keys only available on the client. [Note: Only content of regular files is encrypted. File names are not encrypted! Also, encryption does not work in NFS mounts.]
* Geo-Replication Enhancement: Previously, the geo-replication process, gsyncd, was a single point of failure: it ran on only one node in the cluster, so if that node failed, geo-replication was offline until the issue was addressed. This release goes further by foregoing xattrs for identifying change candidates and instead consuming the volume changelog directly, which improves performance in two ways: first, a running list is kept of only those files that may need to be synced; second, the changelog is maintained in memory, giving the gsync daemon near-instant access to what data needs to change and where.
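Regarding the GFID Access item above: the feature is commonly consumed through the aux-gfid-mount mount option, which exposes a virtual .gfid/ directory on the mount point. A hypothetical sketch, assuming a volume FUSE-mounted at /mnt/gluster with that option enabled; the mount path and GFID below are made up:

/* Hypothetical sketch: read a file by GFID through the virtual .gfid/
 * directory exposed by the gfid-access translator (aux-gfid-mount).
 * The mount point and the GFID are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Files become addressable as <mountpoint>/.gfid/<canonical-uuid>. */
    const char *path = "/mnt/gluster/.gfid/9f3c8a2e-1d4b-4c6f-b2aa-0123456789ab";

    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open by gfid");
        return 1;
    }

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}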
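And for the zerofill item: the new file operation is also reachable from applications through libgfapi's glfs_zerofill() call, added alongside this feature. A minimal sketch, assuming the same placeholder volume and server names as in the first example, with error handling abbreviated:

/* Sketch: zero out a region of a VM image on a Gluster volume via gfapi.
 * Assumes the glfs_zerofill() call shipped with the zerofill feature (3.5+).
 * "testvol", "gluster-server", and the image path are placeholders. */
#include <glusterfs/api/glfs.h>
#include <sys/types.h>
#include <fcntl.h>

int zero_image_region(const char *path, off_t offset, off_t length)
{
    glfs_t *fs = glfs_new("testvol");
    glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);
    if (glfs_init(fs) != 0)
        return -1;

    glfs_fd_t *fd = glfs_open(fs, path, O_RDWR);

    /* Writes zeroes over [offset, offset+length) without allocating
     * a zero-filled buffer in the application. */
    int ret = fd ? glfs_zerofill(fd, offset, length) : -1;

    if (fd)
        glfs_close(fd);
    glfs_fini(fs);
    return ret;
}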
* Thu Feb 28 2013 jengelh at inai.de- Update to new upstream release 3.4.0alpha (rpm: 3.4.0~qa9)
* automake-1.13 support
* Enable AIO support
* Tue Nov 27 2012 jengelh at inai.de- Use `glusterd -N` in glusterd.service to run in foreground as required
* Tue Nov 27 2012 cfarrell at suse.com- license update: GPL-2.0 or LGPL-3.0+
* Fri Nov 09 2012 jengelh at inai.de- Update to new upstream release 3.4.0qa2
* No changelog provided by upstream
* Remove glusterfs-init.diff, merged upstream
* Provide systemd service file
* Wed Oct 31 2012 jengelh at inai.de- Update to new upstream release 3.3.1
* mount.glusterfs: Add support for {attribute,entry}-timeout options
* cli: Proper xml output for "gluster peer status"
* self-heald: Fix inode leak
* storage/posix: implement native Linux AIO support (see the sketch below)
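The native AIO support mentioned above builds on the kernel's io_submit(2) interface, accessed through libaio. The following is a generic illustration of that mechanism, not Gluster's actual code; the file path is a placeholder, and real deployments would typically pair this with O_DIRECT for true asynchrony:

/* Illustration of the Linux-native AIO interface (libaio) that the
 * storage/posix translator builds on; not Gluster's actual code.
 * Build: gcc aio_demo.c -laio */
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;                      /* must start zeroed */
    if (io_setup(8, &ctx) != 0)                /* create a kernel AIO context */
        return 1;

    int fd = open("/tmp/aio-demo.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    char buf[] = "async write via io_submit\n";
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, fd, buf, strlen(buf), 0);

    if (io_submit(ctx, 1, cbs) != 1)           /* queue the write */
        return 1;

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);        /* wait for completion */
    printf("wrote %ld bytes\n", (long)ev.res);

    close(fd);
    io_destroy(ctx);
    return 0;
}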
* Mon Sep 24 2012 jengelh at inai.de- Update to new upstream release 3.3.0
* New: Unified File & Object access
* New: Hadoop hooks - HDFS compatibility layer
* New volume type: Repstr - replicated + striped (+ distributed) volumes
* Fri Dec 02 2011 coolo at suse.com- add automake as buildrequire to avoid implicit dependency
* Wed Oct 05 2011 jengelh at medozas.de- Initial package for build.opensuse.org