Testing gzip and GnuPG throughput for compressing and encrypting large files on the fly on an Intel Core i5-6200U @ 2.30 GHz.
gzip --best: 23.8 MB/s
pigz -p 2 --best: 51 MB/s
pigz -p 4 --best: 60 MB/s
gpg --encrypt: 19.5 MB/s
gpg2 --encrypt: 22.5 MB/s
gpg --encrypt --compress-level 0: 56.5 MB/s
gpg2 --encrypt --compress-level 0: 98.5 MB/s
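Figures like these can be reproduced with a rough sketch along the following lines (the test file name is a placeholder; GNU dd prints the transfer rate on stderr when it finishes):

```shell
# create ~512 MiB of test data (zeroes compress unrealistically well;
# use a representative file for meaningful numbers)
head -c 512M /dev/zero > testfile

# run the compressor and let dd report the throughput on stderr
dd if=testfile bs=1M | gzip --best > /dev/null
```

Swapping `gzip --best` for `pigz -p 4 --best` or `gpg2 --encrypt ... > /dev/null` in the last line gives the other rows of the table.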
- Gzip compression (with --best) is the limiting factor: use pigz to parallelize and maximize compression throughput.
- GnuPG compresses by default, which is redundant if you feed it already-compressed data: use pigz to parallelize and maximize throughput, AND disable GnuPG's compression (--compress-level 0). If you leave GnuPG's compression on, it may not be able to parallelize it the way pigz does, hence the benefit of compressing separately with pigz.
- gpg2 is almost twice as fast as gpg when performing AES256 encryption: use gpg2 to maximize throughput.
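The AES256 figure assumes that cipher is actually in use; gpg's effective cipher otherwise depends on the recipient key's preferences, so it can be pinned explicitly. A minimal sketch, using symmetric mode so no keyring is needed (file names are placeholders):

```shell
# force AES256 and disable GnuPG's own compression;
# symmetric mode avoids needing a recipient key
gpg2 --symmetric --cipher-algo AES256 --compress-level 0 \
     --output data.tar.gpg data.tar
```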
- The overall best "combo" for compressing and encrypting large files on the fly might be:
pigz --best | gpg2 --encrypt --compress-level 0
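Spelled out end to end, with the restore step (the recipient address and file names are illustrative placeholders, not from the benchmark):

```shell
# compress on all cores, then encrypt without re-compressing;
# "alice@example.com" and the file names are placeholders
pigz --best --stdout backup.tar \
  | gpg2 --encrypt --recipient alice@example.com --compress-level 0 \
         --output backup.tar.gz.gpg

# to restore: decrypt, then decompress
gpg2 --decrypt backup.tar.gz.gpg | pigz -d > backup.tar
```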