Testing gzip and GnuPG throughput for compressing and encrypting large files on-the-fly, on an Intel Core i5-6200U @ 2.30 GHz.
gzip vs. pigz:
gzip --best: 23.8 MB/s
pigz -p 2 --best: 51 MB/s
pigz -p 4 --best: 60 MB/s
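Numbers like these can be reproduced by timing the compressor over a fixed amount of input. A minimal sketch, not necessarily how the figures above were taken (note that /dev/urandom yields incompressible data, so measure a representative file for realistic results):

```shell
# Time gzip over a fixed amount of input and report MB/s.
# Swap in "pigz -p 2 --best" or "pigz -p 4 --best" for the parallel figures.
mb=64
start=$(date +%s)
dd if=/dev/urandom bs=1M count=$mb 2>/dev/null | gzip --best > /dev/null
end=$(date +%s)
elapsed=$(( end - start ))
[ "$elapsed" -eq 0 ] && elapsed=1   # guard against sub-second runs
rate=$(( mb / elapsed ))
echo "$rate MB/s"
```

Writing to /dev/null keeps disk I/O out of the measurement, so only compressor speed is timed.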
gpg vs. gpg2:
gpg --encrypt: 19.5 MB/s
gpg2 --encrypt: 22.5 MB/s
gpg --encrypt --compress-level 0: 56.5 MB/s
gpg2 --encrypt --compress-level 0: 98.5 MB/s
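The encryption stage can be timed the same way. A sketch under assumptions not in the original: it uses symmetric mode (--symmetric) so no key pair is needed, whereas the figures above are for public-key --encrypt, so treat this only as a measurement template:

```shell
# Time gpg's encryption stage alone (compression disabled) and report MB/s.
# --batch/--pinentry-mode loopback let gpg take the passphrase non-interactively;
# --yes allows writing to the existing /dev/null.
gmb=16
gstart=$(date +%s)
dd if=/dev/zero bs=1M count=$gmb 2>/dev/null \
  | gpg --batch --yes --symmetric --pinentry-mode loopback \
        --passphrase demo --compress-level 0 -o /dev/null
gend=$(date +%s)
gelapsed=$(( gend - gstart ))
[ "$gelapsed" -eq 0 ] && gelapsed=1   # guard against sub-second runs
grate=$(( gmb / gelapsed ))
echo "$grate MB/s"
```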
Summary:
- Gzip compression (with --best) is the limiting factor: use pigz to parallelize it and maximize compression throughput.
- GnuPG compresses by default, which is redundant if you feed it already-compressed files: disable GnuPG's compression (--compress-level 0). Moreover, GnuPG may not parallelize its internal compression the way pigz does, which is another reason to compress separately with pigz.
- gpg2 is almost twice as fast as gpg at AES-256 encryption: use gpg2 to maximize throughput.
- The overall best "combo" for compressing and encrypting large files on-the-fly might be:
pigz --best | gpg2 --encrypt --compress-level 0
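As a usage sketch, the combo can be wired up end to end as follows. Everything here is illustrative and not from the original: the input file demo.bin is generated on the spot, symmetric encryption stands in for --encrypt --recipient KEYID so no key pair is required, and the demo skips itself if pigz or gpg2 is missing:

```shell
# Full pipeline demo: parallel compression, then gpg2 encryption with its
# internal compression disabled. Skips gracefully if tools are missing.
if command -v pigz >/dev/null 2>&1 && command -v gpg2 >/dev/null 2>&1; then
  dd if=/dev/urandom of=demo.bin bs=1M count=4 2>/dev/null   # stand-in input
  pigz -p 4 --best < demo.bin \
    | gpg2 --batch --yes --symmetric --pinentry-mode loopback \
           --passphrase demo --compress-level 0 \
    > demo.bin.gz.gpg
  echo "wrote demo.bin.gz.gpg"
else
  echo "pigz or gpg2 not installed; skipping demo"
fi
```

To restore, reverse the pipeline: decrypt with gpg2, then pipe through gunzip, which reads pigz output fine since pigz emits standard gzip streams.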