Testing Gzip and GnuPG throughput for compressing and encrypting large files on-the-fly on an Intel Core i5-6200U @ 2.30 GHz.
gzip vs. pigz:

- `gzip --best`: 23.8 MB/s
- `pigz -p 2 --best`: 51 MB/s
- `pigz -p 4 --best`: 60 MB/s
gpg vs. gpg2:

- `gpg --encrypt`: 19.5 MB/s
- `gpg2 --encrypt`: 22.5 MB/s
- `gpg --encrypt --compress-level 0`: 56.5 MB/s
- `gpg2 --encrypt --compress-level 0`: 98.5 MB/s
Summary:

- Gzip compression (at `--best` level) is the limiting factor: use `pigz` to parallelize compression and maximize throughput.
- GnuPG compresses by default, which is redundant if you feed it already-compressed data: compress with `pigz` and disable GnuPG's compression (`--compress-level 0`). Moreover, GnuPG cannot parallelize its built-in compression the way `pigz` does, which is another reason to perform the compression separately with `pigz`.
- `gpg2` is almost twice as fast as `gpg` when performing AES256 encryption: use `gpg2` to maximize throughput.
- The overall best "combo" for compressing and encrypting large files on-the-fly might be: `pigz --best | gpg2 --encrypt --compress-level 0`