Quick benchmark of gzip and GnuPG throughput

Testing gzip and GnuPG throughput for compressing and encrypting large files on the fly on an Intel Core i5-6200U @ 2.30 GHz.

gzip vs. pigz:

  • gzip --best : 23.8 MB/s
  • pigz -p 2 --best : 51 MB/s
  • pigz -p 4 --best : 60 MB/s
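
Figures like these can be approximated by timing a fixed-size stream; a minimal sketch (the 64 MiB /dev/zero input is an arbitrary stand-in — zeros compress unrealistically fast, so the absolute numbers will be optimistic, but it is fine for comparing tools on the same machine):

```shell
# Time gzip --best over a fixed-size input and derive a rough MB/s figure.
# /dev/zero is an arbitrary, easily reproducible input; real data differs.
SIZE_MB=64
start=$(date +%s)
dd if=/dev/zero bs=1M count="$SIZE_MB" 2>/dev/null | gzip --best > /dev/null
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # guard against sub-second runs
echo "~$((SIZE_MB / elapsed)) MB/s"
```

Swap `pigz -p 2 --best` or `pigz -p 4 --best` in place of `gzip --best` to reproduce the comparison above.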

gpg vs. gpg2:

  • gpg --encrypt : 19.5 MB/s
  • gpg2 --encrypt : 22.5 MB/s
  • gpg --encrypt --compress-level 0 : 56.5 MB/s
  • gpg2 --encrypt --compress-level 0 : 98.5 MB/s

Summary:

  • Gzip compression (with --best level) is the limiting factor: use pigz to parallelize and maximize compression throughput.
  • GnuPG compresses by default, which is redundant if you feed it already-compressed files: use pigz to parallelize and maximize throughput AND disable GnuPG's compression (--compress-level 0). Also, if you leave GnuPG's compression on, it may not parallelize it the way pigz does, hence the benefit of performing the compression separately with pigz.
  • gpg2 is almost twice as fast as gpg when performing AES256 encryption: use gpg2 to maximize throughput.
  • The overall best "combo" for compressing and encrypting large files on the fly might be: pigz --best | gpg2 --encrypt --compress-level 0
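
The combo above can be sketched as a runnable pipeline. This sketch uses symmetric encryption (`--symmetric` with a loopback passphrase) so it works without a keyring, and it assumes GnuPG 2.x for `--pinentry-mode`; the file names and passphrase are hypothetical, and a recipient-based `gpg2 --encrypt --recipient …` drops into the same position. `pigz` falls back to `gzip` if it is not installed:

```shell
#!/bin/sh
set -e
GZ=$(command -v pigz || command -v gzip)   # prefer pigz, fall back to gzip

printf 'payload\n' > /tmp/bench-in.txt    # hypothetical input file

# Compress in parallel, then encrypt with GnuPG's own compression disabled.
"$GZ" --best < /tmp/bench-in.txt \
  | gpg --batch --yes --pinentry-mode loopback --passphrase test \
        --symmetric --compress-level 0 -o /tmp/bench-in.gz.gpg

# Round-trip check: decrypt and decompress.
gpg --batch --quiet --pinentry-mode loopback --passphrase test \
    --decrypt /tmp/bench-in.gz.gpg | gunzip
```

Keeping compression in pigz and encryption in gpg2 lets each stage run on its own cores, which is where the pipeline's throughput gain comes from.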
