
How To Pre-Compress Files Generated By Hugo And Serve Them With Nginx


Instead of merely minifying HTML, CSS and JS files, it is much more effective, in terms of file size, to compress them with GZip or similar. The main advantage of pre-compression is that it saves CPU time compared to on-the-fly compression. It is also safe to use with HTTPS, unlike on-the-fly HTTP compression, which is currently vulnerable to the BREACH attack. And since the compressed files are static, there is no problem using ETags with them, whereas with on-the-fly compression the checksum can change with server configuration even if the resource stays the same.

Let’s get to work.

GZipping The Files

A simple way to compress all files of a certain type in a given directory is to add the following to your deploy script:

find public/ -type f \( -name '*.html' -o -name '*.js' -o -name '*.css' -o -name '*.xml' -o -name '*.svg' \) -exec gzip -v -k -f --best {} \;

The above line finds every file with one of the listed extensions and runs GZip on each of them. The GZip options are:

-v      verbose output
-k      keep (do not delete) the original file
-f      force overwriting of an existing .gz file
--best  best, and slowest, compression (level 9)

The {} is replaced with the filename each time GZip is executed. Next we need to tell the server to use those files instead of the uncompressed ones.
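Before moving on to the server, here is a quick sanity check of those flags on a throwaway file; the /tmp path below is just for illustration:

```shell
# Create a small, highly repetitive sample file and compress it the same
# way the deploy script does: -k keeps the original, -f overwrites any
# stale .gz, --best picks maximum compression.
mkdir -p /tmp/gzip-demo
printf 'hello hello hello hello hello\n' > /tmp/gzip-demo/index.html
gzip -v -k -f --best /tmp/gzip-demo/index.html

# Both the original and the compressed copy now sit side by side,
# which is exactly the layout gzip_static expects.
ls /tmp/gzip-demo
```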

Nginx Settings

On Ubuntu most of the nginx packages come with ngx_http_gzip_static_module installed, which is great because otherwise we would need to build nginx from the source. Now all we need to do is enable the module in the server configuration and optionally set some additional headers to fine tune file caching.
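To confirm the module really is compiled in, you can check nginx's build flags first; the check is guarded so it is a no-op on a machine without nginx on the PATH:

```shell
# Print nginx's configure arguments and pick out the static gzip module.
# Prints nothing if nginx is missing or was built without the module.
if command -v nginx >/dev/null 2>&1; then
    nginx -V 2>&1 | grep -o 'with-http_gzip_static_module' || true
fi
```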

# Enable ngx_http_gzip_static_module
gzip_static on;

# Allow proxies to cache files
add_header  Cache-Control public;

# Set cache expiry based on the file extension.
# expires sets both Expires and Cache-Control: max-age headers
location ~ \.(html)$ {
    expires   7d;
}

# Max is around until the year 2038
location ~ \.(ico|gif|jpe?g|png|svg|js|css)$ {
    expires   max;
}

# open_file_cache caches file descriptors, file metadata, the existence
# of files and lookup errors. It doesn't cache actual files.
# These have nothing to do with compression and they probably won't
# affect performance as much as one would like, but they can be somewhat
# useful.
# Note that with these exact settings it can take up to 5 minutes for
# nginx to notice changes in the files.

# Cache at most 1000 elements and remove those from the cache that
# haven't been accessed in 5 minutes
open_file_cache          max=1000 inactive=5m;

# How long the cached file information is valid, i.e. how often to
# revalidate it
open_file_cache_valid    5m;

# How many times a file must be accessed during the inactive period
# defined above for its descriptor to stay open in the cache
open_file_cache_min_uses 1;

# Cache file lookup errors, so missing files won't slow us down
open_file_cache_errors   on;

Reload the service for changes to take effect.

$ sudo service nginx reload
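A quick way to verify the result is to request a page with curl and look at the response headers. The localhost URL here is just an example, and the command exits quietly if nothing is listening:

```shell
# -s silences the progress bar, -I asks for headers only. The
# Accept-Encoding header tells nginx we can handle gzipped responses.
if command -v curl >/dev/null 2>&1; then
    curl -sI -H 'Accept-Encoding: gzip' http://localhost/index.html || true
fi
```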

After reloading the configuration, these are the headers that nginx sends for HTML files:

HTTP/1.1 200 OK
Cache-Control: max-age=604800
Cache-Control: public
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 2883
Content-Type: text/html
Date: Fri, 27 May 2016 15:38:27 GMT
Expires: Fri, 03 Jun 2016 15:38:27 GMT
Last-Modified: Fri, 27 May 2016 04:45:34 GMT
Server: nginx/1.10.0 (Ubuntu)

And the headers for SVG files:

HTTP/1.1 200 OK
Cache-Control: max-age=315360000
Cache-Control: public
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 477
Content-Type: image/svg+xml
Date: Fri, 27 May 2016 15:38:27 GMT
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Last-Modified: Wed, 25 May 2016 05:44:37 GMT
Server: nginx/1.10.0 (Ubuntu)

So it looks like it’s working!

Even Better Compression With Zopfli?

Google’s Zopfli compression tool has been around for some time now, and I began to wonder how much better compression I would get if I used that instead of GZip. Let’s see!

$ sudo apt-get install zopfli

See zopfli -h for instructions.

I replaced GZip with Zopfli in my deployment script and tried different iteration counts, 15 being the default. The more iterations, the better the compression, up to a point, of course.
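For reference, the swapped-in line looked roughly like this; it is guarded here so it does nothing when zopfli or the public/ directory is missing:

```shell
# Same find invocation as before, but compressing with Zopfli.
# --i15 runs 15 iterations (the default); Zopfli keeps the original
# file and writes file.gz alongside it.
if command -v zopfli >/dev/null 2>&1 && [ -d public ]; then
    find public/ -type f \( -name '*.html' -o -name '*.js' -o -name '*.css' \
        -o -name '*.xml' -o -name '*.svg' \) -exec zopfli --i15 {} \;
fi
```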

Here are the results. File size is in bytes. The sample is ridiculously small but I’m not doing any science here.

File          Original   zopfli --i15   zopfli --i50   zopfli --i500   gzip --best
index.html       15896           2334           2334            2334          2393
index.xml       129670          14792          14787           14777         15478
sitemap.xml       1411            455            454             454           473

Talk about diminishing returns… It looks like the default 15 iterations is close to the Goldilocks zone.

The files are smaller, but considering how much slower Zopfli is I feel the difference is not worth the time. Maybe if you have a really busy site with lots of static files those small savings add up, but that is not the case for me.

It is obvious, however, that compression has a major effect on file size. After gzipping, index.html is just 15% of its original size and index.xml merely 12%. sitemap.xml doesn't compress as well, but shaving 66% off the original size is still a lot.
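Those percentages follow directly from the gzip --best column of the table; here is a quick check of the arithmetic:

```shell
# Compressed size as a percentage of the original, in bytes, using the
# gzip --best figures from the table above.
ratio() { awk -v c="$1" -v o="$2" 'BEGIN { printf "%.1f%%\n", 100 * c / o }'; }
ratio 2393 15896    # index.html  -> 15.1%
ratio 15478 129670  # index.xml   -> 11.9%
ratio 473 1411      # sitemap.xml -> 33.5%
```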

The Effect Of Minification On Compression

Just out of curiosity I tested how minification affects the outcome. Can we make the compressed files even smaller?

In this case the original size is the size of the minified file.

File          Original   zopfli --i15   zopfli --i50   zopfli --i500   gzip --best
index.html       13580           2141           2140            2139          2198
index.xml       127683          14517          14515           14505         15201
sitemap.xml       1248            441            440             439           460

Well, it seems minification does help, and maybe even enough for it to be meaningful.

Even More Better Compression With Brotli?

Just after writing this I found out there has been a new player in town since September of 2015: Brotli. That’s what you get for not subscribing to Compression Algorithms Monthly, I guess.

Brotli is Google’s new compression algorithm, named after Brötli, a type of Swiss pastry. Is it any better than Zopfli or GZip? Oh, yes. Significantly better. Let me show you:

File          Original   brotli q11   brotli q11 i10   min brotli q11
index.html       15896         1850             1850             1759
index.xml       130214        12446            12446            12225
sitemap.xml       1411          388              388              366

The index.xml is a bit larger than in previous tests because I had edited a post after running those, but it doesn’t matter because we’re interested in relative sizes. In any case, Brotli still creates a smaller file than either Zopfli or GZip.

The second-to-last column shows the results after ten iterations, which don't seem to have any effect on file size. The last column shows the file size after minification and compression.

Unfortunately, not all browsers support Brotli compression, and the nginx packages available for Ubuntu do not contain a module for serving Brotli files. If you want to serve Brotli you need to build nginx from source and add one of the Brotli modules manually. Not that difficult a task, but I find it easier to keep software up to date when all I have to do is run apt-get upgrade.
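For the record, if you do build nginx with the third-party ngx_brotli module, serving pre-compressed .br files takes only one extra directive. This is just a sketch of what that configuration would look like:

```nginx
# Serve a precompressed file.br when the client sends
# Accept-Encoding: br (requires the ngx_brotli module).
brotli_static on;
```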

So for the time being I’m settling for GZip.

Final Results

Here are the results in percentages for easier comparison. Each figure is the compressed file's size relative to the original file, unminified / minified.

File          Brotli            Zopfli            GZip
index.html    11.64% / 11.07%   14.68% / 13.46%   15.05% / 13.83%
index.xml      9.56% /  9.39%   11.40% / 11.19%   11.94% / 11.72%
sitemap.xml   27.50% / 25.94%   32.18% / 31.11%   33.52% / 32.60%

For my purposes, Brotli is the clear winner here. It’s not as fast as GZip but it’s fast enough, and the size difference is significant. Now I just have to wait until I can actually use it without jumping through hoops.

Current Hugo Deployment Scripts

To recap, here are my current deployment scripts for Hugo:

hugo -s ~/sites/ -d ~/sites/
# This here is waiting for the time when minify starts working properly
# minify -r -v --match=\.*ml ~/sites/
find ~/sites/ -type f \( -name '*.html' -o -name '*.js' -o -name '*.css' -o -name '*.xml' -o -name '*.svg' \) -exec gzip -v -k -f --best {} \;

while :; do
  find ~/sites/ -type f | entr -d ~/sites/
done

