I have been doing some reading on key-value databases (schema-free/NoSQL), and am particularly taken with redis, which has enjoyed some limelight of late.

Many people have been advocating it as an alternative to memcached as a web-app caching system, so I thought I'd dip my toe in the ocean of caching with redis.

Note that I don't actually need to cache anything; it's just that the performance angle appeals to me.

After moving from the standard apt-get install nginx to a from-source version of Nginx (compiled with the Nginx redis module, of course), I was able to do a few basic tests with ab (ApacheBench).
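For anyone curious, the Nginx side is only a handful of lines. This is a sketch assuming the ngx_http_redis module (other redis modules use different directives), and the backend address is made up for illustration:

```
# Serve pages straight out of redis, keyed by request URI.
location / {
    set $redis_key "$uri";        # key to look up in redis
    redis_pass 127.0.0.1:6379;    # local redis instance
    default_type text/html;
    error_page 404 = @fallback;   # cache miss -> fall through
}

# On a miss, hand the request to the real backend (hypothetical address).
location @fallback {
    proxy_pass http://127.0.0.1:8080;
}
```

The nice part is the transparency: the client never knows whether the page came out of redis or the backend.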

Using the following command:

ab -n 1000 -c 50 localhost/

I get some results:

  • Without redis caching: 2,864 requests per second (mean)
  • With redis caching: 3,354 requests per second (mean)


Is this a rigorous benchmark? Not really, just indicative. We all know Nginx is very accomplished at serving static files, and this was a very simple "Hello" index page, 15 bytes long.

However, this would certainly make me think hard about deploying redis alongside Nginx for any low-write/high-read web applications or sites.
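One caveat: the redis module only ever reads, so your application (or a deploy script) has to put the rendered pages into redis itself, for example with redis-cli set / "...". Under the hood that is just a RESP command on the wire; here is a minimal Python sketch of the encoding (the encode_resp helper is my own illustration, not part of any client library):

```python
def encode_resp(*parts):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(parts)]          # array header: element count
    for p in parts:
        b = p.encode() if isinstance(p, str) else p
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))  # $<len>\r\n<data>\r\n
    return b"".join(out)

# Storing the rendered page under the request URI as the key:
cmd = encode_resp("SET", "/", "Hello")
# -> b'*3\r\n$3\r\nSET\r\n$1\r\n/\r\n$5\r\nHello\r\n'
```

In practice you would of course use a real client library rather than raw sockets, but it shows how little is going on between Nginx and redis on each request.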

The results aren't particularly astounding, but the simplicity of integrating redis with Nginx, and how transparent it is to clients, should make any Nginx user think about going down this route.