Show me the numbers!
Whenever I come up with some cool little algorithm that will certainly speed things up, a little voice in the back of my head starts screaming this at me. Actually, the voice is that of my friend and mentor, Mike Barry, who demanded this of me many a time when I was sure that I had devised a terribly clever optimization. Eventually I heard it so often that the voice is permanently embedded in my brain.
And I am hearing it right now.
Last night, I was able to get my Sinatra / CouchDB app running under nginx and passenger. I was really pleased with myself and ready to move on, until I heard that voice...
Prior to switching to nginx / passenger, I had the same application running in a cluster of 4 thin servers behind HAProxy. Is nginx / passenger really better than HAProxy / thin?
Show me the numbers!
OK! OK! Stupid little voice, I could be adding new features now!
The configuration for HAProxy / thin is fairly vanilla: I am running 4 thin servers behind HAProxy, as described the other night.
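For reference, the shape of that setup is roughly the following sketch; the ports, server names, and timeouts here are placeholders, not the exact values from that post:

```
# haproxy.cfg (sketch): round-robin across 4 thin instances on local ports.
# All names, ports, and timeouts are illustrative placeholders.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend thin_cluster

backend thin_cluster
    balance roundrobin
    server thin0 127.0.0.1:5000 check
    server thin1 127.0.0.1:5001 check
    server thin2 127.0.0.1:5002 check
    server thin3 127.0.0.1:5003 check
```

The thin side is just a cluster of 4 servers started with something like `thin start --servers 4 --port 5000 --rackup config.ru`.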
Similarly, the configuration for nginx / passenger is very close to the default produced by the passenger / nginx install, as described last night. There are two `worker_processes` (I also tried 4, with similar results).
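Boiled down, the generated config looks something like this sketch; the passenger paths and the application root are placeholders rather than the exact values from last night's post:

```nginx
# nginx.conf (sketch): the layout passenger-install-nginx-module produces,
# with placeholder paths. Passenger serves the Sinatra app directly.
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    passenger_root  /usr/lib/ruby/gems/1.8/gems/passenger-2.2.5;  # placeholder path
    passenger_ruby  /usr/bin/ruby;                                # placeholder path

    server {
        listen       80;
        server_name  beta.eeecooks.com;
        root         /var/www/eee/public;   # the Sinatra app's public directory (placeholder)
        passenger_enabled  on;
    }
}
```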
For both the nginx and HAProxy setups, I am going to request a recipe and an image from the underlying Sinatra app. For both the recipe and the image, I will run the benchmark with a single request at a time and with 4 concurrent requests. The Apache benchmark (ab) commands for these four scenarios are:
```
ab -n 100 http://beta.eeecooks.com/recipes/2008-08-04-popovers
ab -n 100 http://beta.eeecooks.com/images/2008-08-04-popovers/popover_0039.jpg
ab -n 100 -c 4 http://beta.eeecooks.com/recipes/2008-08-04-popovers
ab -n 100 -c 4 http://beta.eeecooks.com/images/2008-08-04-popovers/popover_0039.jpg
```

The results (requests per second) with both servers:
Config | recipe | image | recipe, 4 concurrent | image, 4 concurrent |
---|---|---|---|---|
nginx / passenger | 4.82 | 1.59 | 12.69 | 1.96 |
HAProxy / thin | 5.45 | 1.67 | 16.96 | 1.96 |
I have a couple of "takeaways" from this. First of all, I really need to institute some caching (a first stab at that is sketched below); 12 requests per second ain't gonna cut it. Also, I need to be careful when testing these things over my internet connection: the values for the 4-concurrent image download saturated my DSL connection (which is why they are identical).
Most importantly, the HAProxy / thin combination is up to 33% faster than nginx / passenger (16.96 vs. 12.69 requests per second on the concurrent recipe test). As cool as nginx / passenger sounds, I will have to stick with HAProxy / thin.
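Back to the caching takeaway: the cheapest first step on the Sinatra side is probably plain HTTP cache headers, so that a front-end cache or the browser can absorb repeat hits. A minimal sketch, with a hypothetical route and an arbitrary 5-minute lifetime:

```ruby
require 'sinatra'

# Sketch only: a hypothetical recipe route that sets a Cache-Control header so
# an upstream cache (or the browser) can serve repeat requests without hitting
# the app. The 5-minute lifetime is an arbitrary illustration.
get '/recipes/:id' do
  headers 'Cache-Control' => 'public, max-age=300'
  "recipe #{params[:id]}"   # stand-in for the real CouchDB-backed page
end
```

Whether the cache itself ends up in front of HAProxy, in a separate proxy, or just in the browser is a separate decision.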
Good thing I listened to that little voice.
Well, to be fair, at such extremely low loads, the difference cannot be caused by haproxy or nginx, both of which have been running below 0.1% of their respective capacities. So the difference comes from the other two :-)
However, over a small link, haproxy-1.4-dev may get an advantage because it is able to reduce the number of TCP packets per session, which in turn reduces overall latency.
Ah, good point re: haproxy vs. nginx -- I hadn't given that much thought, but it makes sense. I am using the stock haproxy from Debian testing (version 1.3.18-1), not 1.4-dev, so there was no added benefit from that.