Wednesday, March 31, 2010

CouchDB Relaximation

‹prev | My Chain | next›

Getting back to node.js, I think I will explore some more of the node.js things linked to CouchDB. Several folks were kind enough to provide links in response to a post from last week. One of the first on the list is relaximation.

The relaximation script establishes a number of node.js clients to perform concurrent reads and writes against a CouchDB server. It then performs a statistical analysis of the results, producing a nice graph. By default, it creates 50 concurrent write clients and 200 concurrent read clients. The other defaults can be seen in the help output:
cstrom@whitefall:~/repos/relaximation/tests$ ~/local/bin/node compare_write_and_read.js --help
-w, --wclients :: Number of concurrent write clients per process. Default is 50.
-r, --rclients :: Number of concurrent read clients per process. Default is 200.
-u, --url1 :: CouchDB url to run tests against. Default is http://localhost:5984
-v, --url2 :: CouchDB url to run tests against. Default is http://localhost:5985
-1, --name1 :: Name of first comparative. Required.
-2, --name2 :: Name of second comparative. Required.
-d, --doc :: small or large doc. Default is small.
-t, --duration :: Duration of the run in seconds. Default is 60.
-i, --poll :: Polling interval in seconds. Default is 1.
-p, --graph :: CouchDB to persist results in. Default is
-r, --recurrence :: How many times to run the tests. Default is 10.
I am still running on my netbook here, which only has a single CouchDB server. I do have the VMs lying about, so let's see how a bare metal CouchDB server compares to a VM CouchDB server:
cstrom@whitefall:~/repos/relaximation/tests$ ~/local/bin/node compare_write_and_read.js --name1 netbook --name2 vm-on-netbook --url2 http://couch-011a.local:5984
... Lots and lots and lots of output
That is one pretty graph. Kudos on using <canvas> for the charting.

Looking at the graph, it is interesting to see that reads and writes both peak at about 30 seconds, then decrease slightly until roughly 40 seconds of heavy pounding, at which point things stay nice and steady. I would have expected things to be optimized relatively quickly and then stabilize. Perhaps the slight downward trend is the result of reaching some threshold for the number of documents in the database. Grist for IRC conversations tomorrow.

Another interesting thing to note is that the VM outperforms the localhost CouchDB server. It took me a while to remember that the VM and localhost CouchDB servers are at different versions (0.11-pre vs. 0.10). It seems clear that there were some optimizations added between 0.10 and 0.11. All the more reason to upgrade.

The last thing to note is that the VM/0.11 graphs are nice and smooth while the localhost/0.10 graphs are jagged (even after 10 runs). I am not sure why, but the last few runs against this DB produced errors similar to:
[Thu, 01 Apr 2010 01:55:23 GMT] [debug] [<0.25812.14>] httpd 500 error response:

[Thu, 01 Apr 2010 01:55:23 GMT] [info] [<0.25873.14>] Stacktrace: [{mochiweb_request,send,2},
Too many of those could have skewed the statistics.

Looking through the code, there are oodles of things to learn. It still astounds me how much one can do with JavaScript. This guy even implements his own OptionParser in JavaScript. Crazy stuff.
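For flavor, here is a hypothetical miniature of that kind of option parser (the names and structure here are invented for illustration; the real one is in relaximation's source):

```javascript
// Hypothetical miniature option parser -- not relaximation's actual OptionParser.
function parseOptions(spec, argv) {
  // spec maps both short and long flags to a canonical name,
  // e.g. { '-w': 'wclients', '--wclients': 'wclients' }
  const opts = {};
  for (let i = 0; i < argv.length; i++) {
    const name = spec[argv[i]];
    if (name === undefined) continue;  // skip tokens that are not known flags
    opts[name] = argv[i + 1];          // a flag's value is the next token
    i += 1;
  }
  return opts;
}

const spec = { '-w': 'wclients', '--wclients': 'wclients',
               '-r': 'rclients', '--rclients': 'rclients' };
console.log(parseOptions(spec, ['-w', '50', '--rclients', '200']));
// → { wclients: '50', rclients: '200' }
```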

That was fun and even more educational than I expected. The concurrent clients seem like a nice use of node.js that I would not otherwise have thought of. Tomorrow, I think I would like to explore some of the node.js libraries that interact with the _changes API in CouchDB.

Day #59


  1. I implemented the optionparser because, at the time, there wasn't one for node. As far as I know, there still isn't a good one.

    The graph is a couchapp, also in relaximation, called "graphs"; it uses the flot jQuery plugin to do the graphing.

    We're using all of this to compare different performance tweaks that we try in git branches, along with a few other kinds of performance comparisons. It's cool to see someone else get some use out of it; I'll be sure to improve the docs and get some more automated stuff up later :)

    One thing you always want to do is run CouchDB with delayed_commits off. This means that responses won't return until they are written to disc; with it on, responses return immediately and writes are batched every second. With delayed_commits off, the writes are still batched efficiently under concurrent load, but single-writer performance looks terrible, which is why the default is on.
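    For reference, that setting lives in the [couchdb] section of CouchDB's ini configuration; a minimal local.ini fragment (assuming the 0.10/0.11-era config layout) would be:

    ```ini
    [couchdb]
    ; wait for the write to reach disc before responding,
    ; rather than batching commits once per second
    delayed_commits = false
    ```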

  2. @mikeal Thanks for all of the pointers!

    "As far as I know, there still isn't a good one." Classic. Still good fun to read through.

    I reran the script with delayed_commits off and found results more in line with what I expected:

    No idea why the v0.10 was so fast at writing, but there were lots of errors in there.

    Thanks again for the node.js help and pointers -- they've helped a lot!