Yesterday, I ran a moderately simple SPDY web site over a simulated 100ms round-trip connection.
Today, I would like to see if I can cut down on that time some by using SPDY server push. By pushing resources directly into the browser cache, I should be able to overcome at least some of the RTT that is in place.
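The basic idea looks something like this (a minimal sketch against a later node-spdy-style res.push() API; the key and file paths here are placeholders, not the actual helper code from this series):

var spdy = require('spdy'),
    fs = require('fs');

var options = {
  key: fs.readFileSync('keys/spdy-key.pem'),    // placeholder paths
  cert: fs.readFileSync('keys/spdy-cert.pem')
};

spdy.createServer(options, function(req, res) {
  // Push the stylesheet into the browser cache before the HTML asks for it.
  var stream = res.push('/stylesheets/style.css', {
    response: { 'content-type': 'text/css' }
  });
  stream.on('error', function(err) { console.error(err); });
  stream.end(fs.readFileSync('public/stylesheets/style.css'));

  // Then send the page itself.
  res.writeHead(200, { 'content-type': 'text/html' });
  res.end(fs.readFileSync('public/real.html'));  // placeholder page
});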
But when I first load up the page, pushing all static resources into cache, I find:
Bah! 2+ seconds?! That's terrible. Far worse than the simulated CDN over vanilla HTTP. What gives?
To figure out the delay, I check things out in the SPDY tab of Chrome's about:net-internals:
t=1311473528051 [st= 0] SPDY_SESSION_SYN_STREAM
--> flags = 1
--> accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
accept-encoding: gzip,deflate,sdch
accept-language: en-US,en;q=0.8
host: spdy.local:3000
method: GET
referer: https://spdy.local:3000/
scheme: https
url: /real
user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.825.0 Safari/535.1
version: HTTP/1.1
--> id = 1
Off to a reasonable start. Next, I reach the point at which the server push should go out:
t=1311473528260 [st= 209] SPDY_SESSION_PUSHED_SYN_STREAM
--> associated_stream = 1
--> flags = 2
--> status: 200
url: https://jaynestown.local:3000/stylesheets/style.css
version: http/1.1
--> id = 2
OK. That looks good. Unfortunately, Chrome does not agree. It sends back a reset stream on my server push:
t=1311473528260 [st= 209] SPDY_SESSION_SEND_RST_STREAM
--> status = 3
--> stream_id = 2
Hunh? A status of 3 on a SPDY reset stream is REFUSED_STREAM: Chrome is refusing the push outright.
Oh! This is no longer my local machine (jaynestown). This is a new spdy.local VM on which I am testing RTT, so I need to update the hostname accordingly:
function local_path_and_url(relative_path) {
return [
"public/" + relative_path,
"https://spdy.local:3000/" + relative_path
];
}
Now, when I run it, I get... only somewhat better results:
Hrm... That initial connection is a lot longer than last night (a SPDY/SSL connection took only ~200ms to establish last night). Tonight, the browser is spending a lot of time on SSL negotiation:
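To get a second opinion on that handshake cost outside the browser, a quick script can time the TCP connect and the SSL negotiation separately. This is a rough sketch using a current Node tls API (rejectUnauthorized is disabled only because of the self-signed certificate):

var net = require('net'),
    tls = require('tls');

var start = Date.now();
var socket = net.connect(3000, 'spdy.local', function() {
  var tcpDone = Date.now();
  console.log('TCP connect:   ' + (tcpDone - start) + 'ms');

  // Run the SSL handshake over the already-established socket.
  var secure = tls.connect({socket: socket, rejectUnauthorized: false}, function() {
    console.log('SSL handshake: ' + (Date.now() - tcpDone) + 'ms');
    secure.end();
  });
});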
Bah. This is one of the hazards of small bites of research. I am doing something very different tonight than I did last night. I am going to need to go back and redo last night's work to make sure I did not skip a step when simulating SPDY over a high-latency connection.
As for tonight's results, the remainder of the resources load in less than 600ms. I am still looking at 1.3 seconds total, a far cry from the 700ms download time of the simulated CDN from the other night. I can certainly eke out a bit more performance by gzip'ing the jQuery library, but still, if the initial SSL negotiation alone takes almost as much time as the entire HTTP + CDN site, SPDY isn't much of a match for it.
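The gzip bit would be a one-off pre-compression of the static file, something like this sketch with Node's zlib module (the path is a placeholder):

var fs = require('fs'),
    zlib = require('zlib');

// Pre-gzip jQuery so the server can reply with Content-Encoding: gzip.
fs.createReadStream('public/javascripts/jquery.js')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('public/javascripts/jquery.js.gz'));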
I should note that SPDY is not supposed to spell the end of the CDN, but it would be pretty cool to get response times down to near-CDN levels.
Ugh. Mostly I miss the Speed Tracer extension for Chrome. It seemed to have a better SPDY sense (I've been waiting a long time to use that one) than does the network internals page. I'd really like to get a definitive handle on SPDY and RTT, but it may have to wait until I regain the use of Speed Tracer (it locks up with Chrome 14.0.825.0 dev). Dang it.
Day #82
This is going to be very tricky to figure out because you need to carefully understand the layers below you.
First, the CDN you're using is probably cranking up init cwnd. Sure, the spec says you get 4K, but no major player (certainly not performance-obsessed CDNs) leaves it at the default. So comparing your standalone server, which is probably stock Linux or something, to a CDN already means different TCP stacks, and that makes SPDY operate differently already. I guess you simulated your CDN with your own server, so maybe this wasn't a factor.
Second, the SSL layer is tricky to test. Remember there are two types of SSL handshakes - the full handshake (which negotiates a session id) and the partial handshake (which uses a previously negotiated session id). When doing a full handshake, the client *might* end up doing a validation of the certificate (e.g. OCSP kicks in). But the result of that is usually quite cacheable, so depending on how you clear your cache, again you'll see different results from one day to the next. Finally, using different SSL servers will also impact your results. They can negotiate different bulk ciphers, they can send large/small SSL records which are suboptimal, etc.
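To see those two handshake types in isolation, something like this sketch (newer Node tls API; getSession() and the session option are assumptions about your setup) times a full handshake and then a resumed one:

var tls = require('tls');

var opts = { port: 3000, host: 'spdy.local', rejectUnauthorized: false };
var t0 = Date.now();
var first = tls.connect(opts, function() {
  console.log('full handshake:    ' + (Date.now() - t0) + 'ms');
  var session = first.getSession();  // cache the negotiated session id
  first.end();

  var t1 = Date.now();
  var second = tls.connect({ port: 3000, host: 'spdy.local',
                             rejectUnauthorized: false, session: session },
                           function() {
    console.log('resumed handshake: ' + (Date.now() - t1) + 'ms');
    second.end();
  });
});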
In other words:
"No two implementations of TCP are alike"
and
"No two implementations of SSL are alike"
and
"SSL maintains state which you need to fully account for in benchmarking"