SPDY Brings Responsive and Scalable Transport to Firefox 11

Firefox 11 contains the first Firefox implementation of the SPDY protocol. SPDY is a secure web transport protocol that encapsulates HTTP/1 while replacing its aging connection management strategies. This results in more responsive page loads today and enables better scalability with the real-time web of tomorrow.

The most important goal of SPDY is to transport web content using fewer TCP connections. It does this by multiplexing large numbers of transactions onto one TLS connection. This has much better latency properties than native HTTP/1. When using SPDY a web request practically never has to wait in the browser due to connection limits being exhausted (e.g. the limit of 6 parallel HTTP/1 connections to the same host name). The request is simply multiplexed onto an existing connection.
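As a rough sketch of the idea (hypothetical framing, not SPDY's actual wire format), tagging each frame with a stream ID is what lets many transactions interleave on a single connection and be reassembled independently at the other end:

```python
# Toy sketch of SPDY-style multiplexing; this is NOT the real SPDY
# wire format. The idea it illustrates: every frame is tagged with a
# stream ID, so many transactions share one connection and never queue
# waiting for one of HTTP/1's six per-host connections to free up.

from collections import defaultdict

def multiplex(streams):
    """Interleave one frame per stream per round onto a single 'wire'."""
    wire = []
    while any(streams.values()):
        for stream_id, frames in streams.items():
            if frames:
                wire.append((stream_id, frames.pop(0)))
    return wire

def demultiplex(wire):
    """Receiver reassembles each logical stream from its tagged frames."""
    out = defaultdict(list)
    for stream_id, frame in wire:
        out[stream_id].append(frame)
    return dict(out)

# Three hypothetical requests sharing one connection:
streams = {1: ["GET /index.html", "<html>..."],
           3: ["GET /icon.png"],
           5: ["GET /app.js"]}
wire = multiplex({k: list(v) for k, v in streams.items()})
assert wire[0] == (1, "GET /index.html")   # all three requests start
assert wire[1] == (3, "GET /icon.png")     # in the very first round
assert demultiplex(wire) == streams        # nothing is lost or reordered
```

With HTTP/1, the third request here could have been forced to wait for a free connection; in the multiplexed sketch, every stream makes progress in the first round trip.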

Many web pages are full of small icons and script references. The speed of those transfers is limited by network delay rather than bandwidth. SPDY ramps up the parallelism, which in turn removes the serialized delays experienced by HTTP/1, and the end result is faster page loads. By using fewer connections, SPDY also saves the time and CPU needed to establish those connections.

The page-load waterfall diagram below tells the story well. Note the large number of object requests that all hit the network at the same time. Their individual load times are composed exclusively of network delay, and by executing them in parallel the total page load time is reduced to a single round trip.

Generally speaking, web pages on high latency connections with high numbers of embedded objects will see the biggest benefit from SPDY. That’s great because it’s where the web should be going. High latency mobile is a bigger part of the Internet every day, and as the Internet spreads to parts of the world where it isn’t yet common you can count on the fact that the growth will be mobile driven. Designs with large numbers of objects are also proving to be a very popular paradigm. Facebook, G+, Twitter and any avatar driven forum are clear examples of this. Rather than relying on optimization hacks such as sprites and data URLs that are hard to develop and harder to maintain, we can let the transport protocol do its job better.

Beyond better page load time, there is good reason to think this approach is good for the web’s foundation. The way HTTP/1 uses large numbers of small and parallel active connections creates a giant network congestion problem. This inhibits the deployment of real-time applications like WebRTC, VOIP, and some highly interactive games. SPDY’s small number of busier connections fits the congestion control model of the Internet much better and enables the transport of classic web content to cooperate better with these real-time applications. Web browsers have only managed to keep the congestion problem in check with HTTP/1 through arbitrary limits on its parallelism. With SPDY we can have our parallel-cake and eat it in low latency conditions too. This property is what I find most promising about SPDY, and I’ve written about it extensively in the past.

There is a great transition path onto SPDY. It is a new protocol, but it uses the old https:// protocol scheme in URIs. No changes to markup are needed to use SPDY. Generally, SPDY servers support both SPDY and HTTP/1 for use with browsers that are not SPDY capable. The protocol used is silently negotiated through a TLS extension called Next Protocol Negotiation. The great news here is that upgrading to SPDY is just a matter of an administrative server upgrade. No changes to content are needed, and things like REST APIs continue to work unmodified. Indeed, a SPDY site is not visually different in any way from an HTTP/1 site.
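A toy sketch of the negotiation decision (in NPN the server advertises the protocols it speaks inside the TLS handshake and the client chooses; the function below only illustrates that selection logic, not the TLS extension itself):

```python
# Illustration of the NPN selection step, not the TLS wire protocol.
# The server advertises its protocols during the handshake; the client
# picks its most preferred match, falling back to HTTP/1.1 so servers
# without SPDY keep working unmodified.

def choose_protocol(client_prefs, server_advertised):
    """Return the client's most preferred protocol the server also speaks."""
    for proto in client_prefs:
        if proto in server_advertised:
            return proto
    return "http/1.1"  # safe fallback: every web server speaks HTTP/1

# A SPDY-capable browser against a SPDY-capable server picks SPDY:
assert choose_protocol(["spdy/2", "http/1.1"], {"spdy/2", "http/1.1"}) == "spdy/2"
# The same browser against a legacy server silently falls back:
assert choose_protocol(["spdy/2", "http/1.1"], {"http/1.1"}) == "http/1.1"
```

This fallback path is why no content changes are needed: a non-SPDY browser simply never negotiates SPDY and keeps speaking HTTP/1 over the same https:// URL.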

Google did a lot of work to launch this technology and to evolve it in the open, but it isn’t a Google only project any more. Since the implementations in Chrome and various Google web services were introduced we have seen either code or commitments regarding SPDY from many other products and groups including Amazon’s tablet, node.js, an Apache module, curl, nginx, and even a couple of CDNs along with Mozilla. In my opinion, that kind of reaction is because engineers have looked at this and decided that it solves several serious problems with HTTP’s connection handling and that this is a technology well positioned for us all to cooperate on. There is also discussion and preliminary movement in all the right standardization forums such as the W3C TAG and the IETF. Open standardization of the protocol is a key condition of Mozilla’s interest in it, but it is not a precondition to using it. Gathering operational experience, instead of just engineering on whiteboards, is a valuable part of how the best protocols are made. The details of SPDY can be iterated based on that experience and the standardization process. The protocol is well suited to that evolution at this stage.

SPDY needs to be explicitly enabled through about:config in Firefox 11. Go to that page, search for network.http.spdy.enabled, and set it to true. We hope to have it enabled by default in a future release.
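For anyone managing profiles in bulk, the same pref can also be set from a user.js file in the profile directory (standard Firefox pref machinery; the pref name is the one given above):

```js
// user.js in the Firefox profile directory; applied at every startup.
user_pref("network.http.spdy.enabled", true);
```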

About Patrick McManus

Principal Engineer at Mozilla focused on Platform Networking

More articles by Patrick McManus…


  1. Pikadude No. 1

    I’m slightly disappointed that this is apparently only available to authors who can afford/use an SSL certificate.

    February 3rd, 2012 at 19:13

    1. nototoad

      Anybody can afford an SSL cert; they’re free. The web shouldn’t be held back because some people don’t want to source a cheap or free one.

      February 3rd, 2012 at 22:42

  2. Dan

    The SSL and NPN requirements will almost certainly prevent widespread adoption.

    February 3rd, 2012 at 19:48

    1. Patrick McManus

      NPN won’t be a problem. It is part of the dev stream of both openssl and nss already. mod_spdy for apache builds and runs very well with openssl right from the tree. Same for node-spdy.

      The biggest risk might be hardware firmware that requires upgrades. But that hardware is undergoing widespread upgrades for BEAST-related problems anyhow, hopefully taking false-start, NPN, SNI, etc. updates with it.

      February 4th, 2012 at 08:30

  3. Adam

    Pikadude – you know you can get free SSL Certs right? – http://www.startssl.com/

    February 3rd, 2012 at 21:48

    1. Techy Mike

      SSL isn’t required (but it is a recommended option)

      I use StartSSL myself for non-production use, but for people wanting alternatives…

      Self-signed certs… – never trusted in users browsers unless they manually add the cert

      http://www.cacert.org/ — not trusted in all browsers

      http://www.comodo.com/e-commerce/ssl-certificates/free-ssl-cert.php — free 90 day certs
      http://www.verisign.co.uk/ssl/free-30day-trial/index.html — 30 days free
      http://www.freessl.com/ — 30 days free

      It should be fairly obvious that free SSL isn’t impossible… and seriously, if your site does require SSL (and for SPDY that isn’t 100% accurate) then there are plenty of cheap options out there.

      February 4th, 2012 at 02:00

      1. Patrick McManus

        Firefox SPDY requires SSL 100% of the time.

        The best thing I’ve ever heard said about this is that SSL can be a logistical burden for server operators[*], but I’ve never met a browser user that wanted to be running an insecure protocol.

        [*] we need to improve the CA/PKI situation. I doubt you’ll find disagreement on that either.

        February 4th, 2012 at 08:27

  4. driax


    Just use startssl.com. They provide free ssl certificates. Though you have to pay if you need subdomain or star-domain (*.example.com).

    February 3rd, 2012 at 22:52

  5. Mook

    Has the code been fully reviewed yet? The bug for the SPDY changes originally contained reviews that only covered the bits where SPDY was left off, not the SPDY-specific code paths. Given that it’s been a while since those patches, and indeed even some time since the initial landing, it might have all been done somewhere else and it’s all good, of course; it’s just that the bug was a little complicated and it’s unclear whether all the code has been properly vetted.

    February 4th, 2012 at 00:07

    1. Patrick McManus

      Yes the code reviews went in as part of FF12 with any critical bits ported to FF11. Expect at least a trial run of default-on as part of FF 13 nightly.

      February 4th, 2012 at 08:24

  6. cuz84d

    Hey Patrick, there is a bug in the regular network code, I believe, that kills the browser and maybe puts it in offline mode or something weird.

    Occasionally we see the browser go from working fine one minute to just locking up and not responding to network requests at work (Firefox 4-9), and we end up having to clear the cache and history and restart the browser before it works again. Any idea what could cause such a thing? It’s not reproducible at all, though. I wonder if the browser ended up in offline mode by itself, or the network messed with the disk cache.

    Anyway, I sure hope SPDY will take care of this. I did turn it on, and I can say it’s pretty damn fast at pulling down pages.

    February 4th, 2012 at 01:11

  7. Jo Hermans

    Is there an add-on that does something similar to Chrome’s chrome://net-internals/#spdy? That would make it easier to debug SPDY network connections.

    February 4th, 2012 at 14:01

    1. Jo Hermans

      Sorry, I mentioned chrome://net-internals/#spdy in the above post, but it got removed because I used angular quotes.

      February 5th, 2012 at 10:54

  8. Dmitry Pashkevich

    I bet implementing SPDY in the coming FF version was one of the Google-Mozilla renewed contract clauses :)

    February 5th, 2012 at 05:09

    1. louisremi

      I bet you’re wrong: SPDY is good for the Web and its users, why would Mozilla need to be forced to adopt it?

      February 5th, 2012 at 10:18

    2. RyanVM

      Yeah, except for that whole part about starting on it in August of last year…

      February 5th, 2012 at 17:32

  9. Matt Wilcox

    This is awesome news :D

    Has Mozilla considered that SPDY now makes it much more realistic to send useful headers to the server to indicate device capabilities? Due to the compression and multiplexing there is far less overhead in doing this than with HTTP, and it would be extremely useful:

    I agree that headers are still ‘expensive’. But are they expensive compared to a few hundred kilobytes of saved bandwidth because we were able to successfully negotiate content?

    At the moment we can’t do *reliable* server adaptation without *reliable* client feature-set reporting, which we can’t get any way we try right now, though many approaches have been tried: JS, cookies, and UA sniffing. None are bullet-proof, and all are merely ways of attempting to *guess* what a browser header could explicitly tell us. Which is why headers are wanted.

    To shave off any ‘wastage’ I would love to see browsers behaving something like this: behave exactly as now, but listen out for a server response header that in turn requests the browser to append certain headers with all requests to the domain. I.e.,

    1) Client asks for spdy://website.com
    2) Server responds with content and adds a “request [bandwidth] & [device screen size] headers”
    3) Client then appends these headers to all future requests on the domain.
    4) Server can push any amended content from 2) over SPDY without another request (because SPDY can).

    This way there are no additional overheads in general browsing unless the server has requested them specifically. And with SPDY they’re all compressed anyway.

    At last, reliable feature detection the server can get hold of?

    February 6th, 2012 at 09:38

  10. Anunturi

    This is great news. I’m tired of using sprites and spreading images across multiple hosts just for a small speed improvement.
    Too bad this relies on an SSL cert. Anyway, this should be default in the future, and companies should adopt StartSSL’s initiative to provide free limited SSL certificates.
    I’m guessing that there will be a bigger gap between the sites that can afford their own private IP address and those hosted together with hundreds of other sites.

    Still, the future looks good and the transition to IPv6 will be accelerated due to this new protocol.

    February 7th, 2012 at 23:38

  11. Major

    IMO SPDY is a dangerous hype with fewer advantages over HTTP/1 than published.

    I think “Server push” may be a nice advertising feature for SPDY-inventor Google, but a really dangerous hole, because theoretically a bad server can push any bad content or unsolicited ads to the browser without using client-side scripts.

    As a second drawback, there is no CPU saving, because servers and clients need to decrypt AND decompress, and client-side CPU usage may be a real problem on mobile devices.

    The third problem is SSL certificates. Small hosters won’t, and in some cases even can’t, install SSL certificates, mostly because hosters are afraid of the additional CPU usage for SSL encryption in shared environments.

    My advice is to use SPDY:// instead of http://. The user can choose his benefits by choosing the protocol.

    February 9th, 2012 at 02:57

    1. Patrick McManus

      @major – thanks for the comments.

      wrt server push – FF does not yet accept server push – we are waiting for the flow control mechanisms of spdy/3 (or better) to be defined before using it. Partially for the concerns you describe.

      re CPU – crypto is absolutely required. We won’t make that mistake again. And SPDY is much more CPU friendly than HTTP/1 over SSL because it terminates so many fewer connections (and the RSA operation of the connection termination is the major cost of SSL – not the bulk cipher on the stream).

      As for compression, it’s used in a very targeted way with very small windows (which matters for both RAM and CPU); a lot of thought has gone into this. Your whole stream is not just passed through gzip. The value is extraordinary.

      wrt small hosters: we need to do a better job of running the PKI. But the emphasis should be on making sure users are running secure protocols as the first order of business. users first.

      February 9th, 2012 at 06:40

  12. Pikadude No. 1

    @Everyone replying to me: You’re only solving half the problem; free SSL certs do you no good if your host doesn’t allow them yet. Although I suppose there’s the silver lining that it’s only a matter of time.

    February 9th, 2012 at 17:05

  13. Gautam Dewan

    Patrick: I have a question similar to Jo Hermans above.

    I was trying out Firefox 11 on my web server that implements spdy/2. Chrome and spdycat clients work great. Firefox 11 does not. Is there something in Firefox 11 or an add-on that can show active SPDY sessions, and the flow of frames to and from the web server ?

    To me it looks like Firefox is not able to interpret the SYN_RESPONSE frame coming back from the web server. The web developer console in Firefox lists the URL with a status of unknown.

    February 13th, 2012 at 21:45

  14. Patrick McManus

    There is some info in https://groups.google.com/forum/#!topic/mozilla.dev.platform/5dtG0hKRg5U that may help you – a response header and a lot of HTTP Logging information.

    I’d be happy to work on your server interop with you through email mcmanus at ducksong dot com

    HTTP logs would be the first piece of information to gather.

    February 14th, 2012 at 07:05

  15. Gautam Dewan

    Thank you Patrick for working with me to resolve all my issues.
    SPDY on Firefox 11 works great !
    I will be doing some more testing in the coming weeks.


    February 16th, 2012 at 20:50

  16. GrammarNazi

    I couldn’t help noticing this: “SPDY’s small number of busier connections fit the congestion control model of the Internet much better…” – The verb, “fit”, isn’t matching the subject, “number”. It should be “SPDY’s small number of busier connections fits the congestion control model…” (where “fit” is conjugated into 3rd person singular as “fits” to match “number”).

    Other than that nit-picky thing, really great article – I barely understood what SPDY was before reading it.

    February 28th, 2012 at 21:09

    1. Robert Nyman [Mozilla]

      Changed. Thanks for the input!

      February 29th, 2012 at 00:17

  17. Christian Eaton

    +1 for adding device capability headers.

    I can imagine a use case where a small device receives a lower res image from the server on the initial page load due to having a smaller screen and/or reduced bandwidth capabilities, and then requesting a larger version of the same image as the user zooms in on it/clicks on it (depending on the browser) – potentially downloading only the “extra” image data in the case of a progressive image format.

    As the line between “mobile devices” and tablets/netbooks blurs further over the coming years, we can’t make assumptions about a user’s requirements based solely on screen dimensions a la CSS media queries: I could be using my full-power laptop over a per-MB GPRS connection, or working on my phone plugged into an HDMI screen over a fast WiFi connection. We need to allow the user (via the browser) to configure how they want to receive content (with some sensible browser-based guesses that a user can override).

    March 5th, 2012 at 06:48

  18. fracjackmac

    Excellent blog post and follow-on commentary.



    March 11th, 2012 at 10:56

  19. Bill Fu

    “The most important goal of SPDY is to transport web content using fewer TCP connections. It does this by multiplexing large numbers of transactions onto one TLS connection. This has much better latency properties than native HTTP/1. ”
    From my knowledge, HTTP1.1 has defined persistent connections (see RFC 2616) so that multiple requests/responses can be pipelined on one bearer connection (often a TCP connection). I think this is just an advantage versus HTTP1.0?

    June 6th, 2012 at 00:56

    1. Patrick McManus

      Pipelines are better than nothing, but they are not full multiplexing like SPDY. They suffer severe head-of-line blocking problems, have no prioritization, have awful error handling, and can’t effectively be deployed to many environments because of broken infrastructure (including security software). SPDY doesn’t suffer any of those problems.

      With a whole bunch of mitigations, pipelines can provide a boost to legacy servers, but SPDY is a much better way forward, and because it is content agnostic it can be a drop-in replacement.

      June 6th, 2012 at 05:57

  20. Bill Fu

    Agree. After reading the IETF SPDY draft (http://tools.ietf.org/html/draft-mbelshe-httpbis-spdy-00#section-2.2), I have a much clearer understanding of the differences against HTTP/1.1. Very interesting; it looks very much like another application-layer TCP (and over TCP)!
    From the draft, the prioritization mechanism applies to streams. So I’m wondering, when a browser opens a webpage which may involve tens of HTTP GET requests (for main.html, icons, pictures, etc.), how many streams will be used? I guess it should be less than 8. There’s not much hint on how to utilize this feature in practice. Another concern: could there be a risk of stream ID exhaustion if the ID always increases monotonically?

    June 27th, 2012 at 19:38

  21. Patrick McManus

    Hi Bill, optimally a whole page and all of its subresources are moved over the same SPDY connection. So that’s 1 TCP connection with dozens of parallel streams. The streams are prioritized so the html/css/js gets bandwidth priority over images, but all the requests are sent in parallel to avoid any RTT hits.

    As for stream ID exhaustion, there are effectively 30 bits of stream ID (about 1 billion), so exhaustion isn’t much of a concern; when it happens you just make a new connection. I’ve never seen it happen :)

    June 27th, 2012 at 20:20

  22. Bill Fu

    Patrick, yes, making a new connection is definitely a good idea :)
    For human users with browser I think it’s hard to exhaust the ID, but if it’s used between servers (e.g. browsing gateway and web servers), I’m afraid this could happen.

    June 27th, 2012 at 22:02

  23. Sriram

    This creates a huge challenge for intermediate transparent caches that are mostly deployed by mobile operators and ISPs. These transparent caches serve an important role in speeding up page downloads and saving Internet bandwidth for operators.

    I am worried that SPDY is short-sighted on issues like these. The moment the web is made secure even for things that don’t need to be, we make these transparent caches useless, and this results in a big loss for both operators and users.

    For Google, it may not be important as they have a different strategy when it come to these local caches or CDNs.

    July 18th, 2012 at 08:05

  24. Kevin L.

    For those who said that they are concerned about the SSL requirement because some hosts (or their host) don’t support them: If a host doesn’t allow you to install or have SSL in this day and age, you need a new host period.

    August 4th, 2012 at 18:26

  25. John Hosfield

    Why did my image get reset automatically, after I had already selected an image for my Firefox homepage?

    September 7th, 2012 at 16:23

  26. John Hosfield

    Why did my homepage image automatically reset, after I set it to my desired homepage image?

    September 7th, 2012 at 16:25

Comments are closed for this article.