jsDelivr – The advanced open source public CDN

This is a guest post by Dmitriy Akulov about his project jsDelivr. – Editor’s note.

As a developer you are probably aware of Google Hosted Libraries. Google offers an easy and fast way to include 12 of the most popular JavaScript libraries on your websites.

But what if you are a webmaster and you want to take advantage of a fast CDN with other, less popular projects too? Or what if you are a developer who wants to make your project easier for other users to access and use?

This is where jsDelivr comes into play. jsDelivr is a free and open source CDN created to help developers and webmasters. There are no popularity restrictions and all kinds of files are allowed, including JavaScript libraries, jQuery plugins, CSS frameworks, fonts and more.

Adding a library

To add a new library or update an existing one, all a developer has to do is clone our GitHub repository and apply the modifications they see fit. Once a moderator reviews the Pull Request and merges it, the files become instantly available from the official website.

If a moderator is online, approval should not take more than 20 minutes; otherwise it can take up to 10 hours until someone comes online. Once our auto-update utility launches, review times will drop further.

Reliability

But what actually makes it so advanced? The idea of jsDelivr was not to create yet another public CDN, but to offer a super-fast and reliable infrastructure that developers and website owners can trust and use. Any website, big or small, can use it without worry: there are no bandwidth limits, and our service is rock solid.

Slow responses, timeouts and downtime are not tolerated, so we designed a unique system to overcome these problems and offer a product that even enterprise CDNs would be jealous of. Uptime and performance are top priorities: we monitor everything at all times, and we are always looking into new technologies and providers that may further improve our CDN.

Infrastructure

[Figure: map of jsDelivr’s global network]

Unlike the competition, jsDelivr uses a unique multi-CDN infrastructure to offer the best possible uptime and performance. Its main backbone is built on top of the CDN networks provided by MaxCDN and CloudFlare.

We also use custom servers in locations where those CDNs have little or no presence. In total, this currently amounts to 42 POP locations worldwide. In the future we plan to add even more locations, to offer top performance even in less well-served countries.

Of course, lots of locations mean nothing if you can’t load-balance across them correctly. For the load-balancing system we use services provided by Cedexis. One of their main features is the real-time performance data they gather on all major CDN providers: 1.3 billion RUM (Real User Metrics) performance tests per day are processed and made available to all Cedexis users.

Measuring performance

To gather these RUM measurements, Cedexis has deployed a special piece of JavaScript on thousands of websites. Every visitor to one of these websites executes the code and tests different CDN providers in the background as they browse. The testing does not impact the browsing experience in any way and is completely transparent to the user. You can actually see how it works by visiting our website and opening the “Network” tab in your browser’s developer tools.

The beauty of these tests is that they are not synthetic. They reflect the real performance that real users get when they download a file from one of those CDNs.

The following information is then stored:

  • Performance metrics for each of our providers.
  • Availability metrics for each of our providers.
  • The browser’s User-Agent string.
  • The first three octets of the user’s IP address.

Now that we have all this information we can use it in our smart load balancing algorithm.

Every user gets a unique response based on their location and ISP. Each time a user requests a file from jsDelivr, our algorithm extracts the performance and availability data it has for the last few minutes and figures out the optimal provider for that particular user at that particular time. All of this happens in a few milliseconds.

First it makes sure that all available providers are online. For this it uses the RUM availability data plus a synthetic test that checks each provider for uptime every minute. Then it sorts the providers by performance for the user’s ISP and location.

Once it has found the fastest provider, it returns that provider’s hostname to the user. So, for example, two users in London on different ISPs could get two different responses, because their ISPs have different routing and performance to the various CDN providers. This smart system guarantees maximum uptime and fast loading times for all users. If a provider goes down, jsDelivr won’t experience any issues at all and will immediately start serving from a different provider.

The algorithm also responds immediately to performance degradation. For example, if a CDN provider gets DDoSed in Europe and its response times increase, jsDelivr will pick up the change and simply stop using that provider in Europe, while still considering it for users in the USA and other locations unaffected by the attack.
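As a rough illustration, the selection step might look something like the sketch below. All names and data shapes here are assumptions made for the sake of the example, not jsDelivr’s actual code.

// Minimal illustrative sketch -- not jsDelivr's actual implementation.
// Each provider object carries recent metrics for the user's ISP and region.
function pickProvider(providers) {
  // 1. Keep only providers that pass both the RUM availability data and
  //    the per-minute synthetic uptime check.
  const online = providers.filter((p) => p.rumAvailable && p.syntheticUp);

  // 2. Sort the survivors by measured latency for this ISP and location,
  //    fastest first.
  online.sort((a, b) => a.latencyMs - b.latencyMs);

  // 3. Return the hostname of the fastest provider, or null if all are down.
  return online.length > 0 ? online[0].hostname : null;
}

// Two users in London on different ISPs pass in different metrics and can
// therefore receive different hostnames.
pickProvider([
  { hostname: 'provider-a.example.net', rumAvailable: true, syntheticUp: true, latencyMs: 34 },
  { hostname: 'provider-b.example.net', rumAvailable: true, syntheticUp: true, latencyMs: 58 },
]); // -> 'provider-a.example.net'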

Don’t rely on a single CDN for uptime and speed. Everything can go down, but the chances of two CDNs and multiple servers going down at the same time are very slim. This is why jsDelivr is an optimal solution for every website out there, no matter how big it is.

I should also point out that MaxCDN, CloudFlare, Cedexis and the rest of these companies sponsor jsDelivr free of charge. It’s nice to see that there are companies out there willing to help open source projects and build a fast and free internet.

Advanced Features

jsDelivr also supports some interesting and very helpful features such as:

Version Aliasing

Instead of using a unique URL for each version, you can load a project with jsDelivr using version aliasing. Let’s take the project abaaso as an example. At this moment the latest version is 3.10.50, and you can load it by specifying the exact version in your URL as always. But since this project gets updated very often, you would soon end up using an old version. To overcome this problem you can now simply use the following URL:

//cdn.jsdelivr.net/abaaso/3.10/abaaso.min.js

By using 3.10 you tell jsDelivr to load the latest version it has in the 3.10 branch, which in this case is 3.10.50. This is the optimal solution for most authors, because they can load the latest minor version without worrying about major changes that could break their website.
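On a page, the alias is used like any other script URL (the protocol-relative form follows the page’s own scheme):

<!-- Always loads the newest 3.10.x release of abaaso hosted by jsDelivr -->
<script src="//cdn.jsdelivr.net/abaaso/3.10/abaaso.min.js"></script>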

It is of course possible to load the latest version in the v3 branch by using the following URL:

//cdn.jsdelivr.net/abaaso/3/abaaso.min.js

And if for any reason you need to always load the latest available version in any major branch you can use:

//cdn.jsdelivr.net/abaaso/latest/abaaso.min.js

By using latest you tell the server to load the absolute latest version it has. This is of course risky, and given enough time it may and will break your website, so use this feature with caution.

Load multiple files with a single HTTP request

jsDelivr is the first CDN to support this kind of functionality: you can load multiple files with a single HTTP request. It is similar to combining and minifying JS files on your own server, but cached by jsDelivr’s huge, smart network.

All you have to do is build your own URL with the projects and files you want to combine, and their versions if needed. For example, to load the latest versions of the projects abaaso, ace and alloyui you would use the following syntax:

//cdn.jsdelivr.net/g/abaaso,ace,alloyui

Keep in mind that loading the latest version is not recommended and, given enough time, will break your website. This is why you should specify exact versions or use version aliases:

//cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2

So jquery@2.1 will load 2.1.0, and angularjs@1.2 will load 1.2.14. Note that the above URL will load the main file of each project and nothing else.

If you want to load multiple files from a single project then you can do the following:

//cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2.14(angular.min.js+angular-resource.min.js+angular-animate.min.js+angular-cookies.min.js+angular-route.min.js+angular-sanitize.min.js)

If you want to load CSS, select .css files using the same format. If all files in the group URL have a .css extension, the server will automatically respond with a Content-Type: text/css HTTP header. In all other cases (for /g/ URLs), Content-Type: application/javascript is used.

Next you simply include the URL in your website and you are done. Less DNS resolution, fewer TCP connections and fewer HTTP requests = a faster website.
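For example, the aliased group URL from above drops into a page as a single script tag:

<!-- One HTTP request delivers both jQuery 2.1.x and AngularJS 1.2.x -->
<script src="//cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2"></script>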

You can even use this feature to offer your users a builder that lets them generate a URL with the modules they need and then load them all from a fast CDN.

A real API

jsDelivr has a fully featured API that developers can use in their websites, to create custom modules, and for anything else you might think of: https://github.com/jsdelivr/api

You can request exactly what you need using our API, without downloading a huge JSON file describing every package. It also supports cdnjs and Google, so developers have everything they need to build their applications.
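As a quick illustration, a lookup might resemble the sketch below. The exact endpoints and fields are documented in the repository above; the URL shape used here is an assumption, so check the docs before relying on it.

// Hypothetical query -- see https://github.com/jsdelivr/api for the real
// endpoints; this URL shape and the response fields are assumptions.
fetch('https://api.jsdelivr.com/v1/jsdelivr/libraries?name=jquery&fields=versions')
  .then((res) => res.json())
  .then((libs) => {
    // Expecting an array of matching projects; log the first one's versions.
    console.log(libs[0].versions);
  });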

Auto-Updates

jsDelivr’s libgrabber is a utility that will run on our servers and can auto-update all hosted projects that are configured for it. The best part is that authors don’t have to change anything in their repos; all changes are made on the jsDelivr side.

All you need to do is create an update.json file with some basic info inside your project’s directory in the jsDelivr repo. This file also supports multiple sources for new versions, such as npm, Bower and GitHub repos directly. It is still under development but is planned for release soon.
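The schema was still being finalized at the time (see the libgrabber issue linked in the comments below), but the proposed configuration looked roughly like this; treat every field name here as an assumption:

{
  "packageManager": "github",
  "name": "jquery/jquery",
  "files": {
    "include": ["dist/*.min.js", "dist/*.map"]
  }
}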

Try it out, help out!

jsDelivr is a very interesting project that I enjoy developing and making better, and it relies heavily on the help of the community. Consider using it on your websites and hosting your projects there.

And if you are interested in helping out, we can always use a hand; just join the conversation on GitHub.

Feel free to leave your comments and ask me any questions you might have.

Thank you

About Dmitriy Akulov

System administrator. In love with technology, high performance and a fast web. Sometimes pretending to be a dev. Working for MaxCDN.


About Robert Nyman [Editor emeritus]

Technical Evangelist & Editor of Mozilla Hacks. Gives talks & blogs about HTML5, JavaScript & the Open Web. Robert is a strong believer in HTML5 and the Open Web and has been working since 1999 with Front End development for the web - in Sweden and in New York City. He regularly also blogs at http://robertnyman.com and loves to travel and meet people.



37 comments

  1. Petter

    The Cedexis measurement code: is that being embedded into any of the JS served, or are the RUM tests completely independent of what is served?

    March 19th, 2014 at 07:44

    1. Dmitriy Akulov

      RUM tests are completely independent. We do not modify hosted files in any way.

      People who want to help can contribute performance data by embedding the code linked below in their websites:
      https://github.com/jsdelivr/jsdelivr#contribute-performance-data

      March 19th, 2014 at 07:47

  2. Fabricio C Zuardi

    Where can I find more documentation or an example of this update.json file to be included on a particular project (for example, a jquery plugin)?

    While libgrabber is not there yet, the way to publicize new versions is to add the new folders on a fork and create a pull request, right?

    March 19th, 2014 at 09:01

    1. Dmitriy Akulov

      README is not yet updated but you can see the discussion and examples of update.json over here https://github.com/jsdelivr/libgrabber/issues/1

      And yes, until it’s live we will have to create new PRs for each update.

      March 19th, 2014 at 10:00

  3. Petter

    Are there alternative domains to cdn.jsdelivr.net in case I want to download multiple resources at once and do not want to use the API?

    March 19th, 2014 at 10:23

    1. Dmitriy Akulov

      I don’t know what you mean. Can you explain a bit more about what you want to do?

      March 19th, 2014 at 10:29

      1. Adarsh

        @Dmitriy By using different domain names for downloading different resources, we can leverage the browser’s ability to download resources in parallel, since the number of simultaneous connections that can be open to a single domain is usually limited and quite low.

        So, if you had aliases such as cdn-1.jsdelivr.net, cdn-2.jsdelivr.net and cdn-3.jsdelivr.net, we could better leverage browser parallelization.

        March 19th, 2014 at 10:35

        1. Dmitriy Akulov

          Oh I see. No, unfortunately only one domain is available. The multi-CDN infrastructure makes it hard to support more domains.

          But you can use the grouping functionality instead. Don’t parallelize multiple HTTP requests; make fewer HTTP requests.

          March 19th, 2014 at 10:40

          1. Petter

            Why is it hard?

            You can simply have cdn2.jsdelivr.net be a CNAME for cdn.jsdelivr.net with a TTL of “forever”. Unless you use it, no cost is borne. If you use it, at least the DNS lookup will not have to go to the DNS balancer; there will only be one DNS-balancer round trip.

            March 19th, 2014 at 17:12

        2. Paul Irish

          > if you have aliases such as cdn-1.jsdelivr.net, cdn-2.jsdelivr.net and cdn-3.jsdelivr.net we could better leverage on browser parallelization.

          Domain sharding is an antipattern in HTTP/2 and SPDY. So, it’s on the way out.

          March 19th, 2014 at 14:00

          1. Steve Souders

            Hi, Paul. Definitely agree HTTP2 mitigates the need for domain sharding. However, I’ve been talking to some early adopters who believe there might be better performance if two TCP connections are used, but no one has data that tests that hypothesis. Do you know of any?

            March 19th, 2014 at 15:38

          2. Petter

            Basic TCP congestion control is based on an equal bandwidth share per TCP stream.

            So two TCP connections will, without fancy pants on, get twice the bandwidth through a large pipe when congestion control kicks in.

            March 19th, 2014 at 17:06

  4. Chords

    Good to see we’re supporting technology infrastructure in Africa. Oh wait.

    March 19th, 2014 at 10:53

    1. Dmitriy Akulov

      I would love to have a POP location in Africa. If some company donates a server there, I will add it.
      Also, CloudFlare plans to add a location in South Africa later this year, so jsDelivr will automatically get it too.

      March 19th, 2014 at 10:56

  5. sokratis

    Is it possible to also host Drupal sites? And is it free?

    March 19th, 2014 at 11:34

    1. Dmitriy Akulov

      Yes it is free. You can use all hosted libraries in your Drupal websites.

      March 19th, 2014 at 11:49

  6. Steve Souders

    This is awesome. Congrats on launching a great resource for the dev community. I always thought the combo handler idea from Yahoo would be great for a CDN.

    How did you decide on the 1 week Cache-Control max-age value? For fully specified versions that’s awesome, but if someone is using an alias then users might not see the new version for 1 week. Not the end of the world, but I’m curious how you decided on that time value.

    March 19th, 2014 at 12:11

    1. Dmitriy Akulov

      Just to be clear this is not a Mozilla project :)

      There was no particular reason why we chose 1 week; it was the most sensible value, I guess. 1 week is not that long to wait for a file to get updated, and it’s enough to offer a good performance experience for users.

      If you think another value would be better, feel free to open an issue in our GitHub repo.

      Let me know if you have any other feedback or ideas.

      March 19th, 2014 at 12:22

      1. Steve Souders

        Sorry Dmitriy! I incorrectly assumed you were at Mozilla. Great project regardless!

        We should look at how many people are using aliases to decide on the optimal cache time. Regardless, it should probably vary depending on the level of specificity: a full version specification (“3.10.50”) should have a very long cache time (e.g., 1 year) while the least specific (“latest”) should be short (e.g., 1 day).

        March 19th, 2014 at 12:35

        1. Dmitriy Akulov

          /latest/ versions have shorter cache times than /g/ URLs.

          But yeah, good idea. I will try to optimize the times based on the versions too.

          Thank you

          March 19th, 2014 at 12:45

    2. Petter

      Wait, there’s a 1 week max-age for fully specified libraries that never change?

      Why not cache forever?

      March 19th, 2014 at 17:15

      1. Dmitriy Akulov

        For static URLs the cache is 1 year.
        You can check the headers for the different request types, like /g/ and /latest/.
        1 week is for /latest/ URLs.

        March 19th, 2014 at 17:19

  7. Evan Prodromou

    As an open source Web developer, I use by default the great cdnjs service (http://cdnjs.com/) for my Web project, pump.io. I have also contributed some updates and new libraries.

    Why use jsDelivr, when cdnjs already exists?

    March 19th, 2014 at 12:39

    1. Dmitriy Akulov

      You can check uptime and performance stats over here http://www.cdnperf.com/

      March 19th, 2014 at 16:01

  8. Ash

    This looks handy. But with more and more of these cropping up, they all get a (little) bit less helpful: the more of them there are, the less chance a file will already be in the user’s cache (e.g. http://cdnjs.com/, http://www.bootstrapcdn.com/, and a couple more I can’t remember off the top of my head).

    But still – this is a good move!

    March 19th, 2014 at 15:32

  9. Brett Zamir

    Very cool.

    Might you be open to:
    1. Allowing URLs which prevent expiration entirely.
    2. Caching HTML with offline cache manifests.

    I had been hoping for such a resource in my desire to see a (hopefully temporary) workaround for browsers abandoning `globalStorage`: a way websites could insert an iframe whose source is an offline-cached HTML page (ideally first served via a fast CDN) and to which they could send `window.postMessage()` calls, whose JS code would prompt the user for permission to read, or, if allowed, write shared data (and perhaps send publish-subscribe events as well upon updates), in order to provide site-neutral namespaced data or read-only (write-by-origin) access to other sites for locally stored data, and also overcome same-domain storage limits if the calling library would hop across different domains to reassemble its data.

    Despite talk of “open data”, and despite the fact that OS file systems flourish because their files are file-creator-independent, there is currently no good way for projects, especially open source ones, non-profits, etc., to provide their data in a manner which doesn’t make them the gate-keeper and single point of failure and which provides an API out-of-the-box for reading and optionally writing their generated data in such a way that pure client-side-JS apps can use them. Besides the site-neutral storage of documents such as, say, HTML document drafts or social media posts, one interesting use case is an add-on infrastructure for a certain type of tool (such as a web-based IDE) which worked independently of the specific tool implementation.

    While I started some preliminary work at https://gist.github.com/brettz9/8876920 , it is not yet tested, documented, or ready, but I wanted to ask the questions now.

    March 19th, 2014 at 17:23

    1. Dmitriy Akulov

      If you want you can open an issue in our Github to discuss your idea further.

      March 19th, 2014 at 18:07

  10. Niloy Mondal

    How does this project pay the bills?

    March 19th, 2014 at 22:34

    1. Dmitriy Akulov

      All traffic is sponsored by companies such as MaxCDN, CloudFlare and Cedexis.

      March 20th, 2014 at 05:15

  11. Matt Freeman

    As much as I admire the motive, the technical side is somewhat flawed: many of the boxes are low-end virtual private servers donated by unverified individuals (given the LowEndBox solicitation), so how this inspires confidence or trust I don’t know. I think you should screen the companies you allow to participate in the network very carefully; I can see a rogue jQuery showing up soon, given how some of these “sysadmins” act. For example, who is jetdino? No contact info on the site? Company registration?

    March 19th, 2014 at 23:37

    1. Dmitriy Akulov

      I have root access to all servers; they are not shared. Also, the servers don’t host the files, they proxy-cache them from my origin, so it’s impossible for someone to just log in and modify a few files.

      I do screen all of the sponsoring companies. Also, jetdino was removed very recently (after I wrote the post) for instability issues.

      Let me know if that eases your concerns. If you want you can drop by our Github and discuss our security in detail and maybe improve a few things.

      March 20th, 2014 at 05:47

  12. Marcel

    First three octets of the user’s IP address… and what is saved when using IPv6?

    March 21st, 2014 at 04:04

    1. Dmitriy Akulov

      We don’t measure users on IPv6, since almost no CDNs support IPv6.

      March 21st, 2014 at 07:51

  13. Per Østergaard

    How could I trust this? As long as browsers do not support verifying the checksum of, say, JavaScript, this is the ideal way of giving ‘somebody’ access to your web sites. This sounds very dangerous for everything that is not hobby-like.

    March 25th, 2014 at 08:24

    1. Dmitriy Akulov

      All public CDNs are based on trust. If you have any ideas on how we could improve our system and win your trust, drop by our GitHub and open an issue to discuss it.

      March 25th, 2014 at 09:47

  14. Dan

    Looks great.
    Quick question: can anyone update an existing entry? E.g., could Mr. Evil update the jQuery hosted on the CDN?

    April 9th, 2014 at 06:22

    1. Dmitriy Akulov

      He could try. But we do check all committed files. And once the auto-updater comes online all new versions will be added automatically without any malware risk.

      April 9th, 2014 at 07:41

Comments are closed for this article.