JavaScript Articles

Sort by:


  1. From Map/Reduce to JavaScript Functional Programming

    Since ECMAScript 5.1, and Array.prototype.reduce have been available in major browsers. These two functions not only allow developers to describe a computation more clearly, but also spare them the work of writing loops to traverse an array; especially when the loop merely maps the array to a new array, or performs an accumulation, checksum, or similar reducing operation.

    Left: using ordinary loop; Right: using map & reduce




    Mapping means computing new values from the original array without changing the structure of the output. When map receives an array, you can be sure the output will be another array; the only difference is that each element inside it may be transformed from its original value/type to another value/type. So we can say the doMap function from the above example comes with the following type signature:

    doMap signature

    In the signature, [Number] denotes an array of numbers, so we can read the signature as:

    doMap is a function which turns an array of numbers into an array of booleans

    On the other hand, the reducing operation means we may change the structure of the input data type to a new one. For example, the signature of the doReduce is:

    doReduce signature

    Here, the array structure of [Number] is gone, which shows the major difference between map and reduce1.
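The doReduce figure is likewise an image; here is a hedged sketch matching the described behavior. Summing to a single Number is an assumption, chosen only because it illustrates the array structure disappearing:

```javascript
// A sketch of doReduce: collapses an array of numbers into a single number.
var doReduce = function (nums) {
  return nums.reduce(function (acc, n) {
    return acc + n; // accumulate into one value
  }, 0);
};

console.log(doReduce([1, 2, 3])); // 6
```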

    Functional Programming

    In fact, the concepts of map and reduce are even older than JavaScript and are widely used in other functional programming languages, such as Lisp or Haskell2. This observation is noted in the famous article ‘JavaScript: The World’s Most Misunderstood Programming Language’ by Douglas Crockford 3:

    JavaScript’s C-like syntax, including curly braces and the clunky for statement, makes it appear to be an ordinary procedural language. This is misleading because JavaScript has more in common with functional languages like Lisp or Scheme than with C or Java.

    This is one reason why JavaScript can do some functional-like things which other, more strictly OOP languages can’t or won’t do. For example, before Java 8 4 5, if we wanted to do the kind of ‘callback’ things common in JavaScript, we’d need to create a redundant ‘anonymous class’:

    Button button = (Button) findViewById(;
    button.setOnClickListener(new OnClickListener() {
      public void onClick(View v) {
        // do something
      }
    });

    Of course, whether to use anonymous callbacks in JavaScript is always controversial; we may encounter callback hell, especially as a component keeps growing. However, first-class functions can do lots of things beyond callbacks. In Haskell, we can organize a whole GUI program, like the Quake-like game6, with only functions7. That is, we can even do without classes, methods, inheritance, templates and the other machinery8 people usually expect when a program needs to be constructed.


    Frag, the Quake-like game in Haskell

    Therefore, in the JavaScript world, it’s possible to follow similar patterns to construct our programs, rather than hurriedly implementing our own ‘class’ and ‘class system,’ as programmers often do when starting on a problem9. Adding some functional flavor to JavaScript is not so bad after all, especially when features like map and reduce are supported by native APIs. Adopting this approach also means we can write more concise code10 by combining features instead of redefining them. The only limitation is that the language itself is still not functional enough, so we may run into trouble if we play too many tricks, although this should be solvable with the right library11.

    map and reduce are higher-order functions: they receive other functions as arguments, and functions can likewise be returned as results. This is important because it embodies the basic idea of composing computations in the functional world, gluing small pieces together with flexibility and scalability. For example, let’s take a look at the signature of our map expression mentioned above:

    map signature

    You’ll notice that the second argument indicates a function with type Number -> Boolean. In fact, you can give it any function with an a -> b type. This may not be so odd in the world of JavaScript — we write tons of callbacks in our daily work. However, the point is that higher-order functions are functions as well. They can be composed into larger ones until we build the complete program out of first-class functions and some powerful higher-order functions like id, reduce, curry, uncurry, arrow and bind12.
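A tiny illustration of that composition idea. The compose helper is illustrative, not a built-in; the functions are arbitrary examples of the a -> b types under discussion:

```javascript
// compose glues two functions into one: the basic move of composition.
var compose = (f, g) => (x) => f(g(x));

var double = (n) => n * 2;     // Number -> Number
var isPositive = (n) => n > 0; // Number -> Boolean
var doubledPositive = compose(isPositive, double); // Number -> Boolean

console.log([-1, 2].map(doubledPositive)); // [ false, true ]
```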

    Map/Reduce in Practice

    Since we may encounter language limitations, we can’t write our JavaScript code in fully functional style; however, we can borrow the idea of types and composition to do lots of things. For example, when you think in types, you will find that map is not only for data handling:

    map & fold signatures

    This is what the type signatures for map and reduce would look like in Haskell. We can substitute the a and b with anything. So, what if a becomes SQL and b becomes IO x? Remember, we’re thinking in types, and IO x is nothing more than an ordinary type like Int or URL:

    -- Let's construct queries from SQL statements.
    makeQueries strs = map (\str -> prepare conn str) strs
    doQuery qrys = foldl (\results query -> results >> query) (return ()) qrys
    -- Do query and get results.
    let stmts = [ "INSERT INTO Articles ('Functional JavaScript')"
                , "INSERT INTO Gecko VALUES ('30.a1')"
                , "DELETE FROM Articles WHERE version='deprecated'"
                ]
    main = execute (doQuery (makeQueries stmts))

    (Note: This is a simplified Haskell example for demo only. It can’t actually be executed.)

    The example creates a makeQueries function with map, which turns the SQL strings into IO ()13 actions; this means we generate several actions ready to be performed.

    makeQueries signature

    And then, the doQuery function, which is actually a reducing operation, will execute the queries:

    doQuery signature

    Note its reducing operation performs IO action with the help of the bind function (>>) of the specific Monad. This topic is not covered in this article, but readers should imagine this as a way to compose functions to execute them step by step, just as a Promise does24.

    The technique is useful not only in Haskell but also in JavaScript. We can use this idea with Promises and ES6 arrow functions to organize a similar computation:

      // Use a Promise-based library to do IO.
      var http = require("q-io/http")
         ,noop = new Promise(()=>{})
         ,prepare =
            (str)=>'' + str)
                         .then((res)=> res.body.toString())
                         // the 'then' is equal to the '>>'
         ,makeQuery =
            (strs)=>> prepare(str))
         ,doQuery =
            (qrys)=> qrys.reduce((results, qry)=> results.then(qry), noop)
         ,stmts = [ "articles/FunctionalJavaScript"
                  , "blob/c745ef73-ece9-46da-8f66-ebes574789b1"
                  , "books/language/Haskell"
                  ]
         ,main = doQuery(makeQuery(stmts));

    (NOTE: In Node.js, the similar querying code with map/reduce and Promise would not run like the Haskell version, since we need Lazy Promise14 and Lazy Evaluation15)

    We’re very close to what we want: define computations with functions, and then combine them to perform them later, although the idea of ‘later’ is not strictly true, since we don’t have real lazy evaluation in JavaScript. This can be approximated by the trick of keeping an undone Promise: holding on to its resolve function and calling it only when we actually want the chain to run. However, even this is tricky, and some issues remain unsolved.
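A small sketch of why the trick is needed and what it looks like. JavaScript Promises are eager (the executor runs immediately), so to defer a chain we keep the resolve function around; the names here are illustrative:

```javascript
// Promises are eager: the executor body runs right away.
var started = false;
new Promise(function (resolve) { started = true; resolve(); });
console.log(started); // true: the body already ran

// The 'undone Promise' trick: keep resolve and fire it when ready.
var kickOff;
var gate = new Promise(function (resolve) { kickOff = resolve; });
gate.then(function () { console.log("runs only after kickOff()"); });
kickOff(); // nothing downstream runs until this call
```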

    Another thing to note is that our program doesn’t need mutable variables; instead, computation results are transformed and forwarded at every step of the program. In fact, this is one reason why functional languages can stay pure, letting them benefit from optimization and avoid unexpected side-effects 1617.

    More on Functional Programming

    Map/reduce are the most common functional features in JavaScript. With other not-so-functional features like Promise, we can use tricks like Monad-style computation control, or we can easily define curried functions with ES6’s fat-arrow functions18 and so on. Also, there are some excellent libraries which provide nice functional features19 20 21, and some Domain Specific Languages (DSLs) are even born with functional spirit 22 23. Of course, the best way to understand functional programming is to learn a language designed for it, like Haskell, ML or OCaml. Scala, F#, and Erlang are good choices, too.

    1. In fact, map can be implemented with reduce. The most basic operation for structures like this is reduce.
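The footnote’s claim in code, as a sketch (mapViaReduce is an illustrative name, not a standard function):

```javascript
// map expressed in terms of reduce: build the output array element by element.
function mapViaReduce(arr, f) {
  return arr.reduce(function (acc, x) {
    acc.push(f(x));
    return acc;
  }, []);
}

console.log(mapViaReduce([1, 2, 3], function (n) { return n * 2; })); // [ 2, 4, 6 ]
```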



    4. Java 8 now includes the lambda function:

    5. C++ traditionally has not been a functional language, but C++11 introduces lambda functions:


    7. Haskell can represent data structure in function sense, even though declaring a function and a data type are still not the same thing:

    8. Yes, I’m cheating: we have Typeclass, Functor, instance and type variable in Haskell.

    9. For those who can’t live without classes, ES6 is in your future:

    10. I’ve found that some ‘bad functional code’ can be refactored as concisely as possible by strictly following some functional patterns. The most problematic ‘functional’ code happens when the coder mixes two programming styles badly. This can mix problems from two paradigms in a way that makes the code more complicated.

    11. I always hit a wall when I want to have nice Monad and lazy Promise in JavaScript. However, if you don’t mind a ‘crazy’ implementation, these are doable, and we can even have ‘Monad Transformer’ in JavaScript. Other features, like tail-recursion optimization and real lazy-evaluation, are impossible to do without runtime support.

    12. The functions arrow and bind are actually >>> and >>= in Haskell. They’re the keys in Haskell to composing our computation and program with specific effects; hence we can have state machines, networking, event handling, IO statements, and asynchronous flow control. Importantly, these are still plain functions.

    13. The type IO () means ‘do IO without returning any value.’ IO a means an IO action may yield a value of type a when the action has been performed, although some actions yield only (). For example, a function to get a string from user input would be ask :: IO String, while a function to print a string out would be print :: String -> IO ().



    16. JavaScript can do this with a library for structures like map, set, and list. Facebook created a library of immutable data structures called immutable-js for this:

    17. You can do almost the same thing with immutable-js and convince everyone to use only let and const to define variables.


    19. wu.js:

    20. Ramda:

    21. transducer.js:–A-JavaScript-Library-for-Transformation-of-Data

    22. LiveScript:

    23. Elm:

    24. No, they’re not really the same, but you *could* implement Promise in Monad

  2. You can’t go wrong watching JavaScript talks

    Late last week, I was collecting suggestions for year-end Hacks blog posts. As she headed out for the winter holidays, apps engineer Soledad Penadés gifted me “a bunch of cool talks I watched this year.”

    In fact, it’s a curated collection of presentations from JSConf, JSConf EU, and other recent developer conferences. Presenters include notable Mozillians and opinionated friends: hackers, JavaScripters, teachers, preachers, and hipsters. Many talks come with links to slides. They cover diverse topics: software development, developer culture and practices, developer tools, open source, JavaScript, CSS, new APIs.

    Part 1: Sole’s list

    Part 2: Making video better on Firefox

    Binge-watching JavaScript talks not your idea of a holiday break? If you’re watching video at all, Mozilla could use your help improving the video experience on Firefox.

    Here’s how you can contribute: Download Firefox Nightly, and help us test the implementation of Media Source Extensions (MSE) to support YouTube videos running in HTML5. Over recent weeks, Firefox engineers have been working hard addressing top bugs. To get this into the hands of users as quickly as possible, we need more help testing the experience. Here’s a detailed description of the testing task, thanks to Bugmaster Liz Henry from the QA team.

    File bugs using this link. Describe your platform, OS, and the steps needed to reproduce the problem.

    Thank you for helping to make the Firefox Desktop video experience awesome. See you in the new year.

  3. Pseudo elements, promise inspection, raw headers, and much more – Firefox Developer Edition 36

    Firefox 36 was just uplifted to the Developer Edition channel, so let’s take a look at the most important Developer Tools changes in this release. We will also cover some changes from Firefox 35 since it was released shortly before the initial Developer Edition announcement. There is a lot to talk about, so let’s get right to it.


    You can now inspect ::before and ::after pseudo elements.  They behave like other elements in the markup tree and inspector sidebars. (35, development notes)


    There is a new “Show DOM Properties” context menu item in the markup tree. (35, development notes, documentation about this feature on MDN)


    The box model highlighter now works on remote targets, so there is a full-featured highlighter even when working with pages on Firefox for Android or apps on Firefox OS. (36, development notes, and technical documentation for building your own custom highlighter)

    Shadow DOM content is now visible in the markup tree; note that you will need to set dom.webcomponents.enabled to true to test this feature (36, development notes, and see bug 1053898 for more work in this space).

    We borrowed a useful feature from Firebug and are now allowing more paste options when right clicking a node in the markup tree.  (36, development notes, documentation about this feature on MDN)


    Some more changes to the Inspector included in Firefox 35 & 36:

    • Deleting a node now selects the previous sibling instead of the parent (36, development notes)
    • There is higher contrast for the currently hovered node in the markup view (36, development notes)
    • Hover over a CSS selector in the computed view to highlight all the nodes that match that selector on the page. (35, development notes)

    Debugger / Console

    DOM Promises are now inspectable. You can inspect the promise’s state and value at any moment. (36, development notes)


    The debugger now recognizes and works with eval’ed sources. (36, development notes)

    Eval’ed sources support the //# sourceURL=path.js syntax, which will make them appear as a normal file in the debugger and in stack traces reported by Error.prototype.stack. See this post: for much more information. (36, development notes,  more development notes)

    Console statements now include the column number they originated from. (36, development notes)


    You are now able to connect to Firefox for Android from the WebIDE. See the documentation for debugging Firefox for Android on MDN. (36, development notes).

    We also made some changes to improve the user experience in the WebIDE:

    • Bring up developer tools by default when I select a runtime app / tab (35, development notes)
    • Re-select project on connect if last project is runtime app (35, development notes)
    • Auto-select and connect to last used runtime if available (35, development notes)
    • Font resizing (36, development notes)
    • You can now add a hosted app project by entering the base URL (e.g. “”) instead of requiring the full path to the manifest.webapp file (35, development notes)

    Network Monitor

    We added a plain request/response headers view to make it easier to view and copy the raw headers on a request. (35, development notes)


    Here is a list of all the bugs resolved for Firefox 35 and all the bugs resolved for Firefox 36.

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice or get in touch with the team at @FirefoxDevTools on Twitter.

  4. Introducing the JavaScript Internationalization API

    Firefox 29 shipped half a year ago, so this post is long overdue. Nevertheless, I wanted to pause for a second to discuss the Internationalization API, first shipped on desktop in that release (and passing all tests!). Norbert Lindenberg wrote most of the implementation, and I reviewed it and now maintain it. (Work by Makoto Kato should bring this to Android soon; b2g may take longer due to some b2g-specific hurdles. Stay tuned.)

    What’s internationalization?

    Internationalization (i18n for short — i, eighteen characters, n) is the process of writing applications in a way that allows them to be easily adapted for audiences from varied places, using varied languages. It’s easy to get this wrong by inadvertently assuming one’s users come from one place and speak one language, especially if you don’t even know you’ve made an assumption.

    function formatDate(d) {
      // Everyone uses month/date/year...right?
      var month = d.getMonth() + 1;
      var date = d.getDate();
      var year = d.getFullYear();
      return month + "/" + date + "/" + year;
    }

    function formatMoney(amount) {
      // All money is dollars with two fractional digits...right?
      return "$" + amount.toFixed(2);
    }

    function sortNames(names) {
      function sortAlphabetically(a, b) {
        var left = a.toLowerCase(), right = b.toLowerCase();
        if (left > right)
          return 1;
        if (left === right)
          return 0;
        return -1;
      }
      // Names always sort alphabetically...right?
      names.sort(sortAlphabetically);
    }

    JavaScript’s historical i18n support is poor

    i18n-aware formatting in traditional JS uses the various toLocaleString() methods. The resulting strings contain whatever details the implementation chooses to provide: there’s no way to pick and choose (did you need a weekday in that formatted date? is the year irrelevant?). Even if the proper details are included, the format might be wrong, e.g. decimal when percentage was desired. And you can’t choose a locale.

    As for sorting, JS provided almost no useful locale-sensitive text-comparison (collation) functions. localeCompare() existed but with a very awkward interface unsuited for use with sort. And it too didn’t permit choosing a locale or specific sort order.

    These limitations are bad enough that — this surprised me greatly when I learned it! — serious web applications that need i18n capabilities (most commonly, financial sites displaying currencies) will box up the data, send it to a server, have the server perform the operation, and send it back to the client. Server roundtrips just to format amounts of money. Yeesh.

    A new JS Internationalization API

    The new ECMAScript Internationalization API greatly improves JavaScript’s i18n capabilities. It provides all the flourishes one could want for formatting dates and numbers and sorting text. The locale is selectable, with fallback if the requested locale is unsupported. Formatting requests can specify the particular components to include. Custom formats for percentages, significant digits, and currencies are supported. Numerous collation options are exposed for use in sorting text. And if you care about performance, the up-front work to select a locale and process options can now be done once, instead of once every time a locale-dependent operation is performed.

    That said, the API is not a panacea. The API is “best effort” only. Precise outputs are almost always deliberately unspecified. An implementation could legally support only the oj locale, or it could ignore (almost all) provided formatting options. Most implementations will have high-quality support for many locales, but it’s not guaranteed (particularly on resource-constrained systems such as mobile).

    Under the hood, Firefox’s implementation depends upon the International Components for Unicode library (ICU), which in turn depends upon the Unicode Common Locale Data Repository (CLDR) locale data set. Our implementation is self-hosted: most of the implementation atop ICU is written in JavaScript itself. We hit a few bumps along the way (we haven’t self-hosted anything this large before), but nothing major.

    The Intl interface

    The i18n API lives on the global Intl object. Intl contains three constructors: Intl.Collator, Intl.DateTimeFormat, and Intl.NumberFormat. Each constructor creates an object exposing the relevant operation, efficiently caching locale and options for the operation. Creating such an object follows this pattern:

    var ctor = "Collator"; // or the others
    var instance = new Intl[ctor](locales, options);

    locales is a string specifying a single language tag or an arraylike object containing multiple language tags. Language tags are strings like en (English generally), de-AT (German as used in Austria), or zh-Hant-TW (Chinese as used in Taiwan, using the traditional Chinese script). Language tags can also include a “Unicode extension”, of the form -u-key1-value1-key2-value2..., where each key is an “extension key”. The various constructors interpret these specially.

    options is an object whose properties (or their absence, by evaluating to undefined) determine how the formatter or collator behaves. Its exact interpretation is determined by the individual constructor.

    Given locale information and options, the implementation will try to produce the closest behavior it can to the “ideal” behavior. Firefox supports 400+ locales for collation and 600+ locales for date/time and number formatting, so it’s very likely (but not guaranteed) the locales you might care about are supported.

    Intl generally provides no guarantee of particular behavior. If the requested locale is unsupported, Intl allows best-effort behavior. Even if the locale is supported, behavior is not rigidly specified. Never assume that a particular set of options corresponds to a particular format. The phrasing of the overall format (encompassing all requested components) might vary across browsers, or even across browser versions. Individual components’ formats are unspecified: a short-format weekday might be “S”, “Sa”, or “Sat”. The Intl API isn’t intended to expose exactly specified behavior.

    Date/time formatting


    The primary options properties for date/time formatting are as follows:

    weekday, era
    "narrow", "short", or "long". (era refers to typically longer-than-year divisions in a calendar system: BC/AD, the current Japanese emperor’s reign, or others.)
    month
    "2-digit", "numeric", "narrow", "short", or "long"
    hour, minute, second
    "2-digit" or "numeric"
    timeZoneName
    "short" or "long"
    timeZone
    Case-insensitive "UTC" will format with respect to UTC. Values like "CEST" and "America/New_York" don’t have to be supported, and they don’t currently work in Firefox.

    The values don’t map to particular formats: remember, the Intl API almost never specifies exact behavior. But the intent is that "narrow", "short", and "long" produce output of corresponding size — “S” or “Sa”, “Sat”, and “Saturday”, for example. (Output may be ambiguous: Saturday and Sunday both could produce “S”.) "2-digit" and "numeric" map to two-digit number strings or full-length numeric strings: “70” and “1970”, for example.

    The final used options are largely the requested options. However, if you don’t specifically request any weekday/year/month/day/hour/minute/second, then year/month/day will be added to your provided options.

    Beyond these basic options are a few special options:

    hour12
    Specifies whether hours will be in 12-hour or 24-hour format. The default is typically locale-dependent. (Details such as whether midnight is zero-based or twelve-based and whether leading zeroes are present are also locale-dependent.)

    There are also two special properties, localeMatcher (taking either "lookup" or "best fit") and formatMatcher (taking either "basic" or "best fit"), each defaulting to "best fit". These affect how the right locale and format are selected. The use cases for these are somewhat esoteric, so you should probably ignore them.
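The hour12 option from the list above in action (timeZone is pinned to UTC so the result is portable; the exact separator may vary by implementation):

```javascript
// Force 24-hour time regardless of the en-US default.
var fmt24 = new Intl.DateTimeFormat("en-US",
  { hour: "numeric", minute: "2-digit", hour12: false, timeZone: "UTC" });
console.log(fmt24.format(Date.UTC(2014, 6, 17, 17, 5))); // 17:05
```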

    Locale-centric options

    DateTimeFormat also allows formatting using customized calendaring and numbering systems. These details are effectively part of the locale, so they’re specified in the Unicode extension in the language tag.

    For example, Thai as spoken in Thailand has the language tag th-TH. Recall that a Unicode extension has the format -u-key1-value1-key2-value2.... The calendaring system key is ca, and the numbering system key is nu. The Thai numbering system has the value thai, and the Chinese calendaring system has the value chinese. Thus to format dates in this overall manner, we tack a Unicode extension containing both these key/value pairs onto the end of the language tag: th-TH-u-ca-chinese-nu-thai.

    For more information on the various calendaring and numbering systems, see the full DateTimeFormat documentation.


    After creating a DateTimeFormat object, the next step is to use it to format dates via the handy format() function. Conveniently, this function is a bound function: you don’t have to call it on the DateTimeFormat directly. Then provide it a timestamp or Date object.

    Putting it all together, here are some examples of how to create DateTimeFormat options for particular uses, with current behavior in Firefox.

    var msPerDay = 24 * 60 * 60 * 1000;
    // July 17, 2014 00:00:00 UTC.
    var july172014 = new Date(msPerDay * (44 * 365 + 11 + 197));

    Let’s format a date for English as used in the United States. Let’s include two-digit month/day/year, plus two-digit hours/minutes, and a short time zone to clarify that time. (The result would obviously be different in another time zone.)

    var options =
      { year: "2-digit", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short" };
    var americanDateTime =
      new Intl.DateTimeFormat("en-US", options).format;
    print(americanDateTime(july172014)); // 07/16/14, 5:00 PM PDT

    Or let’s do something similar for Portuguese — ideally as used in Brazil, but in a pinch Portugal works. Let’s go for a little longer format, with full year and spelled-out month, but make it UTC for portability.

    var options =
      { year: "numeric", month: "long", day: "numeric",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short", timeZone: "UTC" };
    var portugueseTime =
      new Intl.DateTimeFormat(["pt-BR", "pt-PT"], options);
    // 17 de julho de 2014 00:00 GMT

    How about a compact, UTC-formatted weekly Swiss train schedule? We’ll try the official languages from most to least popular to choose the one that’s most likely to be readable.

    var swissLocales = ["de-CH", "fr-CH", "it-CH", "rm-CH"];
    var options =
      { weekday: "short",
        hour: "numeric", minute: "numeric",
        timeZone: "UTC", timeZoneName: "short" };
    var swissTime =
      new Intl.DateTimeFormat(swissLocales, options).format;
    print(swissTime(july172014)); // Do. 00:00 GMT

    Or let’s try a date in descriptive text by a painting in a Japanese museum, using the Japanese calendar with year and era:

    var jpYearEra =
      new Intl.DateTimeFormat("ja-JP-u-ca-japanese",
                              { year: "numeric", era: "long" });
    print(jpYearEra.format(july172014)); // 平成26年

    And for something completely different, a longer date for use in Thai as used in Thailand — but using the Thai numbering system and Chinese calendar. (Quality implementations such as Firefox’s would treat plain th-TH as th-TH-u-ca-buddhist-nu-latn, imputing Thailand’s typical Buddhist calendar system and Latin 0-9 numerals.)

    var options =
      { year: "numeric", month: "long", day: "numeric" };
    var thaiDate =
      new Intl.DateTimeFormat("th-TH-u-nu-thai-ca-chinese", options);
    print(thaiDate.format(july172014)); // ๒๐ 6 ๓๑

    Calendar and numbering system bits aside, it’s relatively simple. Just pick your components and their lengths.

    Number formatting


    The primary options properties for number formatting are as follows:

    "currency", "percent", or "decimal" (the default) to format a value of that kind.
    A three-letter currency code, e.g. USD or CHF. Required if style is "currency", otherwise meaningless.
    "code", "symbol", or "name", defaulting to "symbol". "code" will use the three-letter currency code in the formatted string. "symbol" will use a currency symbol such as $ or £. "name" typically uses some sort of spelled-out version of the currency. (Firefox currently only supports "symbol", but this will be fixed soon.)
    An integer from 1 to 21 (inclusive), defaulting to 1. The resulting string is front-padded with zeroes until its integer component contains at least this many digits. (For example, if this value were 2, formatting 3 might produce “03”.)
    minimumFractionDigits, maximumFractionDigits
    Integers from 0 to 20 (inclusive). The resulting string will have at least minimumFractionDigits, and no more than maximumFractionDigits, fractional digits. The default minimum is currency-dependent (usually 2, rarely 0 or 3) if style is "currency", otherwise 0. The default maximum is 0 for percents, 3 for decimals, and currency-dependent for currencies.
    minimumSignificantDigits, maximumSignificantDigits
    Integers from 1 to 21 (inclusive). If present, these override the integer/fraction digit control above to determine the minimum/maximum significant figures in the formatted number string, as determined in concert with the number of decimal places required to accurately specify the number. (Note that in a multiple of 10 the significant digits may be ambiguous, as in “100” with its one, two, or three significant digits.)
    Boolean (defaulting to true) determining whether the formatted string will contain grouping separators (e.g. “,” as English thousands separator).

    NumberFormat also recognizes the esoteric, mostly ignorable localeMatcher property.
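The significant-digit options deserve a quick demonstration, since they silently override the integer/fraction digit controls:

```javascript
// Pin the formatted number to exactly three significant figures.
var sig = new Intl.NumberFormat("en-US",
  { minimumSignificantDigits: 3, maximumSignificantDigits: 3 });
console.log(sig.format(3.14159)); // 3.14
console.log(sig.format(5));       // 5.00
console.log(sig.format(123456));  // 123,000
```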

    Locale-centric options

    Just as DateTimeFormat supported custom numbering systems in the Unicode extension using the nu key, so too does NumberFormat. For example, the language tag for Chinese as used in China is zh-CN. The value for the Han decimal numbering system is hanidec. To format numbers for these systems, we tack a Unicode extension onto the language tag: zh-CN-u-nu-hanidec.

    For complete information on specifying the various numbering systems, see the full NumberFormat documentation.


    NumberFormat objects have a format function property just as DateTimeFormat objects do. And as there, the format function is a bound function that may be used in isolation from the NumberFormat.

    Here are some examples of how to create NumberFormat options for particular uses, with Firefox’s behavior. First let’s format some money for use in Chinese as used in China, specifically using Han decimal numbers (instead of much more common Latin numbers). Select the "currency" style, then use the code for Chinese renminbi (yuan), grouping by default, with the usual number of fractional digits.

    var hanDecimalRMBInChina =
      new Intl.NumberFormat("zh-CN-u-nu-hanidec",
                            { style: "currency", currency: "CNY" });
    print(hanDecimalRMBInChina.format(1314.25)); // ¥ 一,三一四.二五

    Or let’s format a United States-style gas price, with its peculiar thousandths-place 9, for use in English as used in the United States.

    var gasPrice =
      new Intl.NumberFormat("en-US",
                            { style: "currency", currency: "USD",
                              minimumFractionDigits: 3 });
    print(gasPrice.format(5.259)); // $5.259

    Or let’s try a percentage in Arabic, meant for use in Egypt. Make sure the percentage has at least two fractional digits. (Note that this and all the other right-to-left examples may appear with a different visual ordering when rendered in an RTL context.)

    var arabicPercent =
      new Intl.NumberFormat("ar-EG",
                            { style: "percent",
                              minimumFractionDigits: 2 }).format;
    print(arabicPercent(0.438)); // ٤٣٫٨٠٪

    Or suppose we’re formatting for Persian as used in Afghanistan, and we want at least two integer digits and no more than two fractional digits.

    var persianDecimal =
      new Intl.NumberFormat("fa-AF",
                            { minimumIntegerDigits: 2,
                              maximumFractionDigits: 2 });
    print(persianDecimal.format(3.1416)); // ۰۳٫۱۴

    Finally, let’s format an amount of Bahraini dinars, for Arabic as used in Bahrain. Unusually compared to most currencies, Bahraini dinars divide into thousandths (fils), so our number will have three places. (Again note that apparent visual ordering should be taken with a grain of salt.)

    var bahrainiDinars =
      new Intl.NumberFormat("ar-BH",
                            { style: "currency", currency: "BHD" });
    print(bahrainiDinars.format(3.17)); // د.ب.‏ ٣٫١٧٠



    The primary options properties for collation are as follows:

    usage: "sort" or "search" (defaulting to "sort"), specifying the intended use of this Collator. (A search collator might want to consider more strings equivalent than a sort collator would.)
    sensitivity: "base", "accent", "case", or "variant". This affects how sensitive the collator is to characters that have the same “base letter” but have different accents/diacritics and/or case. (Base letters are locale-dependent: “a” and “ä” have the same base letter in German but are different letters in Swedish.) "base" sensitivity considers only the base letter, ignoring modifications (so for German “a”, “A”, and “ä” are considered the same). "accent" considers the base letter and accents but ignores case (so for German “a” and “A” are the same, but “ä” differs from both). "case" considers the base letter and case but ignores accents (so for German “a” and “ä” are the same, but “A” differs from both). Finally, "variant" considers base letter, accents, and case (so for German “a”, “ä”, and “A” all differ). If usage is "sort", the default is "variant"; otherwise it’s locale-dependent.
    numeric: Boolean (defaulting to false) determining whether complete numbers embedded in strings are considered when sorting. For example, numeric sorting might produce "F-4 Phantom II", "F-14 Tomcat", "F-35 Lightning II"; non-numeric sorting might produce "F-14 Tomcat", "F-35 Lightning II", "F-4 Phantom II".
    caseFirst: "upper", "lower", or "false" (the default). Determines how case is considered when sorting: "upper" places uppercase letters first ("B", "a", "c"), "lower" places lowercase first ("a", "c", "B"), and "false" ignores case entirely ("a", "B", "c"). (Note: Firefox currently ignores this property.)
    ignorePunctuation: Boolean (defaulting to false) determining whether to ignore embedded punctuation when performing the comparison (for example, so that "biweekly" and "bi-weekly" compare equivalent).

    And there’s that localeMatcher property that you can probably ignore.
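    A quick sketch of my own showing two of these options in action:

    ```javascript
    // sensitivity: "accent" ignores case differences but not accents.
    var caseBlind = new Intl.Collator("en-US", { sensitivity: "accent" });
    console.log(caseBlind.compare("resume", "RESUME")); // 0 (considered equal)

    // numeric: true compares embedded digit runs by numeric value.
    var byVersion = new Intl.Collator("en-US", { numeric: true });
    console.log(["F-14 Tomcat", "F-4 Phantom II"].sort(byVersion.compare));
    // ["F-4 Phantom II", "F-14 Tomcat"]
    ```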

    Locale-centric options

    The main Collator option specified as part of the locale’s Unicode extension is co, selecting the kind of sorting to perform: phone book (phonebk), dictionary (dict), and many others.

    Additionally, the keys kn and kf may, optionally, duplicate the numeric and caseFirst properties of the options object. But they’re not guaranteed to be supported in the language tag, and options is much clearer than language tag components. So it’s best to only adjust these options through options.

    These key-value pairs are included in the Unicode extension the same way they’ve been included for DateTimeFormat and NumberFormat; refer to those sections for how to specify these in a language tag.


    Collator objects have a compare function property. This function accepts two arguments x and y and returns a number less than zero if x compares less than y, 0 if x compares equal to y, or a number greater than zero if x compares greater than y. As with the format functions, compare is a bound function that may be extracted for standalone use.

    Let’s try sorting a few German surnames, for use in German as used in Germany. There are actually two different sort orders in German, phonebook and dictionary. Phonebook sort emphasizes sound, and it’s as if “ä”, “ö”, and so on were expanded to “ae”, “oe”, and so on prior to sorting.

    var names =
      ["Hochberg", "Hönigswald", "Holzman"];
    var germanPhonebook = new Intl.Collator("de-DE-u-co-phonebk");
    // as if sorting ["Hochberg", "Hoenigswald", "Holzman"]:
    //   Hochberg, Hönigswald, Holzman
    print(names.sort(germanPhonebook.compare).join(", "));

    Some German words conjugate with extra umlauts, so in dictionaries it’s sensible to order ignoring umlauts (except when ordering words differing only by umlauts: schon before schön).

    var germanDictionary = new Intl.Collator("de-DE-u-co-dict");
    // as if sorting ["Hochberg", "Honigswald", "Holzman"]:
    //   Hochberg, Holzman, Hönigswald
    print(names.sort(germanDictionary.compare).join(", "));

    Or let’s sort a list of Firefox versions with various typos (different capitalizations, random accents and diacritical marks, extra hyphenation), in English as used in the United States. We want to sort respecting version number, so do a numeric sort so that numbers in the strings are compared by value, not character-by-character.

    var firefoxen =
      ["FireFøx 3.6",
       "Fire-fox 1.0",
       "Firefox 29",
       "FÍrefox 3.5",
       "Fírefox 18"];
    var usVersion =
      new Intl.Collator("en-US",
                        { sensitivity: "base",
                          numeric: true,
                          ignorePunctuation: true });
    // Fire-fox 1.0, FÍrefox 3.5, FireFøx 3.6, Fírefox 18, Firefox 29
    print(firefoxen.sort(usVersion.compare).join(", "));

    Last, let’s do some locale-aware string searching that ignores case and accents, again in English as used in the United States.

    // Comparisons work with both composed and decomposed forms.
    var decoratedBrowsers =
       ["A\u0362maya",  // A͢maya
        "CH\u035Brôme", // CH͛rôme
        "FirefÓx",
        "SAFÁRĪ",
        "o\u0323pERA",  // ọpERA
        "I\u0352E"];    // I͒E
    var fuzzySearch =
      new Intl.Collator("en-US",
                        { usage: "search", sensitivity: "base" });
    function findBrowser(browser) {
      function cmp(other) {
        return fuzzySearch.compare(browser, other) === 0;
      }
      return cmp;
    }
    print(decoratedBrowsers.findIndex(findBrowser("Firêfox"))); // 2
    print(decoratedBrowsers.findIndex(findBrowser("Safåri")));  // 3
    print(decoratedBrowsers.findIndex(findBrowser("Ãmaya")));   // 0
    print(decoratedBrowsers.findIndex(findBrowser("Øpera")));   // 4
    print(decoratedBrowsers.findIndex(findBrowser("Chromè")));  // 1
    print(decoratedBrowsers.findIndex(findBrowser("IË")));      // 5

    Odds and ends

    It may be useful to determine whether support for some operation is provided for particular locales, or to determine whether a locale is supported. Intl provides supportedLocalesOf() functions on each constructor, and resolvedOptions() functions on each prototype, to expose this information.

    var navajoLocales =
      Intl.Collator.supportedLocalesOf(["nv"], { usage: "sort" });
    print(navajoLocales.length > 0
          ? "Navajo collation supported"
          : "Navajo collation not supported");
    var germanFakeRegion =
      new Intl.DateTimeFormat("de-XX", { timeZone: "UTC" });
    var usedOptions = germanFakeRegion.resolvedOptions();
    print(usedOptions.locale);   // de
    print(usedOptions.timeZone); // UTC

    Legacy behavior

    The ES5 toLocaleString-style and localeCompare functions previously had no particular semantics, accepted no particular options, and were largely useless. So the i18n API reformulates them in terms of Intl operations. Each method now accepts additional trailing locales and options arguments, interpreted just as the Intl constructors would do. (Except that for toLocaleTimeString and toLocaleDateString, different default components are used if options aren’t provided.)

    For brief use where precise behavior doesn’t matter, the old methods are fine to use. But if you need more control or are formatting or comparing many times, it’s best to use the Intl primitives directly.
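    For example (a sketch of mine; the exact strings assume the German locale data is available), the old methods now honor the same options:

    ```javascript
    // The retrofitted ES5 methods accept trailing locales/options arguments.
    var n = 1234.567;
    console.log(n.toLocaleString("de-DE", { maximumFractionDigits: 1 }));
    // "1.234,6"
    console.log("ä".localeCompare("z", "de") < 0); // true: "ä" sorts near "a"
    ```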


    Internationalization is a fascinating topic whose complexity is bounded only by the varied nature of human communication. The Internationalization API addresses a small but quite useful portion of that complexity, making it easier to produce locale-sensitive web applications. Go use it!

    (And a special thanks to Norbert Lindenberg, Anas El Husseini, Simon Montagu, Gary Kwong, Shu-yu Guo, Ehsan Akhgari, the people of, and anyone I may have forgotten [sorry!] who provided feedback on this article or assisted me in producing and critiquing the examples. The English and German examples were the limit of my knowledge, and I’d have been completely lost on the other examples without their assistance. Blame all remaining errors on me. Thanks again!)

  5. QuaggaJS – Building a barcode-scanner for the Web

    Have you ever tried to type in a voucher code on your mobile phone or simply enter the number of your membership card into a web form?

    These are just two examples of time-consuming and error-prone tasks which can be avoided by taking advantage of printed barcodes. This is nothing new; many solutions exist for reading barcodes with a regular camera, like zxing, but they require a native platform such as Android or iOS. I wanted a solution which works on the Web, without plugins of any sort, and which even Firefox OS could leverage.

    My general interest in computer vision and web technologies fueled my curiosity about whether something like this would be possible. Not just a simple scanner, but a scanner equipped with localization mechanisms to find a barcode in real time.

    The result is a project called QuaggaJS, which is hosted on GitHub. Take a look at the demo pages to get an idea of what this project is all about.

    Reading barcodes

    How does it work?

    Simply speaking, the pipeline can be divided into the following three steps:

    1. Reading the image and converting it into a binary representation
    2. Determining the location and rotation of the barcode
    3. Decoding the barcode based on its type (EAN, Code128)

    The first step requires the source to be either a webcam stream or an image file, which is then converted into gray-scale and stored in a 1D array. After that, the image data is passed along to the locator, which is responsible for finding a barcode-like pattern in the image. And finally, if a pattern is found, the decoder tries to read the barcode and return the result. You can read more about these steps in how barcode localization works in QuaggaJS.
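    The gray-scale conversion in the first step can be sketched roughly like this (my own illustration using the common luminance weights; QuaggaJS's actual implementation may differ):

    ```javascript
    // Convert RGBA canvas data into a 1D gray-scale Uint8Array,
    // one byte (0-255) per pixel.
    function computeGray(rgbaData, grayData) {
      for (var i = 0; i < grayData.length; i++) {
        grayData[i] = (0.299 * rgbaData[i * 4] +      // red
                       0.587 * rgbaData[i * 4 + 1] +  // green
                       0.114 * rgbaData[i * 4 + 2])   // blue
                      | 0;
      }
    }
    ```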

    The real-time challenge

    One of the main challenges was to get the pipeline up to speed and fast enough to be considered a real-time application. When talking about real-time in image-processing applications, I consider 25 frames per second (FPS) the lower boundary. This means that the entire pipeline has to complete within 40 ms.

    The core parts of QuaggaJS are made up of computer vision algorithms which tend to be quite heavy on array access. As I already mentioned, the input image is stored in a 1D array. This is not a regular JavaScript Array, but a Typed Array. Since the image has already been converted to gray-scale in the first step, the range of each pixel’s value is set between 0 and 255. This is why Uint8Arrays are used for all image-related buffers.

    Memory efficiency

    One of the key ways to achieve real-time speed for interactive applications is to create memory-efficient code that avoids large GC (garbage collection) pauses. That is why I removed most of the memory-allocation calls by simply reusing initially created buffers. However, this is only useful when you know the buffer size up front and the size does not change over time, as with images.
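    The idea can be sketched as follows (a hypothetical helper of my own, not QuaggaJS's actual API):

    ```javascript
    // Allocate the buffer once and hand the same instance back on every
    // frame, so the GC never sees per-frame garbage.
    var grayBuffer = null;
    function acquireGrayBuffer(size) {
      if (grayBuffer === null || grayBuffer.length !== size) {
        grayBuffer = new Uint8Array(size); // only on first use or resize
      }
      return grayBuffer;
    }
    ```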


    When you are curious why a certain part of your application runs too slow, a CPU profile may come in handy.

    Firefox includes some wonderful tools to create CPU profiles for the running JavaScript code. During development, this proved to be viable for pinpointing performance bottlenecks and finding functions which caused the most load on the CPU. The following profile was recorded during a session with a webcam on an Intel Core i7-4600U. (Config: video 640×480, half-sampling barcode-localization)


    The profile is zoomed in and shows four subsequent frames. On average, one frame in the pipeline is processed in roughly 20 ms. This can be considered fast enough, even when running on machines having a less powerful CPU, like mobile phones or tablets.

    I marked each step of the pipeline in a different color; green is the first, blue the second and red the third one. The drill-down shows that the localization step consumes most of the time (55.6 %), followed by reading the input stream (28.4 %) and finally by decoding (3.7 %). It is also worth noting that skeletonize is one of the most expensive functions in terms of CPU usage. Because of that, I re-implemented the entire skeletonizing algorithm in asm.js by hand to see whether it could run even faster.


    Asm.js is a highly optimizable subset of JavaScript that can execute at close to native speed. It promises a lot of performance gains when used for compute-intensive tasks (take a look at MASSIVE), like most computer vision algorithms. That’s why I ported the entire skeletonizer module to asm.js. This was a very tedious task, because you are actually not supposed to write asm.js code by hand. Usually asm.js code is generated when it is cross-compiled from C/C++ or other LLVM languages using emscripten. But I did it anyway, just to prove a point.

    The first thing that needs to be sorted out is how to get the image data into the asm.js module, along with parameters like the size of the image. The module is designed to fit right into the existing implementation and therefore incorporates some constraints, like a square image size. However, the skeletonizer is only applied on chunks of the original image, which are all square by definition. Not only is the input data relevant, but three temporary buffers are also needed during processing (eroded, temp, skeleton).

    In order to cover that, an initial buffer is created, big enough to hold all four images at once. The buffer is shared between the caller and the module. Since we are working with a single buffer, we need to keep a reference to the position of each image. It’s like playing with pointers in C.

    function skeletonize() {
      var subImagePtr = 0,
        erodedImagePtr = 0,
        tempImagePtr = 0,
        skelImagePtr = 0;
      erodedImagePtr = imul(size, size) | 0;
      tempImagePtr = (erodedImagePtr + erodedImagePtr) | 0;
      skelImagePtr = (tempImagePtr + erodedImagePtr) | 0;
      // ...

    To get a better understanding of the idea behind the structure of the buffer, compare it with the following illustration:

    Buffer in Skeletonizer

    The buffer in green represents the allocated memory, which is passed into the asm.js module upon creation. This buffer is then divided into four blue blocks, each of which contains the data for the respective image. In order to get a reference to the correct data block, the variables (ending with Ptr) point to that exact position.

    Now that we have set up the buffer, it is time to take a look at the erode function, which is part of the skeletonizer written in vanilla JavaScript:

    function erode(inImageWrapper, outImageWrapper) {
      var v, u, sum, yStart1, yStart2, xStart1, xStart2,
        inImageData = inImageWrapper.data,
        outImageData = outImageWrapper.data,
        height = inImageWrapper.size.y,
        width = inImageWrapper.size.x;
      for ( v = 1; v < height - 1; v++) {
        for ( u = 1; u < width - 1; u++) {
          yStart1 = v - 1;
          yStart2 = v + 1;
          xStart1 = u - 1;
          xStart2 = u + 1;
          sum = inImageData[yStart1 * width + xStart1] +
            inImageData[yStart1 * width + xStart2] +
            inImageData[v * width + u] +
            inImageData[yStart2 * width + xStart1] +
            inImageData[yStart2 * width + xStart2];
          outImageData[v * width + u] = sum === 5 ? 1 : 0;
        }
      }
    }

    This code was then modified to conform to the asm.js specification.

    "use asm";
    // initially creating a view on the buffer (passed in)
    var images = new stdlib.Uint8Array(buffer),
      size = foreign.size | 0;
    function erode(inImagePtr, outImagePtr) {
      inImagePtr = inImagePtr | 0;
      outImagePtr = outImagePtr | 0;
      var v = 0,
        u = 0,
        sum = 0,
        yStart1 = 0,
        yStart2 = 0,
        xStart1 = 0,
        xStart2 = 0,
        offset = 0;
      for ( v = 1; (v | 0) < ((size - 1) | 0); v = (v + 1) | 0) {
        offset = (offset + size) | 0;
        for ( u = 1; (u | 0) < ((size - 1) | 0); u = (u + 1) | 0) {
          yStart1 = (offset - size) | 0;
          yStart2 = (offset + size) | 0;
          xStart1 = (u - 1) | 0;
          xStart2 = (u + 1) | 0;
          sum = ((images[(inImagePtr + yStart1 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart1 + xStart2) | 0] | 0) +
            (images[(inImagePtr + offset + u) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart2) | 0] | 0)) | 0;
          if ((sum | 0) == (5 | 0)) {
            images[(outImagePtr + offset + u) | 0] = 1;
          } else {
            images[(outImagePtr + offset + u) | 0] = 0;
          }
        }
      }
    }

    Although the basic code structure did not significantly change, the devil is in the detail. Instead of passing in references to JavaScript objects, the respective indexes of the input and output images, pointing into the buffer, are used. Another noticeable difference is the repeated casting of values to integers with the | 0 notation, which is necessary for secure array access. There is also an additional variable offset defined, which is used as a counter to keep track of the absolute position in the buffer. This approach replaces the multiplication used for determining the current position. In general, asm.js does not allow multiplications of integers except when using the imul operator.

    Finally, the use of the ternary operator ( ? : ) is forbidden in asm.js, so it has been replaced by a regular if...else statement.
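    Both idioms mentioned above also work in plain JavaScript; a tiny illustration:

    ```javascript
    // `| 0` coerces a value to a 32-bit integer (asm.js uses it to type
    // values and guard array indexes); Math.imul is 32-bit integer multiply.
    console.log(2.7 | 0);                 // 2
    console.log(Math.imul(70000, 70000)); // 605032704 (wraps at 2^32)
    ```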

    Performance comparison

    And now it is time to answer the more important question: How much faster is the asm.js implementation compared to regular JavaScript? Let’s take a look at the performance profiles, of which the first one represents the normal JavaScript version and the second asm.js.

    Image Stream Profile


    Surprisingly, the difference between the two implementations is not as big as you might expect (~10%). Apparently, the initial JavaScript code was already written clean enough, so that the JIT compiler could already take full advantage of that. This assumption can only be proven wrong or right if someone re-implements the algorithm in C/C++ and cross-compiles it to asm.js using emscripten. I’m almost sure that the result would differ from my naïve port and produce much more optimized code.


    Besides performance, there are many other parts that must fit together in order to get the best experience. One of those parts is the portal to the user’s world, the camera. As we all know, getUserMedia provides an API to gain access to the device’s camera. Here, the difficulty lies within the differences among all major browser vendors, where the constraints, resolutions and events are handled differently.


    If you are targeting devices other than regular laptops or computers, the chances are high that these devices offer more than one camera. Nowadays almost every tablet or smartphone has a back- and front-facing camera. When using Firefox, selecting the camera programmatically is not possible. Every time the user confirms access to the camera, he or she has to select the desired one. This is handled differently in Chrome, where MediaStreamTrack.getSources exposes the available sources which can then be filtered. You can find the defined sources in the W3C draft.

    The following snippet demonstrates how to get preferred access to the user’s back-facing camera:

    MediaStreamTrack.getSources(function(sourceInfos) {
      var envSource = sourceInfos.filter(function(sourceInfo) {
        return sourceInfo.kind == "video"
            && sourceInfo.facing == "environment";
      }).reduce(function(a, source) {
        return source;
      }, null);
      var constraints = {
        audio : false,
        video : {
          optional : [{
            sourceId : envSource ? envSource.id : null
          }]
        }
      };
    });

    In the use-case of barcode-scanning, the user is most likely going to use the device’s back-facing camera. This is where choosing a camera up front can enormously improve the user experience.


    Another very important topic when working with video is the actual resolution of the stream. This can be controlled with additional constraints to the video stream.

    var hdConstraint = {
      video: {
        mandatory: {
          width: { min: 1280 },
          height: { min: 720 }
        }
      }
    };

    The above snippet, when added to the video constraints, tries to get a video stream with the specified quality. If no camera meets those requirements, a ConstraintNotSatisfiedError is returned in the callback. However, these constraints are not fully compatible with all browsers, since some use minWidth and minHeight instead.
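    For those browsers, a fallback might be spelled like this (a sketch of mine using the older constraint form the text refers to):

    ```javascript
    // Older implementations understood flat minWidth/minHeight keys
    // instead of the nested {width: {min: ...}} form shown above.
    var legacyHdConstraint = {
      video: {
        mandatory: { minWidth: 1280, minHeight: 720 }
      }
    };
    ```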


    Barcodes are typically rather small and must be close-up to the camera in order to be correctly identified. This is where a built-in auto-focus can help to increase the robustness of the detection algorithm. However, the getUserMedia API lacks functionality for triggering the auto-focus and most devices do not even support continuous autofocus in browser-mode. If you have an up-to-date Android device, chances are high that Firefox is able to use the autofocus of your camera (e.g. Nexus 5 or HTC One). Chrome on Android does not support it yet, but there is already an issue filed.


    And there is still the question of the performance impact caused by grabbing the frames from the video stream. The results have already been presented in the profiling section. They show that almost 30%, or 8 ms, of CPU time is consumed just fetching the image and storing it in a TypedArray instance. The typical process of reading the data from a video source looks as follows:

    1. Make sure the camera-stream is attached to a video-element
    2. Draw the image to a canvas using ctx.drawImage
    3. Read the data from the canvas using ctx.getImageData
    4. Convert the video to gray-scale and store it inside a TypedArray
    var video = document.getElementById("camera"),
        ctx = document.getElementById("canvas").getContext("2d"),
        ctxData,
        width = video.videoWidth,
        height = video.videoHeight,
        data = new Uint8Array(width * height);
    ctx.drawImage(video, 0, 0);
    ctxData = ctx.getImageData(0, 0, width, height).data;
    computeGray(ctxData, data);

    It would be very much appreciated if there were a way to get lower-level access to the camera frames without going through the hassle of drawing and reading every single image. This is especially important when processing higher resolution content.

    Wrap up

    It has been real fun to create a project centered on computer vision, especially because it connects so many parts of the web platform. Hopefully, limitations such as the missing auto-focus on mobile devices, or reading the camera stream, will be sorted out in the near future. Still, it is pretty amazing what you can build nowadays by simply using HTML and JavaScript.

    Another lesson learned is that implementing asm.js by hand is both hard and unnecessary if you already know how to write proper JavaScript code. However, if you already have an existing C/C++ codebase which you would like to port, emscripten does a wonderful job. This is where asm.js comes to the rescue.

    Finally, I hope more and more people are jumping on the computer vision path, even if technologies like WebCL are still a long way down the road. The future for Firefox might even be for ARB_compute_shader to eventually jump on to the fast track.

  6. MetricsGraphics.js – a lightweight graphics library based on D3

    MetricsGraphics.js is a library built on top of D3 that is optimized for visualizing and laying out time-series data. It provides a simple way to produce common types of graphics in a principled and consistent way. The library supports line charts, scatterplots, histograms, barplots and data tables, as well as features like rug plots and basic linear regression.

    The library elevates the layout and explanation of these graphics to the same level of priority as the graphics. The emergent philosophy is one of efficiency and practicality.

    Hamilton Ulmer and I began building the library earlier this year, during which time we found ourselves copy-and-pasting bits of code in various projects. This led to errors and inconsistent features, and so we decided to develop a single library that provides common functionality and aesthetics to all of our internal projects.

    Moreover, at the time, we were having limited success with our attempts to get casual programmers and non-programmers within the organization to use a library like D3 to create dashboards. The learning curve was proving a bit of an obstacle. So it seemed reasonable to create a level of indirection using well-established design patterns to try and bridge that chasm.

    Our API is simple. All that’s needed to create a graphic is to specify a few default parameters and then, if desired, override one or more of the optional parameters on offer. We don’t maintain state. To update a graphic, one would call data_graphic on the same target element.

    The library is also data-source agnostic. While it provides a number of convenience functions and options that allow for graphics to better handle things like missing observations, it doesn’t care where the data comes from.

    A quick tutorial

    Here’s a quick tutorial to get you started. Say that we have some data on a scholarly topic like UFO sightings. We decide that we’re interested in creating a line chart of yearly sightings.

    We create a JSON file called data/ufo-sightings.json based on the original dataset, where we aggregate yearly sightings. The data doesn’t have to be JSON of course, but that will mean less work later on.

    The next thing we do is load the data:

    d3.json('data/ufo-sightings.json', function(data) {
      // data is now available here
    });

    data_graphic expects the data object to be an array of objects, which is already the case for us. That’s good. It also needs dates to be timestamps if they’re in a format like yyyy-mm-dd. We’ve got aggregated yearly data, so we don’t need to worry about that. So now, all we need to do is create the graphic and place it in the element specified in target.
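    For instance, the aggregated file might look like this (an assumed shape for illustration; the values are made up, but the year/sightings fields match the accessors used below):

    ```javascript
    // Each object is one observation; x_accessor and y_accessor name
    // these fields when the graphic is created.
    var data = [
      { "year": 1945, "sightings": 6 },
      { "year": 1946, "sightings": 8 },
      { "year": 1947, "sightings": 34 }
    ];
    ```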

    d3.json('data/ufo-sightings.json', function(data) {
      data_graphic({
        title: "UFO Sightings",
        description: "Yearly UFO sightings (1945 to 2010).",
        data: data,
        width: 650,
        height: 150,
        target: '#ufo-sightings',
        x_accessor: 'year',
        y_accessor: 'sightings',
        markers: [{'year': 1964,
                   'label': '"The Creeping Terror" released'}]
      });
    });
    And this is what we end up with. In this example, we’re adding a marker to draw attention to a particular data point. This is optional of course.

    A line chart in MetricsGraphics.js

    A few final remarks

    We follow a real-needs approach to development. Right now, we have mostly implemented features that have been important to us. Having said that, our work is available on GitHub, as are many of our discussions, and we take any and all pull requests and issues seriously.

    There is still a lot of work to be done. We invite you to take the library out for a spin and file bugs! We’ve set up a sandbox that you can use to try things out without having to download anything.

    MetricsGraphics.js v1.1 is scheduled for release on December 1, 2014.

  7. Visually Representing Angular Applications

    This article concerns diagrammatically representing Angular applications. It is a first step, not a fully figured out dissertation about how to visually specify or document Angular apps. And maybe the result of this is that I, with some embarrassment, find out that someone else already has a complete solution.

    My interest in this springs from two ongoing projects:

    1. My day job working on the next generation version of‘s support center agent application and
    2. My night job working on a book, Angular In Depth, for Manning Publications

    1: Large, complex Angular application

    The first involves working on a large, complex Angular application as part of a multi-person front-end team. One of the problems I, and I assume other team members, encounter (hopefully I’m not the only one) is getting familiar enough with different parts of the application so my additions or changes don’t hose it or cause problems down the road.

    With an Angular application, it is sometimes challenging to trace what’s happening where. Directives give you the ability to encapsulate behavior and let you employ that behavior declaratively. That’s great. Until you have nested directives or multiple directives operating in tandem that someone else painstakingly wrote. That person probably had a clear vision of how everything related and worked together. But, when you come to it newly, it can be challenging to trace the pieces and keep them in your head as you begin to add features.

    Wouldn’t it be nice to have a visual representation of complex parts of an Angular application? Something that gives you the lay-of-the-land so you can see at a glance what depends on what.

    2: The book project

    The second item above — the book project — involves trying to write about how Angular works under-the-covers. I think most Angular developers have at one time or another viewed some part of Angular as magical. We’ve also all cursed the documentation, particularly those descriptions that use terms whose descriptions use terms whose descriptions are poorly defined based on an understanding of the first item in the chain.

    There’s nothing wrong with using Angular directives or services as demonstrated in online examples or in the documentation or in the starter applications. But it helps us as developers if we also understand what’s happening behind the scenes and why. Knowing how Angular services are created and managed might not be required to write an Angular application, but the ease of writing and the quality can be, I believe, improved by better understanding those kinds of details.

    Visual representations

    In the course of trying to better understand Angular behind-the-scenes and write about it, I’ve come to rely heavily on visual representations of the key concepts and processes. The visual representations I’ve done aren’t perfect by any means, but just working through how to represent a process in a diagram has a great clarifying effect.

    There’s nothing new about visually representing software concepts. UML, process diagrams, even Business Process Modeling Notation (BPMN) are ways to help visualize classes, concepts, relationships and functionality.

    And while those diagramming techniques are useful, it seems that at least in the Angular world, we’re missing a full-bodied visual language that is well suited to describe, document or specify Angular applications.

    We probably don’t need to reinvent the wheel here — obviously something totally new is not needed — but when I’m tackling a (for me) new area of a complex application, having available a customized visual vocabulary to represent it would help.

    Diagrammatically representing front-end JavaScript development

    I’m working with Angular daily so I’m thinking specifically about how to represent an Angular application, but this may also be an issue within the larger JavaScript community: how to diagrammatically represent front-end JavaScript development in a way that allows us to clearly visualize our models, controllers and views, and the interactions between the DOM and our JavaScript code, including event-driven, async callbacks. In other words, a visual domain specific language (DSL) for client-side JavaScript development.

    I don’t have a complete answer for that, but in self-defense I started working with some diagrams to roughly represent parts of an Angular application. Here’s sort of the sequence I went through to arrive at a first cut:

    1. The first thing I did was write out a detailed description of the problem and what I wanted out of an Angular visual DSL. I also defined some simple abbreviations to use to identify the different types of Angular “objects” (directives, controllers, etc.). Then I dove in and began diagramming.
    2. I identified the area of code I needed to understand better, picked a file and threw it on the diagram. What I wanted to do was to diagram it in such a way that I could look at that one file and document it without simultaneously having to trace everything to which it connected.
    3. When the first item was on the diagram, I went to something on which it depended. For example, starting with a directive this leads to associated views or controllers. I diagrammed the second item and added the relationship.
    4. I kept adding items and relationships including nested directives and their views and controllers.
    5. I continued until the picture made sense and I could see the pieces involved in the task I had to complete.

    Since I was working on a specific ticket, I knew the problem I needed to solve so not all information had to be included in each visual element. The result is rough and way too verbose, but it did accomplish:

    • Showing me the key pieces and how they related, particularly the nested directives.
    • Including useful information on where methods or $scope properties lived.
    • Giving a guide to the directories where each item lives.

    It’s not pretty but here is the result:

    This represents a somewhat complicated part of the code and having the diagram helped in at least four ways:

    • By going through the exercise of creating it, I learned the pieces involved in an orderly way — and I didn’t have to try to retain the entire structure in my head as I went.
    • I got the high-level view I needed.
    • It was very helpful when developing, particularly since the work got interrupted and I had to come back to it a few days later.
    • When the work was done, I added it to our internal wiki to ease future ramp-up in the area.

    I think some next steps might be to define and expand the visual vocabulary by adding things such as:

    • Unique shapes or icons to identify directives, controllers, views, etc.
    • Standard ways to represent the different kinds of relationships, such as ng-include or a view referenced by a directive.
    • Standard ways to represent async actions.
    • Representations of the model.

    As I said in the beginning, this is rough and nowhere near complete, but it did confirm for me the potential value of having a diagramming convention customized for JavaScript development. And in particular, it validated the need for a robust visual DSL to explore, explain, specify and document Angular applications.

  8. interact.js for drag and drop, resizing and multi-touch gestures

    interact.js is a JavaScript module for drag and drop, resizing and multi-touch gestures with inertia and snapping for modern browsers (and also IE8+).


    I started it as part of my GSoC 2012 project for Biographer’s network visualization tool. The tool was a web app which rendered to an SVG canvas and used jQuery UI for drag and drop, selection and resizing. Because jQuery UI has little support for SVG, heavy workarounds had to be used. I needed to make the web app more usable on smartphones and tablets, and the largest chunk of this work was to replace jQuery UI with interact.js, which:

    • is lightweight,
    • works well with SVG,
    • handles multi-touch input,
    • leaves the task of rendering/styling elements to the application and
    • allows the application to supply object dimensions instead of parsing element styles or getting DOMRects.

    What interact.js tries to do is present input data consistently across different browsers and devices and provide convenient ways to pretend that the user did something that they didn’t really do (snapping, inertia, etc.).

    Certain sequences of user input can lead to InteractEvents being fired. If you add event listeners for an event type, that function is given an InteractEvent object which provides pointer coordinates and speed and, in gesture events, scale, distance, angle, etc. The only time interact.js modifies the DOM is to style the cursor; making an element move while a drag happens has to be done from your own event listeners. This way you’re in control of everything that happens.

    Slider demo

    Here’s an example of how you could make a slider with interact.js. You can view and edit the complete HTML, CSS and JS of all the demos in this post on CodePen.

    See the Pen interact.js simple slider by Taye A (@taye) on CodePen.

    JavaScript rundown

    interact('.slider')                   // target the matches of that selector
      .origin('self')                     // (0, 0) will be the element's top-left
      .restrict({drag: 'self'})           // keep the drag within the element
      .inertia(true)                      // start inertial movement if thrown
      .draggable({                        // make the element fire drag events
        max: Infinity                     // allow drags on multiple elements
      })
      .on('dragmove', function (event) {  // call this function on every move
        var sliderWidth = interact.getElementRect(event.target).width,
            value = event.pageX / sliderWidth;

        event.target.style.paddingLeft = (value * 100) + '%';
        event.target.setAttribute('data-value', value.toFixed(2));
      });

    interact.maxInteractions(Infinity);   // Allow multiple interactions
    • interact('.slider') [docs] creates an Interactable object which targets elements that match the '.slider' CSS selector. An HTML or SVG element object could also have been used as the target but using a selector lets you use the same settings for multiple elements.
    • .origin('self') [docs] tells interact.js to modify the reported coordinates so that an event at the top-left corner of the target element would be (0,0).
    • .restrict({drag: 'self'}) [docs] keeps the coordinates within the area of the target element.
    • .inertia(true) [docs] lets the user “throw” the target so that it keeps moving after the pointer is released.
    • Calling .draggable({max: Infinity}) [docs] on the object:
      • allows drag listeners to be called when the user drags from an element that matches the target and
      • allows multiple target elements to be dragged simultaneously
    • .on('dragmove', function (event) {...}) [docs] adds a listener for the dragmove event. Whenever a dragmove event occurs, all listeners for that event type that were added to the target Interactable are called. The listener function here calculates a value from 0 to 1 depending on which point along the width of the slider the drag happened. This value is used to position the handle.
    • interact.maxInteractions(Infinity) [docs] is needed to enable multiple interactions on any target. The default value is 1 for backwards compatibility.
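    The arithmetic in that dragmove listener is pure, so it can be sketched and checked outside the browser. sliderValue is a hypothetical helper name introduced here for illustration; the demo itself works directly on event.pageX and the element's styles:

```javascript
// Hypothetical helper mirroring the listener's arithmetic: map a pointer
// x-coordinate (already relative to the element because of origin('self'))
// to a 0..1 value, a CSS padding percentage and a data-value string.
function sliderValue(pageX, sliderWidth) {
  // restrict({drag: 'self'}) keeps pageX inside the element,
  // but clamp anyway so the helper is safe on its own
  var value = Math.min(Math.max(pageX / sliderWidth, 0), 1);

  return {
    value: value,
    paddingLeft: (value * 100) + '%',
    dataValue: value.toFixed(2)
  };
}
```

    For a 300px-wide slider, a drag at x = 150 yields a value of 0.5, a paddingLeft of '50%' and a data-value of '0.50'.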

    A lot of differences in browser implementations are resolved by interact.js. MouseEvents, TouchEvents and PointerEvents produce identical drag event objects, so this slider works on iOS, Android, Firefox OS and Windows RT as well as on desktop browsers as far back as IE8.

    Rainbow pixel canvas demo

    interact.js is useful for more than moving elements around a page. Here I use it for drawing onto a canvas element.

    See the Pen interact.js pixel rainbow canvas by Taye A (@taye) on CodePen.

    JavaScript rundown

    var pixelSize = 16;

    interact('.rainbow-pixel-canvas')
      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })
      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })
      // draw colored squares on move
      .on('dragmove', function (event) {
        var context = event.target.getContext('2d'),
            // calculate the angle of the drag direction
            dragAngle = 180 * Math.atan2(event.dx, event.dy) / Math.PI;

        // set color based on drag angle and speed
        context.fillStyle = 'hsl(' + dragAngle + ', 86%, '
                            + (30 + Math.min(event.speed / 1000, 1) * 50) + '%)';

        // draw squares
        context.fillRect(event.pageX - pixelSize / 2, event.pageY - pixelSize / 2,
                         pixelSize, pixelSize);
      })
      // clear the canvas on doubletap
      .on('doubletap', function (event) {
        var context = event.target.getContext('2d');

        context.clearRect(0, 0, context.canvas.width, context.canvas.height);
      });

    function resizeCanvases () {
      [].forEach.call(document.querySelectorAll('.rainbow-pixel-canvas'),
                      function (canvas) {
        canvas.width = document.body.clientWidth;
        canvas.height = window.innerHeight * 0.7;
      });
    }

    // interact.js can also add DOM event listeners
    interact(document).on('DOMContentLoaded', resizeCanvases);
    interact(window).on('resize', resizeCanvases);

    Snapping is used to modify the pointer coordinates so that they are always aligned to a grid.

      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })
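    Conceptually, grid snapping moves each reported coordinate to the nearest grid corner. A minimal sketch of that idea (snapToGrid is an illustrative helper, not part of the interact.js API):

```javascript
var pixelSize = 16;

// Move a point to the nearest corner of a grid with the given cell sizes
function snapToGrid(x, y, grid) {
  return {
    x: Math.round(x / grid.x) * grid.x,
    y: Math.round(y / grid.y) * grid.y
  };
}
```

    For example, snapToGrid(23, 41, { x: pixelSize, y: pixelSize }) yields { x: 16, y: 48 }.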

    Like in the previous demo, multiple drags are enabled but an extra option, maxPerElement, needs to be changed to allow multiple drags on the same element.

      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })

    The movement angle is calculated with Math.atan2(event.dx, event.dy) and that’s used to set the hue of the paint color. event.speed is used to adjust the lightness.
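    That color computation is easy to factor out and verify on its own (dragColor is an illustrative helper; the demo does this inline in the listener):

```javascript
// Hue comes from the drag direction, lightness from the drag speed
// (capped so it stays between 30% and 80%)
function dragColor(dx, dy, speed) {
  var dragAngle = 180 * Math.atan2(dx, dy) / Math.PI;
  var lightness = 30 + Math.min(speed / 1000, 1) * 50;

  return 'hsl(' + dragAngle + ', 86%, ' + lightness + '%)';
}
```

    A slow drag straight down (dx = 0, dy = 1) gives 'hsl(0, 86%, 30%)'; the same direction at high speed gives the lighter 'hsl(0, 86%, 80%)'.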

    interact.js has tap and double tap events which are equivalent to click and double click but without the delay on mobile devices. Also, unlike regular click events, a tap isn’t fired if the mouse is moved before being released. (I’m working on adding more events like these).

      // clear the canvas on doubletap
      .on('doubletap', function (event) {

    It can also listen for regular DOM events. In the above demo it’s used to listen for window resize and document DOMContentLoaded.

      interact(document).on('DOMContentLoaded', resizeCanvases);
      interact(window).on('resize', resizeCanvases);

    Similarly to jQuery, it can also be used for delegated events. For example:

    interact('input', { context: document.body })
      .on('keypress', function (event) {
        // handle key presses on input elements within document.body
      });

    Supplying element dimensions

    To get element dimensions interact.js normally uses:

    • Element#getBoundingClientRect() for SVGElements and
    • Element#getClientRects()[0] for HTMLElements (because it includes the element’s borders)

    and adds page scroll. This is done when checking which action to perform on an element, checking for drops, calculating the 'self' origin and in a few other places. If your application keeps the dimensions of the elements being interacted with, then it makes sense to use the application’s data instead of getting the DOMRect. To allow this, Interactables have a rectChecker() [docs] method to change how element dimensions are retrieved. The method takes a function as an argument. When interact.js needs an element’s dimensions, the element is passed to that function and the return value is used.

    Graphic Editor Demo

    The “SVG editor” below has a Rectangle class to represent <rect class="edit-rectangle"/> elements in the DOM. Each rectangle object has dimensions, the element that the user sees and a draw method.

    See the Pen Interactable#rectChecker demo by Taye A (@taye) on CodePen.

    JavaScript rundown

    var svgCanvas = document.querySelector('svg'),
        svgNS = 'http://www.w3.org/2000/svg',
        rectangles = [];

    function Rectangle (x, y, w, h, svgCanvas) {
      this.x = x;
      this.y = y;
      this.w = w;
      this.h = h;
      this.stroke = 5;
      this.el = document.createElementNS(svgNS, 'rect');

      this.el.setAttribute('data-index', rectangles.length);
      this.el.setAttribute('class', 'edit-rectangle');
      rectangles.push(this);

      this.draw();
      svgCanvas.appendChild(this.el);
    }

    Rectangle.prototype.draw = function () {
      this.el.setAttribute('x', this.x + this.stroke / 2);
      this.el.setAttribute('y', this.y + this.stroke / 2);
      this.el.setAttribute('width' , this.w - this.stroke);
      this.el.setAttribute('height', this.h - this.stroke);
      this.el.setAttribute('stroke-width', this.stroke);
    };

    interact('.edit-rectangle')
      // change how interact gets the
      // dimensions of '.edit-rectangle' elements
      .rectChecker(function (element) {
        // find the Rectangle object that the element belongs to
        var rectangle = rectangles[element.getAttribute('data-index')];

        // return a suitable object for interact.js
        return {
          left  : rectangle.x,
          top   : rectangle.y,
          right : rectangle.x + rectangle.w,
          bottom: rectangle.y + rectangle.h
        };
      });

    Whenever interact.js needs to get the dimensions of one of the '.edit-rectangle' elements, it calls the rectChecker function that was specified. The function finds the Rectangle object using the element argument then creates and returns an appropriate object with left, right, top and bottom properties.

    This object is used for restricting when the restrict elementRect option is set. In the slider demo from earlier, restriction used only the pointer coordinates. Here, restriction will try to prevent the element from being dragged out of the specified area.
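    Roughly, restricting with elementRect: { top: 0, left: 0, bottom: 1, right: 1 } means the element's whole rectangle has to stay inside the restriction area. This sketch shows the idea as a plain clamp of the proposed top-left corner (it is not interact.js's internal implementation):

```javascript
// Clamp a proposed top-left corner so that an element of the given size
// stays fully inside the container rectangle
function clampElement(x, y, width, height, container) {
  return {
    x: Math.min(Math.max(x, container.left), container.right - width),
    y: Math.min(Math.max(y, container.top), container.bottom - height)
  };
}
```

    A 20×20 element proposed at (-5, 120) inside a 100×100 container gets clamped to (0, 80); a position already inside the container is unchanged.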

        // don't jump to the resume location
        zeroResumeDelta: true
        // restrict to a parent element that matches this CSS selector
        drag: 'svg',
        // only restrict before ending the drag
        endOnly: true,
        // consider the element's dimensions when restricting
        elementRect: { top: 0, left: 0, bottom: 1, right: 1 }

    The rectangles are made draggable and resizable.

    interact('.edit-rectangle')
      .draggable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
          rectangle.x += event.dx;
          rectangle.y += event.dy;
          rectangle.draw();
        }
      })
      .resizable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
          rectangle.w = Math.max(rectangle.w + event.dx, 10);
          rectangle.h = Math.max(rectangle.h + event.dy, 10);
          rectangle.draw();
        }
      });

    Development and contributions

    I hope this article gives a good overview of how to use interact.js and the types of applications I think it would be useful for. If not, there are more demos on the project homepage, and you can throw questions or issues at Twitter or GitHub. I’d really like to make a comprehensive set of examples and documentation, but I’ve been too busy with fixes and improvements. (I’ve also been too lazy :-P).

    Since the 1.0.0 release, user comments and contributions have led to loads of bug fixes and many new features.

    So please use it, share it, break it and help to make it better!

  9. jsDelivr and its open-source load balancing algorithm

    This is a guest post by Dmitriy Akulov of jsDelivr.

    Recently I wrote about jsDelivr and what makes it unique, describing in detail the features that we offer and how our system works. Since then we have improved a lot of things and released even more features. But the biggest one was open-sourcing our load balancing algorithm.

    As you know from the previous blog post, we use Cedexis to do our load balancing. In short, we collect millions of RUM (Real User Metrics) data points from all over the world. When a user visits a website of a Cedexis partner (or one of ours), JavaScript is executed in the background that runs performance checks against our core CDNs, MaxCDN and CloudFlare, and sends this data back to Cedexis. We can then use it to do load balancing based on real-time performance information from real users and ISPs. This is important as it allows us to mitigate outages that CDNs can experience in very localized areas, such as a single country or even a single ISP, rather than worldwide.

    Open-sourcing the load balancing code

    Now our load balancing code is open to everybody to review, test and even send their own Pull Requests with improvements and modifications.

    Until recently the code was actually written in PHP, but due to performance issues and other problems that arose from that, it was decided to switch to JavaScript. Now the DNS application is written completely in JavaScript, and I will try to explain exactly how it works.

    This is an application that runs on a DNS level and integrates with Cedexis’ API. Every DNS request is processed by the following code, and based on all the variables it returns a CNAME that the client can use to get the requested asset.

    Declaring providers

    The first step is to declare our providers:

    providers: {
        'cloudflare': '',
        'maxcdn': '',
        // ...
    },

    This array contains all the aliases of our providers and the hostnames we can return if a provider is chosen. We actually use a couple of custom servers to improve performance in locations that the CDNs lack, but we are currently in the process of removing all of them in favor of more enterprise CDNs that wish to sponsor us.

    Before I explain the next array I want to skip to line 40:

    defaultProviders: [ 'maxcdn', 'cloudflare' ],

    Because our CDN providers get so many more RUM tests than our custom servers, their data, and in turn the load balancing results, are much more reliable. This is why by default only MaxCDN and CloudFlare are considered for any user request. And it’s actually the main reason we want to sunset our custom servers.

    Country mapping

    Now that you know that, here comes our next array:

    countryMapping: {
        'CN': [ 'exvm-sg', 'cloudflare' ],
        'HK': [ 'exvm-sg', 'cloudflare' ],
        'ID': [ 'exvm-sg', 'cloudflare' ],
        'IT': [ 'prome-it', 'maxcdn', 'cloudflare' ],
        'IN': [ 'exvm-sg', 'cloudflare' ],
        'KR': [ 'exvm-sg', 'cloudflare' ],
        'MY': [ 'exvm-sg', 'cloudflare' ],
        'SG': [ 'exvm-sg', 'cloudflare' ],
        'TH': [ 'exvm-sg', 'cloudflare' ],
        'JP': [ 'exvm-sg', 'cloudflare', 'maxcdn' ],
        'UA': [ 'leap-ua', 'maxcdn', 'cloudflare' ],
        'RU': [ 'leap-ua', 'maxcdn' ],
        'VN': [ 'exvm-sg', 'cloudflare' ],
        'PT': [ 'leap-pt', 'maxcdn', 'cloudflare' ],
        'MA': [ 'leap-pt', 'prome-it', 'maxcdn', 'cloudflare' ]
    },

    This array contains country mappings that override the “defaultProviders” parameter. This is where the custom servers currently come into use. For some countries we know 100% that our custom servers can be much faster than our CDN providers, so we specify them manually. Since these locations are few, we only need a handful of rules.

    ASN mappings

    asnMapping: {
        '36114': [ 'maxcdn' ], // Las Vegas 2
        '36351': [ 'maxcdn' ], // San Jose + Washington
        '42473': [ 'prome-it' ], // Milan
        '32489': [ 'cloudflare' ], // Canada
        // ...
    },

    ASN mappings contain overrides per ASN. Currently we use them to improve the results of Pingdom tests. Because we rely on RUM results to do load balancing, we never get any performance tests for ASNs used by hosting providers, such as the companies from which Pingdom rents its servers. So the code is forced to fail over to country-level performance data to choose the best provider for Pingdom and any other synthetic test or server. This data is not always reliable, because not all ISPs have the same performance with a CDN provider as the fastest CDN provider country-wide. So we tweak some ASNs to work better with jsDelivr.
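    Putting the lookup tables together, candidate selection for a request can be sketched as a simple precedence chain. My reading is that ASN overrides take priority over country overrides, which take priority over the defaults; the exact precedence lives in the real application code, and the mapping entries below are abbreviated samples from the article:

```javascript
var defaultProviders = [ 'maxcdn', 'cloudflare' ];

var countryMapping = {
  'SG': [ 'exvm-sg', 'cloudflare' ],
  'IT': [ 'prome-it', 'maxcdn', 'cloudflare' ]
};

var asnMapping = {
  '36114': [ 'maxcdn' ] // Las Vegas 2
};

// Return the list of candidate providers for a request,
// most specific override first
function candidateProviders(asn, country) {
  return asnMapping[asn] || countryMapping[country] || defaultProviders;
}
```

    A request from ASN 36114 only considers MaxCDN, a Singapore request without an ASN override gets the Singapore list, and everything else falls back to the defaults.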

    More settings

    • lastResortProvider sets the CDN provider we want to use in case the application fails to choose one itself. This should be very rare.
    • defaultTtl: 20 is the TTL for our DNS record. We ran some tests and decided this was the optimal value. In the worst-case scenario, the maximum downtime jsDelivr can have is 20 seconds. Plus, our DNS and our CDN are fast enough to compensate for the extra DNS latency every 20 seconds without any impact on performance.
    • availabilityThresholds is a percentage value that sets the uptime below which a provider should be considered down. This is based on RUM data. Again, because of some small issues with synthetic tests, we had to lower the Pingdom threshold. The Pingdom value does not impact anyone else.
    • sonarThreshold — Sonar is a secondary uptime monitor we use to ensure the uptime of our providers. It runs every 60 seconds and checks all of our providers, including their SSL certificates. If something is wrong, our application will pick up the change in uptime, and if it drops below this threshold the provider will be considered down.
    • And finally, minValidRtt is there to filter out all invalid RUM tests.
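    Collected in one place, the settings described above look roughly like this. Only defaultTtl: 20 comes from the article; every other value is an illustrative placeholder, not the production configuration:

```javascript
var settings = {
  lastResortProvider: 'maxcdn',  // fallback if no candidate survives (placeholder)
  defaultTtl: 20,                // DNS record TTL in seconds
  availabilityThresholds: {      // uptime (%) below which a provider counts as down
    normal: 90,                  // placeholder value
    pingdom: 50                  // lowered separately for Pingdom ASNs (placeholder)
  },
  sonarThreshold: 95,            // secondary (Sonar) uptime threshold in % (placeholder)
  minValidRtt: 5                 // RUM samples below this RTT (ms) are discarded (placeholder)
};
```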

    The initialization process

    Next our app starts the initialization process. The configuration is validated, and any provider that is misconfigured or whose uptime does not meet our criteria is removed from the potential candidates for this request.

    Next we create a reasons array for debugging purposes and apply our override settings. Here we use the Cedexis API to get the latest live data for Sonar uptime, RUM uptime and HTTP performance.

    sonar = request.getData('sonar');
    candidates = filterObject(request.getProbe('avail'), filterCandidates);
    //console.log('candidates: ' + JSON.stringify(candidates));
    candidates = joinObjects(candidates, request.getProbe('http_rtt'), 'http_rtt');
    //console.log('candidates (with rtt): ' + JSON.stringify(candidates));
    candidateAliases = Object.keys(candidates);

    For uptime, we also filter out bad providers that don’t meet our uptime criteria by calling the filterCandidates function.

    function filterCandidates(candidate, alias) {
        return (-1 < subpopulation.indexOf(alias))
            && (candidate.avail !== undefined)
            && (candidate.avail >= availabilityThreshold)
            && (sonar[alias] !== undefined)
            && (parseFloat(sonar[alias]) >= settings.sonarThreshold);
    }

    The actual decision making is performed by a rather small piece of code:

    if (1 === candidateAliases.length) {
        decisionAlias = candidateAliases[0];
        decisionTtl = decisionTtl || settings.defaultTtl;
    } else if (0 === candidateAliases.length) {
        decisionAlias = settings.lastResortProvider;
        decisionTtl = decisionTtl || settings.defaultTtl;
    } else {
        candidates = filterObject(candidates, filterInvalidRtt);
        //console.log('candidates (rtt filtered): ' + JSON.stringify(candidates));
        candidateAliases = Object.keys(candidates);

        if (!candidateAliases.length) {
            decisionAlias = settings.lastResortProvider;
            decisionTtl = decisionTtl || settings.defaultTtl;
        } else {
            decisionAlias = getLowest(candidates, 'http_rtt');
            decisionTtl = decisionTtl || settings.defaultTtl;
        }
    }

    response.respond(decisionAlias, settings.providers[decisionAlias]);

    If we only have one provider left after our checks, we simply select that provider and output the CNAME. If we have zero providers left, then the lastResortProvider is used. Otherwise, if everything is OK and we have more than one provider left, we do more checks.

    Once we are left with providers that are currently online and have no issues with their performance data, we sort them based on RUM HTTP performance and push the CNAME out for the user’s browser to use.
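    The getLowest helper that makes the final pick can be sketched as a minimum search over the remaining candidates. This is a sketch matching its use in the decision code above, not the exact implementation:

```javascript
// Return the alias whose measurement (e.g. 'http_rtt') is smallest
function getLowest(candidates, property) {
  var lowestAlias = null,
      lowestValue = Infinity;

  Object.keys(candidates).forEach(function (alias) {
    var value = candidates[alias][property];

    if (value < lowestValue) {
      lowestValue = value;
      lowestAlias = alias;
    }
  });

  return lowestAlias;
}
```

    For example, getLowest({ maxcdn: { http_rtt: 80 }, cloudflare: { http_rtt: 60 } }, 'http_rtt') picks 'cloudflare'.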

    And that’s it. Most of the other stuff, like the fallback to country-level data, is done automatically in the backend, and we only get the actual data we can use in our application.


    I hope you found it interesting and learned more about what you should be considering when doing load balancing especially based on RUM data.

    Check out jsDelivr and feel free to use it in your projects. If you are interested in helping, we are also looking for Node.js developers and designers to help us out.

    We are also looking for company sponsors to help us grow even faster.

  10. An easier way of using polyfills

    Polyfills are a fantastic way to enable the use of modern code even while supporting legacy browsers, but currently using polyfills is too hard, so at the FT we’ve built a new service to make it easier. We’d like to invite you to use it, and help us improve it.


    More pictures, they said. So here’s a unicorn, which is basically a horse with a polyfill.

    The challenge

    Here are some of the issues we are trying to solve:

    • Developers do not necessarily know which features need to be polyfilled. You load your site in some old version of IE beloved by a frustratingly large number of your users, see that the site doesn’t work, and have to debug it to figure out which feature is causing the problem. Sometimes the culprit is obvious, but often not, especially when legacy browsers also lack good developer tools.
    • There are often multiple polyfills available for each feature. It can be hard to know which one most faithfully emulates the missing feature.
    • Some polyfills come as a big bundle with lots of other polyfills that you don’t need, to provide comprehensive coverage of a large feature set, such as ES6. It should not be necessary to ship all of this code to the browser to fix something very simple.
    • Newer browsers don’t need the polyfill, but typically the polyfill is served to all browsers. This reduces performance in modern browsers in order to improve compatibility with legacy ones. We don’t want to make that compromise. We’d rather serve polyfills only to browsers that lack a native implementation of the feature.

    Our solution: polyfills as a service

    To solve these problems, we created the polyfill service. It’s a similar idea to going to an optometrist, having your eyes tested, and getting a pair of glasses perfectly designed to correct your particular vision problem. We are doing the same for browsers. Here’s how it works:

    1. Developers insert a script tag into their page, which loads the polyfill service endpoint.
    2. The service analyses the browser’s user-agent header and a list of requested features (or uses a default list of everything polyfillable), and builds a list of polyfills that are required for this browser.
    3. The polyfills are ordered using a graph sort to place them in the right dependency order.
    4. The bundle is minified and served through a CDN (for which we’re very grateful to Fastly for their support).
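    The “graph sort” in step 3 is a topological sort: each polyfill must appear after the polyfills it depends on. A minimal depth-first sketch of the idea, assuming an acyclic dependency graph (the dependency data below is invented for illustration, not the service's real metadata):

```javascript
// Order names so that every name appears after its dependencies
function sortByDependencies(deps) {
  var ordered = [];
  var visited = {};

  function visit(name) {
    if (visited[name]) { return; }
    visited[name] = true;

    (deps[name] || []).forEach(visit);
    ordered.push(name); // pushed only once its dependencies are in
  }

  Object.keys(deps).forEach(visit);
  return ordered;
}
```

    For example, sortByDependencies({ 'Array.from': ['Symbol.iterator'], 'Symbol.iterator': [] }) places 'Symbol.iterator' before 'Array.from'.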

    Do we really need this solution? Well, consider this: Modernizr is a big grab bag of feature detects, and all sensible use cases benefit from a custom build, but a large proportion of Modernizr users just use the default build, often as part of html5boilerplate. Why include Modernizr if you aren’t using its feature detects? Maybe you misunderstand the purpose of the library and just think that Modernizr “fixes stuff”? I have to admit I did, when I first heard the name, and I was mildly disappointed to find that rather than doing any actual modernising, Modernizr just defines modernness.

    The polyfill service, on the other hand, does fix stuff. There’s really nothing wrong with not wanting to spend time gaining intimate knowledge of all the foibles of legacy browsers. Let someone figure it out once, and then we can all benefit from it without needing or wanting to understand the details.

    How to use it

    The simplest use case is:

    <script src="//" async defer></script>

    This includes our default polyfill set. The default set is a manually curated list of features that we think are most essential to modern web development, and where the polyfills are reasonably small and highly accurate. If you want to specify which features you want to polyfill though, go right ahead:

    <!-- Just the Array.from polyfill -->
    <script src="//" async defer></script>
    <!-- The default set, plus the geolocation polyfill -->
    <script src="//,Navigator.prototype.geolocation" async defer></script>

    If it’s important that you have loaded the polyfills before parsing your own code, you can remove the async and defer attributes, or use a script loader (one that doesn’t require any polyfills!).

    Testing and documenting feature support

    This table shows the polyfill service’s effect for a number of key web technologies and a range of popular browsers:

    Polyfill service support grid

    The full list of features we support is shown on our feature matrix. To build this grid we use Sauce Labs’ test automation platform, which runs each polyfill through a barrage of tests in each browser, and documents the results.

    So, er, user-agent sniffing? Really?

    Yes. There are several reasons why UA analysis wins out over feature detection for us:

    • In some cases, we have multiple polyfills for the same feature, because some browsers offer a non-compliant implementation that just needs to be bashed into shape, while others lack any implementation at all. With UA detection you can choose to serve the right variant of the polyfill.
    • With UA detection, the first HTTP request can respond directly with polyfill code. If we used feature detection, the first request would serve feature-detect code, and then a second one would be needed to fetch specific polyfills.

    Almost all websites with significant scale do UA detection, and the stigma attached to it isn’t necessarily deserved. Still, it’s easy to write bad UA detection rules and hard to write good ones. And we’re not ruling out adding a way of using the service via feature detects (in fact, there’s an issue in our tracker for it).

    A service for everyone

    The service part of the app is maintained by the FT, and we are working on expanding and improving the tools, documentation, testing and service features all the time. The source is freely available on GitHub so you can easily host it yourself, but we also host an instance of the service which you can use for free, and our friends at Fastly are providing free CDN distribution and SSL.

    We’ve made a platform. We need the community’s help to populate it. We already serve some of the best polyfills from Jonathan Neal, Mathias Bynens and others, but we’d love to be more comprehensive. Bring your polyfills, improve our tests, and make this a resource that can help move the web forward!