JavaScript Articles

  1. ES6 In Depth: Iterators and the for-of loop

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    How do you loop over the elements of an array? When JavaScript was introduced, twenty years ago, you would do it like this:

    for (var index = 0; index < myArray.length; index++) {
      console.log(myArray[index]);
    }

    Since ES5, you can use the built-in forEach method:

    myArray.forEach(function (value) {
      console.log(value);
    });

    This is a little shorter, but there is one minor drawback: you can’t break out of this loop using a break statement or return from the enclosing function using a return statement.

    It sure would be nice if there were just a for-loop syntax that looped over array elements.

    How about a for-in loop?

    for (var index in myArray) {    // don't actually do this
      console.log(myArray[index]);
    }

    This is a bad idea for several reasons:

    • The values assigned to index in this code are the strings "0", "1", "2" and so on, not actual numbers. Since you probably don’t want string arithmetic ("2" + 1 == "21"), this is inconvenient at best.
    • The loop body will execute not only for array elements, but also for any other expando properties someone may have added. For example, if your array has an enumerable property myArray.name, then this loop will execute one extra time, with index == "name". Even properties on the array’s prototype chain can be visited.
    • Most astonishing of all, in some circumstances, this code can loop over the array elements in an arbitrary order.

    In short, for-in was designed to work on plain old Objects with string keys. For Arrays, it’s not so great.
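    You can see the second pitfall directly in a small sketch (the property name here is just for illustration):

```javascript
var myArray = ["a", "b", "c"];
myArray.name = "my favorite array";  // an expando property

var keys = [];
for (var index in myArray) {   // don't actually do this
  keys.push(index);
}
// keys is now ["0", "1", "2", "name"] -- all strings, plus the expando
```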

    The mighty for-of loop

    Remember last week I promised that ES6 would not break the JS code you’ve already written. Well, millions of Web sites depend on the behavior of for-in—yes, even its behavior on arrays. So there was never any question of “fixing” for-in to be more helpful when used with arrays. The only way for ES6 to improve matters was to add some kind of new loop syntax.

    And here it is:

    for (var value of myArray) {
      console.log(value);
    }

    Hmm. After all that build-up, it doesn’t seem all that impressive, does it? Well, we’ll see whether for-of has any neat tricks up its sleeve. For now, just note that:

    • this is the most concise, direct syntax yet for looping through array elements
    • it avoids all the pitfalls of for-in
    • unlike forEach(), it works with break, continue, and return
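    For example, stopping at the first match is straightforward with break (a small sketch; the array and the length test are just for illustration):

```javascript
var words = ["ant", "bison", "camel", "duck"];

// find the first word longer than four letters, then stop looping
var found = null;
for (var word of words) {
  if (word.length > 4) {
    found = word;
    break;  // legal here; a forEach callback has no way to do this
  }
}
// found is now "bison"
```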

    The for-in loop is for looping over object properties.

    The for-of loop is for looping over data—like the values in an array.

    But that’s not all.

    Other collections support for-of too

    for-of is not just for arrays. It also works on most array-like objects, like DOM NodeLists.

    It also works on strings, treating the string as a sequence of Unicode characters:

    for (var chr of "😺😲") {
      console.log(chr);
    }

    It also works on Map and Set objects.

    Oh, I’m sorry. You’ve never heard of Map and Set objects? Well, they are new in ES6. We’ll do a whole post about them at some point. If you’ve worked with maps and sets in other languages, there won’t be any big surprises.

    For example, a Set object is good for eliminating duplicates:

    // make a set from an array of words
    var uniqueWords = new Set(words);

    Once you’ve got a Set, maybe you’d like to loop over its contents. Easy:

    for (var word of uniqueWords) {
      console.log(word);
    }

    A Map is slightly different: the data inside it is made of key-value pairs, so you’ll want to use destructuring to unpack the key and value into two separate variables:

    for (var [key, value] of phoneBookMap) {
      console.log(key + "'s phone number is: " + value);
    }

    Destructuring is yet another new ES6 feature and a great topic for a future blog post. I should write these down.

    By now, you get the picture: JS already has quite a few different collection classes, and even more are on the way. for-of is designed to be the workhorse loop statement you use with all of them.

    for-of does not work with plain old Objects, but if you want to iterate over an object’s properties you can either use for-in (that’s what it’s for) or the built-in Object.keys():

    // dump an object's own enumerable properties to the console
    for (var key of Object.keys(someObject)) {
      console.log(key + ": " + someObject[key]);
    }

    Under the hood

    “Good artists copy, great artists steal.” —Pablo Picasso

    A running theme in ES6 is that the new features being added to the language didn’t come out of nowhere. Most have been tried and proven useful in other languages.

    The for-of loop, for example, resembles similar loop statements in C++, Java, C#, and Python. Like them, it works with several different data structures provided by the language and its standard library. But it’s also an extension point in the language.

    Like the for/foreach statements in those other languages, for-of works entirely in terms of method calls. What Arrays, Maps, Sets, and the other objects we talked about all have in common is that they have an iterator method.

    And there’s another kind of object that can have an iterator method too: any object you want.

    Just as you can add a myObject.toString() method to any object and suddenly JS knows how to convert that object to a string, you can add the myObject[Symbol.iterator]() method to any object and suddenly JS will know how to loop over that object.
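    For example, here is a hypothetical range object made iterable by hand (the names range, from, and to are just for illustration):

```javascript
var range = {
  from: 1,
  to: 3,
  [Symbol.iterator]: function () {
    var current = this.from, last = this.to;
    // the iterator object: anything with a .next() method
    return {
      next: function () {
        return current <= last
          ? {done: false, value: current++}
          : {done: true, value: undefined};
      }
    };
  }
};

var values = [];
for (var n of range) {
  values.push(n);
}
// values is now [1, 2, 3]
```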

    For example, suppose you’re using jQuery, and although you’re very fond of .each(), you would like jQuery objects to work with for-of as well. Here’s how to do that:

    // Since jQuery objects are array-like,
    // give them the same iterator method Arrays have
    jQuery.prototype[Symbol.iterator] =
      Array.prototype[Symbol.iterator];

    OK, I know what you’re thinking. That [Symbol.iterator] syntax seems weird. What is going on there? It has to do with the method’s name. The standard committee could have just called this method .iterator(), but then, your existing code might already have some objects with .iterator() methods, and that could get pretty confusing. So the standard uses a symbol, rather than a string, as the name of this method.

    Symbols are new in ES6, and we’ll tell you all about them in—you guessed it—a future blog post. For now, all you need to know is that the standard can define a brand-new symbol, like Symbol.iterator, and it’s guaranteed not to conflict with any existing code. The trade-off is that the syntax is a little weird. But it’s a small price to pay for this versatile new feature and excellent backward compatibility.
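    You can see the no-conflict guarantee directly: every call to Symbol() produces a brand-new value, even when the descriptions are identical.

```javascript
var a = Symbol("iterator");
var b = Symbol("iterator");
console.log(a === b);   // false -- each symbol is unique
console.log(typeof a);  // "symbol"
```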

    An object that has a [Symbol.iterator]() method is called iterable. In coming weeks, we’ll see that the concept of iterable objects is used throughout the language, not only in for-of but in the Map and Set constructors, destructuring assignment, and the new spread operator.
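    For instance, because strings are iterable, they already work with the spread operator and the Set constructor:

```javascript
// spread any iterable into an array -- a string works:
var letters = [..."hello"];
console.log(letters);      // ["h", "e", "l", "l", "o"]

// the Set constructor consumes any iterable:
var unique = new Set("hello");
console.log(unique.size);  // 4 -- the duplicate "l" only counts once
```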

    Iterator objects

    Now, there is a chance you will never have to implement an iterator object of your own from scratch. We’ll see why next week. But for completeness, let’s look at what an iterator object looks like. (If you skip this whole section, you’ll mainly be missing crunchy technical details.)

    A for-of loop starts by calling the [Symbol.iterator]() method on the collection. This returns a new iterator object. An iterator object can be any object with a .next() method; the for-of loop will call this method repeatedly, once each time through the loop. For example, here’s the simplest iterator object I can think of:

    var zeroesForeverIterator = {
      [Symbol.iterator]: function () {
        return this;
      },
      next: function () {
        return {done: false, value: 0};
      }
    };
    Every time this .next() method is called, it returns the same result, telling the for-of loop (a) we’re not done iterating yet; and (b) the next value is 0. This means that for (value of zeroesForeverIterator) {} will be an infinite loop. Of course, a typical iterator will not be quite this trivial.

    This iterator design, with its .done and .value properties, is superficially different from how iterators work in other languages. In Java, iterators have separate .hasNext() and .next() methods. In Python, they have a single .next() method that throws StopIteration when there are no more values. But all three designs are fundamentally returning the same information.

    An iterator object can also implement optional .return() and .throw(exc) methods. The for-of loop calls .return() if the loop exits prematurely, due to an exception or a break or return statement. The iterator can implement .return() if it needs to do some cleanup or free up resources it was using. Most iterator objects won’t need to implement it. .throw(exc) is even more of a special case: for-of never calls it at all. But we’ll hear more about it next week.
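    A quick sketch shows when .return() fires (this iterator is hypothetical and exists only to record the call):

```javascript
var cleanedUp = false;
var numbers = {
  [Symbol.iterator]: function () {
    var i = 0;
    return {
      next: function () {
        return {done: false, value: i++};   // an infinite sequence
      },
      return: function () {
        cleanedUp = true;                   // do cleanup here
        return {done: true};
      }
    };
  }
};

for (var n of numbers) {
  if (n >= 2) break;   // exiting early makes for-of call .return()
}
// cleanedUp is now true
```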

    Now that we have all the details, we can take a simple for-of loop and rewrite it in terms of the underlying method calls.

    First the for-of loop:

    for (VAR of ITERABLE) {
      STATEMENTS
    }

    Here is a rough equivalent, using the underlying methods and a few temporary variables:

    var $iterator = ITERABLE[Symbol.iterator]();
    var $result = $iterator.next();
    while (!$result.done) {
      VAR = $result.value;
      STATEMENTS
      $result = $iterator.next();
    }

    This code doesn’t show how .return() is handled. We could add that, but I think it would obscure what’s going on rather than illuminate it. for-of is easy to use, but there is a lot going on behind the scenes.

    When can I start using this?

    The for-of loop is supported in all current Firefox and Chrome releases. It also works in Microsoft’s Spartan browser, but not in shipping versions of IE. If you’d like to use this new syntax on the web, but you need to support IE and Safari, you can use a compiler like Babel or Google’s Traceur to translate your ES6 code to Web-friendly ES5.

    On the server, you don’t need a compiler—you can start using for-of in io.js and Node today.

    {done: true}


    Well, we’re done for today, but we’re still not done with the for-of loop.

    There is one more new kind of object in ES6 that works beautifully with for-of. I didn’t mention it because it’s the topic of next week’s post. I think this new feature is the most magical thing in ES6. If you haven’t already encountered it in languages like Python and C#, you’ll probably find it mind-boggling at first. But it’s the easiest way to write an iterator, it’s useful in refactoring, and it might just change the way we write asynchronous code, both in the browser and on the server. So join us next week as we look at ES6 generators in depth.

  2. Firefox OS, Animations & the Dark Cubic-Bezier of the Soul

    I’ve been using Firefox OS daily for a couple of years now (wow, time flies!). While performance has steadily improved with efforts like Project Silk, I’ve often noticed delays in the user interface. I assumed the delays were because the hardware was well below the “flagship” hardware I’ve become accustomed to with Android and iOS devices.

    Last year, I built Firefox OS for a Nexus 4 and started using that as my daily phone. Quickly I realized that even with better hardware, I sometimes had to wait on Firefox OS for basic interactions, even when the task wasn’t computationally intensive. I moved on to a Nexus 5 and then a Sony Z3 Compact, both with better specs than the Nexus 4, and experienced the same thing.

    Time passed. Frustration grew. Whispers of a nameless fear…

    Running the numbers

    While reading Ralph Thomas’s post about creating animations based on physical models, I wondered about the implementation of animations in Firefox OS, and how that might be involved in this problem. I performed an audit of the number of instances of different animations, grouped by their duration. I removed progress indicators and things like the boot shutdown animation. Here are the animation and transition durations in Firefox OS, grouped by duration, for transitional interactions like scaling, opening, closing and sliding:

    • 0.1s: 15
    • 0.2s: 57
    • 0.3s: 79
    • 0.4s: 40
    • 0.5s: 78
    • 0.6s: 8

    A couple of things stand out. First, we have a pretty wide distribution of animation durations. Second, the vast majority of the animations are 300ms or longer!

    In fact, in more than 80 animations we are making the user wait more than half a second. These slow animations are dragging us down, resulting in a poorer overall experience of Firefox OS.

    How did we get here?

    The Firefox OS UX and interaction designers didn’t huddle in a room and design each interaction to be intentionally slow. The engineers who implemented these animations didn’t ever think to themselves “this feels really responsive… let’s make it slower!”

    My theory is that interactions like these don’t feel slow while you’re designing and implementing them, because you’re working with a single interaction at a time. When designing and developing an animation, I look for fluidity of motion, the aesthetics of that single action and how the visual impact enhances the task at hand, and then I iterate on duration and effects until it feels right.

    We do have guidelines for responsiveness and user-perceived performance in Firefox OS, written up by Gordon Brander. However, those guidelines don’t cover the sub-second period between the initial perception of cause and effect and the next actionable state of the user interface.


    Users have an entirely different experience than we do as developers and designers. Users make their way through our animations while hurriedly sending a text message, trying to capture that perfect moment on camera, entering their username and password, or arduously uploading a bunch of images one at a time. People are trying to get from point A to point B. They want to complete a task… well, actually not just one: Smartphone users are trying to complete 221 tasks every day, according to a study in the UK last October by Tecmark. All those animations add up! I assert that the aggregate of those 203 animations in Gaia that are 300ms and longer contributes to the frustrating feeling of slowness I was experiencing before digging into this.

    Making it feel fast

    So I tested this theory, by changing all animation durations in Gaia to 200ms, as a starting point. The result? Firefox OS feels far more responsive. Moving through tasks and navigating around the OS felt quick but not abrupt. The camera snaps to readiness. Texting feels so much more fluid and snappy. Apps pop up, instead of slowly hauling their creaky bones out of bed. The Rocketbar gets closer to living up to its name (though I still think the keyboard should animate up while the bar becomes active).

    Here’s a demo of some of our animations side by side, before and after this patch:

    There are a couple of things we can do about this in Gaia:

    1. I filed a bug to get this change landed in Gaia. The 200ms duration is a first stab at this until we can do further testing. Better to err on the snappy side instead of the sluggish side. We’ve got the thumbs-up from most of the 16 developers who had to review the changes, and are now working with the UX team to sign off before it can land. Kevin Grandon helped by adding a CSS variable that we can use across all of Gaia, which will make it easier to implement these types of changes OS-wide in the future as we learn more.
    2. I’m working with the Firefox OS UX team to define global and consistent best-practices for animations. These guidelines will not be correct 100% of the time, but can be a starting point when implementing new animations, ensuring that the defaults are based on research and experience.
    If you are a Firefox OS user, report bugs if you experience anything that feels slow. By reporting a bug, you can make change happen and help improve the user experience for everyone on Firefox OS.

    If you are a developer or designer, what are your animation best-practices? What user feedback have you received on the animations in your Web projects? Let us know in the comments below!

  3. ES6 In Depth: An Introduction

    Welcome to ES6 In Depth! In this new weekly series, we’ll be exploring ECMAScript 6, the upcoming new edition of the JavaScript language. ES6 contains many new language features that will make JS more powerful and expressive, and we’ll visit them one by one in weeks to come. But before we start in on the details, maybe it’s worth taking a minute to talk about what ES6 is and what you can expect.

    What falls under the scope of ECMAScript?

    The JavaScript programming language is standardized by ECMA (a standards body like W3C) under the name ECMAScript. Among other things, ECMAScript defines the language’s syntax and semantics, along with a standard library of built-ins such as Object, Array, Function, and JSON.

    What it doesn’t define is anything to do with HTML or CSS, or the Web APIs, such as the DOM (Document Object Model). Those are defined in separate standards. ECMAScript covers the aspects of JS that are present not only in the browser, but also in non-browser environments such as node.js.

    The new standard

    Last week, the final draft of the ECMAScript Language Specification, Edition 6, was submitted to the Ecma General Assembly for review. What does that mean?

    It means that this summer, we’ll have a new standard for the core JavaScript programming language.

    This is big news. A new JS language standard doesn’t drop every day. The last one, ES5, happened back in 2009. The ES standards committee has been working on ES6 ever since.

    ES6 is a major upgrade to the language. At the same time, your JS code will continue to work. ES6 was designed for maximum compatibility with existing code. In fact, many browsers already support various ES6 features, and implementation efforts are ongoing. This means all your JS code has already been running in browsers that implement some ES6 features! If you haven’t seen any compatibility issues by now, you probably never will.

    Counting to 6

    The previous editions of the ECMAScript standard were numbered 1, 2, 3, and 5.

    What happened to Edition 4? An ECMAScript Edition 4 was once planned—and in fact a ton of work was done on it—but it was eventually scrapped as too ambitious. (It had, for example, a sophisticated opt-in static type system with generics and type inference.)

    ES4 was contentious. When the standards committee finally stopped work on it, the committee members agreed to publish a relatively modest ES5 and then proceed to work on more substantial new features. This explicit, negotiated agreement was called “Harmony,” and it’s why the ES5 spec contains these two sentences:

    ECMAScript is a vibrant language and the evolution of the language is not complete. Significant technical enhancement will continue with future editions of this specification.

    This statement could be seen as something of a promise.

    Promises resolved

    ES5, the 2009 update to the language, introduced Object.create(), Object.defineProperty(), getters and setters, strict mode, and the JSON object. I’ve used all these features, and I like what ES5 did for the language. But it would be too much to say any of these features had a dramatic impact on the way I write JS code. The most important innovation, for me, was probably the new Array methods: .map(), .filter(), and so on.

    Well, ES6 is different. It’s the product of years of harmonious work. And it’s a treasure trove of new language and library features, the most substantial upgrade for JS ever. The new features range from welcome conveniences, like arrow functions and simple string interpolation, to brain-melting new concepts like proxies and generators.

    ES6 will change the way you write JS code.

    This series aims to show you how, by examining the new features ES6 offers to JavaScript programmers.

    We’ll start with a classic “missing feature” that I’ve been eager to see in JavaScript for the better part of a decade. So join us next week for a look at ES6 iterators and the new for-of loop.

  4. Network Activity and Battery Drain in Mobile Web Apps

    Editor’s note: This post describes the work of a group of students from Portland State University who worked with Mozilla on their senior project. Over the course of the last 6 months, they’ve worked with Mozillian Dietrich Ayala to create a JavaScript library that allows developers to optimize the usage of network operations, thus saving battery life. The group consists of 8 students with varied technology backgrounds, each assigned to take on a different task for the project. Congratulations to the team!

    Overview and Goals of the Project

    The goal of our senior Capstone project was to develop a JavaScript library to make it easier for developers to write applications that use less power by making fewer network requests. In emerging markets, where battery capacities in mobile devices may be smaller and signal strength may be poor, applications that make many requests create serious challenges for the usability of smartphones. Sometimes, applications designed for users in regions with robust network infrastructure can create significant negative effects for users with less reliable access. Reducing battery drain can provide better battery life and better user experiences for everyone. To improve this situation, we’ve created APIs that help developers write mobile applications in a way that minimizes network usage.

    In order to solve this problem effectively, we provide developers with mechanisms to delay non-critical requests, batch requests together, and detect when the network conditions are best for a given activity. This involved doing research to determine the efficacy of various solutions. Regardless of the effectiveness of our APIs, this research provides insight into saving battery usage. In addition to our research, we also focused on the developer ergonomics, hoping to make this easy for developers to use.

    Installation & Usage

    Installation of the library is simple: clone the “dist” folder and choose the library variant that best suits your needs. LocalForage is used in the library to store statistical details for each XMLHttpRequest (XHR). This lets a developer perform analysis and build dynamic heuristics, such as identifying when the user is most likely to make successful XHRs. If you don’t expect to use this, you can opt for the LocalForage-free variant to get a smaller library memory footprint.

    We encourage you to check out our General Usage section and API Usage section to get a comprehensive idea of usage and context. A brief overview of how to use the core functions of the APIs is provided.

    Critical Requests

    When you need to make a critical XHR for something that the user needs right away, you make it using the following syntax:

    AL.ajax(url [, data] [, success] [, method])

    Where url indicates the endpoint, data is an optional parameter for passing JSON data (e.g., the body of a POST XHR), success is a callback invoked after the request completes successfully, and the optional method parameter specifies the HTTP method to use (e.g. PATCH, POST). If method is not specified, GET is used when data is null; if data is provided, POST is the default.

    A sample critical request would look like this:

    AL.ajax('', {cats: 20}, function(response, status, xhr) {
      console.log('Response: ', response);
      console.log('Status: ', status);
      console.log('Xhr: ', xhr);
    });

    This code when executed would result in the following output:

    Response: {"request_method":"POST","request_parameters":[]}
    Status: 200
    Xhr: XMLHttpRequest { readyState=4, timeout=0, withCredentials=false, ...}

    Non-Critical Requests

    Non-critical requests are used for non-urgent needs and work by placing the non-critical XHR(s) in a queue which is fired upon certain conditions. The two default conditions are ‘battery is more than 10% and a critical request was just fired’ or ‘battery is more than 10% and the device is plugged in to a power source’. The syntax for making a non-critical request is the same as a critical one, with the exception of the function name and an additional parameter, timeout:

    AL.addNonCriticalRequest(url [, data] [, success] [, method] [, timeout])

    Here’s how timeout works: given a number of milliseconds, the XHR added (and all other XHRs in the queue) will fire after that delay if the queue has not already been fired by some other mechanism, such as a critical request firing.
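    The mechanism can be pictured with a simplified sketch (this is not the library’s actual implementation; the function names here are invented for illustration):

```javascript
var queue = [];

// queue a non-critical request; optionally flush the whole queue
// after `timeout` milliseconds if nothing else has flushed it first
function addNonCritical(send, timeout) {
  queue.push(send);
  if (timeout) {
    setTimeout(flushQueue, timeout);
  }
}

// fire every queued request while the radio is already powered up
function flushQueue() {
  while (queue.length > 0) {
    queue.shift()();
  }
}

// a critical request goes out immediately and drags the queue with it
function criticalRequest(send) {
  send();
  flushQueue();
}
```

    Batching this way means the cellular radio powers up once for the whole group of requests instead of once per request.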

    Recording & Analysis

    XHRs are stored within LocalForage. There are a variety of functions for retrieving or trimming the data. General retrieval follows this format, where callback receives an array of XHR-related objects, each containing data relevant to the XHR such as start time, end time, and size of the request:

    AL.getHistory(callback)

    You can use this data in all manner of interesting ways, but at a basic level you’ll want to time the XHRs. Calculate the difference between start time and end time of the request for the five most recent requests by doing the following:

    function getRecords() {
      var elem = document.getElementById('recordsList');
      AL.getHistory(function (records) {
        if (records) {
          var counter = 0;
          var string = [];
          for (var i = Math.max(records.length - 5, 0); i < records.length; ++i) {
            string[counter++] = records[i].end - records[i].begin;
          }
          elem.innerHTML = string.toString();
        } else {
          console.log("Records is null");
        }
      });
    }
    Research Findings

    In order to gather data about the effectiveness of our APIs at reducing battery usage, we hooked up our reference device (a Flame) to a battery harness and used our demo app to process 30 requests for various types of media (text, images, and video). All three tests were run on a WiFi network (our university’s WiFi network, specifically). We attempted to run all three tests on a 3G (T-Mobile) network, but due to poor connectivity, we were only able to run the text test over a cellular network.

    When running the tests on WiFi, we noticed that the WiFi chip was extremely efficient. It would turn on almost instantly and once all network requests were done, it would turn off just as quickly. Because of this, we realized that the library is not very useful when on a WiFi network; it is hard to be more efficient than instant on/off.

    When testing on a 3G network however, it became very apparent that this library could be useful. On the graph of power consumption, there is a very clear (and relatively slow) period where the chip warms up, increasing its power usage gradually until it is fully powered. Once all network activity is complete, there is an even longer period of cool down, as the power draw of the chip gradually declines. It is clear that stacking the requests together is useful on this type of network to avoid dead periods, when the phone is powering down the chips due to lack of activity just as another request comes in, causing chips to be powered back up again at the same slow rate.

    Text over WiFi

    Screen turned on around the 2-second mark; a burst of 30 XMLHttpRequests sent from roughly the 6-second mark to the end of the graph (~13.5-second mark)

    Text over 3G

    Screen turned on around the 2-second mark; a burst of 30 XMLHttpRequests sent from roughly the 2-second mark to roughly the 18-second mark

    In conclusion, we believe our library will prove useful when the cellphone is using a 3G network and will help conserve battery usage for requests that are not immediately necessary. Furthermore, the optional data analytics backbone can help advanced developers generate unique heuristics per user to further minimize battery drain.

    Portland State University Firefox OS Capstone Team

    Portland State University Firefox OS Capstone Team: Back row: Tim Huynh, Casey English, Nathan Kerr, Scott Anecito. Front row: Brianna Buckner, Ryan Niebur, Sean Mahan, Bin Gao (left to right)

  5. Drag Elements, Console History, and more – Firefox Developer Edition 39

    Quite a few big new features, improvements, and bug fixes made their way into Firefox Developer Edition 39. Update your Firefox Developer Edition or Nightly build to try them out!


    The Inspector now allows you to move elements around via drag and drop. Click and hold on an element and then drag it to where you want it to go. This feature was added by contributor Mahdi Dibaiee.

    Back in Firefox 33, a tooltip was added to the rule view to allow editing curves for cubic bezier CSS animations. In Developer Edition 39, we’ve greatly enhanced the tooltip’s UX by adding various standard curves you can try right away, as well as cleaning up the overall appearance. This enhancement was added by new contributor John Giannakos.


    The CSS animations panel we debuted in Developer Edition 37 now includes a time machine. You can rewind, fast forward, and set the current time of your animations.


    Previously, when the DevTools console closed, your past Console history was lost. Now, Console history is persisted across sessions. The recent commands you’ve entered will remain accessible in the next toolbox you open, whether it’s in another tab or after restarting Firefox. Additionally, we’ve added a clearHistory console command to reset the stored list of commands.

    The shorthand $_ has been added as an alias for the last result evaluated in the Console. If you evaluated an expression without storing the result in a variable, you can use $_ to grab the last result quickly.

    We now format pseudo-array-like objects as if they were arrays in the Console output. This makes a pseudo-array-like object easier to reason about and inspect, just like a real array. This feature was added by contributor Johan K. Jensen.
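    An object only needs indexed keys and a length property to count as array-like; for example:

```javascript
// a minimal array-like object: numeric keys plus a length
var arrayLike = {0: "a", 1: "b", 2: "c", length: 3};

// generic array methods work on it via call()
var asArray = Array.prototype.slice.call(arrayLike);
console.log(asArray);  // ["a", "b", "c"]
```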


    WebIDE and Mobile

    WiFi debugging for Firefox OS has landed. WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable. We’ll discuss this feature in more detail in a future post.

    WebIDE gained support for Cordova-based projects. If you’re working on a mobile app project using Cordova, WebIDE now knows how to build the project for devices it supports without any extra configuration.

    Other Changes

    • Attribute changes only flash the changed attribute in the Markup View, instead of the whole element.
    • Canvas Debugger now supports setTimeout for animations.
    • Inline box model highlighting.
    • Browser Toolbox can now be opened from a shortcut: Cmd-Opt-Shift-I / Ctrl-Alt-Shift-I.
    • Network Monitor now shows the remote server’s IP address and port.
    • When an element is highlighted in the Inspector, you can now use the arrow keys to highlight the current element’s parent (left key), or its first child, or its next sibling if it has no children, or the next node in the tree if it has no siblings (right key). This is especially useful when an element and its parent occupy the same space on the screen, making it difficult to select one of them using only the mouse.

    For an even more complete list, check out all 200 bugs resolved during the Firefox 39 development cycle.

    Thanks to all the new developers who made their first DevTools contribution this release:

    • Anush
    • Brandon Max
    • Geoffroy Planquart
    • Johan K. Jensen
    • John Giannakos
    • Mahdi Dibaiee
    • Nounours Heureux
    • Wickie Lee
    • Willian Gustavo Veiga

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice, or get in touch with the team at @FirefoxDevTools on Twitter.

  6. WebRTC in Firefox 38: Multistream and renegotiation

    Building on the JSEP (JavaScript Session Establishment Protocol) engine rewrite introduced in Firefox 37, Firefox 38 now has support for multistream (multiple tracks of the same type in a single PeerConnection), and renegotiation (multiple offer/answer exchanges in a single PeerConnection). As usual with such things, there are caveats and limitations, but the functionality seems to be pretty solid.

    Multistream and renegotiation features

    Why are these things useful, you ask? For example, now you can handle a group video call with a single PeerConnection (multistream), and do things like add/remove these streams on the fly (renegotiation). You can also add screensharing to an existing video call without needing a separate PeerConnection. Here are some advantages of this new functionality:

    • Simplifies your job as an app-writer
    • Requires fewer rounds of ICE (Interactive Connectivity Establishment – the protocol for establishing connection between the browsers), and reduces call establishment time
    • Requires fewer ports, both on the browser and on TURN relays (if using bundle, which is enabled by default)

    Now, there are very few WebRTC services that use multistream (the way it is currently specified, see below) or renegotiation. This means that real-world testing of these features is extremely limited, and there will probably be bugs. If you are working with these features, and are having difficulty, do not hesitate to ask questions on IRC in #media, since this helps us find these bugs.

    Also, it is important to note that Google Chrome’s current implementation of multistream is not going to be interoperable; this is because Chrome has not yet implemented the specification for multistream (called “unified plan” – check on their progress in the Google Chromium Bug tracker). Instead they are still using an older Google proposal (called “plan B”). These two approaches are mutually incompatible.

    On a related note, if you maintain or use a WebRTC gateway that supports multistream, odds are good that it uses “plan B” as well, and will need to be updated. This is a good time to start implementing unified plan support. (Check the Appendix below for examples.)

    Building a simple WebRTC video call page

    So let’s start with a concrete example. We are going to build a simple WebRTC video call page that allows the user to add screen sharing during the call. As we are going to dive deep quickly you might want to check out our earlier Hacks article, WebRTC and the Early API, to learn the basics.

    First we need two PeerConnections:

    pc1 = new mozRTCPeerConnection();
    pc2 = new mozRTCPeerConnection();

    Then we request access to camera and microphone and attach the resulting stream to the first PeerConnection:

    let videoConstraints = {audio: true, video: true};
    navigator.mediaDevices.getUserMedia(videoConstraints)
      .then(stream1 => {
        pc1.addStream(stream1);
      });

    To keep things simple we want to be able to run the call just on one machine. But most computers today don’t have two cameras and/or microphones available. And just having a one-way call is not very exciting. So let’s use a built-in testing feature of Firefox for the other direction:

    let fakeVideoConstraints = {video: true, fake: true};
    navigator.mediaDevices.getUserMedia(fakeVideoConstraints)
      .then(stream2 => {
        pc2.addStream(stream2);
      });

    Note: You’ll want to call this part from within the success callback of the first getUserMedia() call, so that you don’t have to track with boolean flags whether both getUserMedia() calls have succeeded before you proceed to the next step.
    Firefox also has a built-in fake audio source (which you can turn on like this {audio: true, fake: true}). But listening to an 8kHz tone is not as pleasant as looking at the changing color of the fake video source.

    Now we have all the pieces ready to create the initial offer:

    pc1.createOffer().then(step1, failed);

    Now the typical WebRTC offer/answer flow follows:

    function step1(offer) {
      pc1_offer = offer;
      pc1.setLocalDescription(offer).then(step2, failed);
    }

    function step2() {
      pc2.setRemoteDescription(pc1_offer).then(step3, failed);
    }

    For this example we take a shortcut: instead of passing the signaling message through an actual signaling relay, we simply pass the information into both PeerConnections, as they are both locally available on the same page. Refer to our previous Hacks article WebRTC and the Early API for a solution that uses Firebase as a relay to connect two browsers.

    function step3() {
      pc2.createAnswer().then(step4, failed);
    }

    function step4(answer) {
      pc2_answer = answer;
      pc2.setLocalDescription(answer).then(step5, failed);
    }

    function step5() {
      pc1.setRemoteDescription(pc2_answer).then(step6, failed);
    }

    function step6() {
      log("Signaling is done");
    }

    The one remaining piece is to connect the remote videos once we receive them.

    pc1.onaddstream = function(obj) {
      pc1video.mozSrcObject = obj.stream;
    };

    Add a similar clone of this for our PeerConnection 2. Keep in mind that these callback functions are super trivial — they assume we only ever receive a single stream and only have a single video player to connect it. The example will get a little more complicated once we add the screen sharing.

    With this we should be able to establish a simple call with audio and video from the real devices getting sent from PeerConnection 1 to PeerConnection 2 and in the opposite direction a fake video stream that shows slowly changing colors.

    Implementing screen sharing

    Now let’s get to the real meat and add screen sharing to the already established call.

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};
      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Two things are required to get screen sharing working:

    1. Only pages loaded over HTTPS are allowed to request screen sharing.
    2. You need to append your domain to the user preference media.getusermedia.screensharing.allowed_domains in about:config to whitelist it for screen sharing.

    For the screenConstraints you can also use 'window' or 'application' instead of 'screen' if you want to share less than the whole screen.
    We are using getTracks() here to fetch and store the video track out of the stream we get from the getUserMedia call, because we need to remember the track later when we want to be able to remove screen sharing from the call. Alternatively, in this case you could use the addStream() function used before to add new streams to a PeerConnection. But the addTrack() function gives you more flexibility if you want to handle video and audio tracks differently, for instance. In that case, you can fetch these tracks separately via the getAudioTracks() and getVideoTracks() functions instead of using the getTracks() function.
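    As a minimal sketch of handling the two kinds of tracks separately, consider the following (the `addTracksSeparately` helper is hypothetical and not any WebRTC API; `pc`, `stream`, and `senders` follow the shapes used in the examples above):

```javascript
// Hypothetical helper: add a stream's audio and video tracks to a
// PeerConnection one by one, remembering the returned RTCRtpSender
// objects so individual tracks can be removed later.
function addTracksSeparately(pc, stream, senders) {
  stream.getAudioTracks().forEach(function (track) {
    senders.push(pc.addTrack(track, stream));
  });
  stream.getVideoTracks().forEach(function (track) {
    // video tracks could be treated differently here if desired
    senders.push(pc.addTrack(track, stream));
  });
}
```

    Because each addTrack() call returns its own sender, this shape makes it easy to later remove, say, only the video tracks while leaving audio in place.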

    Once you add a stream or track to an established PeerConnection, this needs to be signaled to the other side of the connection. To kick that off, the onnegotiationneeded callback will be invoked, so your callback should be set up before adding a track or stream. The beauty here is that from this point on we can simply re-use our signaling call chain. The resulting screen share function looks like this:

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};
      pc1.onnegotiationneeded = function (event) {
        pc1.createOffer().then(step1, failed);
      };
      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Now the receiving side also needs to learn that the stream from the screen sharing was successfully established. We need to slightly modify our initial onaddstream function for that:

    pc2.onaddstream = function(obj) {
      var stream = obj.stream;
      if (stream.getAudioTracks().length == 0) {
        pc3video.mozSrcObject = stream;
      } else {
        pc2video.mozSrcObject = stream;
      }
    };

    The important thing to note here: With multistream and renegotiation onaddstream can and will be called multiple times. In our little example onaddstream is called the first time we establish the connection and PeerConnection 2 starts receiving the audio and video from the real devices. And then it is called a second time when the video stream from the screen sharing is added.
    We are simply assuming here that the screen share will have no audio track in it to distinguish the two cases. There are probably cleaner ways to do this.

    Please refer to the Appendix for a bit more detail on what happens here under the hood.

    As the user probably does not want to share his/her screen until the end of the call let’s add a function to remove it as well.

    function stopScreenShare() {
      screenStream.stop();
      screenSenders.forEach(sender => {
        pc1.removeTrack(sender);
      });
    }

    We are holding on to a reference to the original stream to be able to call stop() on it to release the getUserMedia permission we got from the user. The addTrack() call in our screenShare() function returned us an RTCRtpSender object, which we are storing so we can hand it to the removeTrack() function.

    All of the code combined with some extra syntactic sugar can be found on our MultiStream test page.

    If you are going to build something which allows both ends of the call to add screen share, a more realistic scenario than our demo, you will need to handle special cases. For example, multiple users might accidentally try to add another stream (e.g. the screen share) at exactly the same time, and you may end up with a new corner case for renegotiation called “glare.” This is what happens when both ends of the WebRTC session decide to send new offers at the same time. We do not yet support the “rollback” session description type that can be used to recover from glare (see the JSEP draft and the Firefox bug). Probably the best interim solution to prevent glare is to announce via your signaling channel that the user did something which is going to kick off another round of renegotiation, then wait for the okay from the far end before you call createOffer() locally.
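    The announce-and-wait strategy above can be sketched as a tiny state machine (the `Negotiator` helper is hypothetical, not a WebRTC API; the `signal` and `createOffer` functions are injected so the sketch stays self-contained):

```javascript
// Hypothetical glare-avoidance helper: announce the intent to renegotiate
// over the signaling channel, and only call createOffer() once the far end
// acknowledges, so both sides never send fresh offers simultaneously.
function Negotiator(signal, createOffer) {
  this.pending = false;
  this.signal = signal;           // function(msg): send msg to the far end
  this.createOffer = createOffer; // function(): kick off the offer/answer chain
}

Negotiator.prototype.requestRenegotiation = function () {
  if (this.pending) return;       // only one renegotiation round at a time
  this.pending = true;
  this.signal({type: "renegotiation-request"});
};

Negotiator.prototype.onMessage = function (msg) {
  if (msg.type === "renegotiation-ok" && this.pending) {
    this.pending = false;
    this.createOffer();           // safe: far end agreed to hold its own offers
  }
};
```

    The far end would answer a "renegotiation-request" with "renegotiation-ok" only when it has no renegotiation of its own in flight, which is what prevents the glare condition.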


    Appendix

    This is an example renegotiation offer SDP from Firefox 39 when adding the screen share:

    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1 sdparta_2
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4
    a=candidate:0 1 UDP 2130379007 62583 typ host
    a=candidate:1 1 UDP 1694236671 54687 typ srflx raddr rport 62583
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4
    a=candidate:0 1 UDP 2130379007 62583 typ host
    a=candidate:1 1 UDP 1694236671 54687 typ srflx raddr rport 62583
    a=fmtp:120 max-fs=12288;max-fr=60
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtpmap:120 VP8/90000
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4
    a=fmtp:120 max-fs=12288;max-fr=60
    a=msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtpmap:120 VP8/90000
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that each track gets its own m-section, denoted by the msid attribute.

    As you can see from the BUNDLE attribute, Firefox offers to put the new video stream, with its different msid value, into the same bundled transport. That means if the answerer agrees we can start sending the video stream over the already established transport. We don’t have to go through another ICE and DTLS round. And in case of TURN servers we save another relay resource.

    Hypothetically, this is what the previous offer would look like if it used plan B (as Chrome does):

    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4
    a=candidate:0 1 UDP 2130379007 62583 typ host
    a=candidate:1 1 UDP 1694236671 54687 typ srflx raddr rport 62583
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=ssrc:323910839 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4
    a=candidate:0 1 UDP 2130379007 62583 typ host
    a=candidate:1 1 UDP 1694236671 54687 typ srflx raddr rport 62583
    a=fmtp:120 max-fs=12288;max-fr=60
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtpmap:120 VP8/90000
    a=ssrc:2917595157 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    a=ssrc:2325911938 msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that there is only one video m-section, with two different msids, which are part of the ssrc attributes rather than in their own a lines (these are called “source-level” attributes).

  7. An analytics primer for developers

    There are three kinds of lies: lies, damned lies, and statistics – Mark Twain

    Deciding what to track (all the things)

    When you are adding analytics to a system you should try to log everything. At some point in the future, if you need to pull information out of the system, it’s much better to have every piece of information to hand than to realise that you need data you don’t yet track. Here are some guidelines and suggestions for collecting and analysing information about how people interact with your website or app.

    Grouping your stats as a best practice

    Most analytics platforms allow you to tag an event with metadata. This lets you analyse stats against each other and makes it easier to compare elements in a user interaction.

    For example, if you are logging clicks on a menu, you could track each menu item differently e.g.:

    track("Home pressed");
    track("Cart pressed");
    track("Logout pressed");

    Doing this makes it harder to answer questions such as which button is the most popular. Using metadata, you can make most analytics platforms perform calculations like this for you:

    track("Menu pressed","Home button");
    track("Menu pressed","Cart button");
    track("Menu pressed","Logout button");

    The analytics above mean you now have a total of all menu presses, and you can find the most/least popular of the menu items with no extra effort.
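    To see why the metadata form is easier to aggregate, here is a minimal sketch of the grouping an analytics platform does for you behind the scenes (the `countByLabel` helper and event shape are hypothetical, not any platform's API):

```javascript
// Hypothetical aggregation: events sharing one name ("Menu pressed") are
// grouped by their metadata label, giving per-item counts and a free total.
function countByLabel(events) {
  var counts = {};
  events.forEach(function (e) {
    counts[e.label] = (counts[e.label] || 0) + 1;
  });
  return counts;
}
```

    With the first, label-free style, each button is a distinct event name and this grouping (and the overall menu-press total) has to be assembled by hand.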

    Optimising your funnel

    A conversion funnel is a term of art derived from a consumer marketing model. The metaphor of the funnel describes the flow of steps a user goes through as they engage more deeply with your software. Imagine you want to know how many users clicked “log in” and then paid at the checkout. If you track events such as “Checkout complete” and “User logged in” you can then ask your analytics platform what percentage of users did both within a certain time frame (a day, for instance).

    Imagine the answer comes out to be 10%. This tells you useful information about the behaviour of your users (bear in mind this funnel is not order-sensitive, i.e., it does not matter whether the events happen in the order login -> cart -> pay or cart -> login -> pay). You can then start to optimise parts of your app and use this value to determine whether or not you are converting more of your users to make a purchase or otherwise engage more deeply.
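    Concretely, this order-insensitive funnel can be computed from a raw event log. A minimal sketch (the event shape and the `funnelConversion` helper are hypothetical, standing in for what a platform computes for you):

```javascript
// Hypothetical funnel calculation: the percentage of users who performed
// BOTH events, in any order (matching the order-insensitive funnel above).
function funnelConversion(events, eventA, eventB) {
  var did = {};                         // userId -> set of event names seen
  events.forEach(function (e) {
    (did[e.userId] = did[e.userId] || {})[e.name] = true;
  });
  var users = Object.keys(did);
  var both = users.filter(function (u) {
    return did[u][eventA] && did[u][eventB];
  });
  return users.length ? (100 * both.length / users.length) : 0;
}

var log = [
  {userId: 1, name: "User logged in"},
  {userId: 1, name: "Checkout complete"},
  {userId: 2, name: "User logged in"},
  {userId: 3, name: "Checkout complete"},  // order doesn't matter
  {userId: 3, name: "User logged in"},
  {userId: 4, name: "User logged in"},
];
// users 1 and 3 out of 4 did both, i.e. a 50% conversion
```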

    Deciding what to measure

    Depending on your business, different stats will have different levels of importance. Here are some common stats of interest to developers of apps or online services:

    Number of sessions
    The total number of sessions on your product (the user opening your product, using it, then closing it = 1 session)
    Session length
    How long each session lasts (can be mode, mean, median)
    Retention
    How many people come back to your product having used it before (there are a variety of metrics, such as rolling retention, 30-day retention, etc.)
    Monthly active users
    How many users use the app at least once a month
    Daily active users
    How many users use the app at least once a day
    Average revenue per user
    How much money you make per person
    Average transaction value
    How much money you make per sale
    Customer acquisition cost
    How much it costs to get one extra user (normally specified by the channel for getting them)
    Customer lifetime value
    Total profit made from a user (usually projected)
    Churn
    The number of people who leave your product in a given time (usually given as a percentage of total user base)
    Cycle time
    The time it takes for one user to refer another
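    As a rough illustration of how the session-based metrics above are derived, here is a minimal sketch computing daily and monthly active users from session records (the `activeUsers` helper and the record shape are hypothetical):

```javascript
// Hypothetical session records: {userId, day}, with day as a simple integer.
// Active users = distinct users with at least one session in the window,
// so repeat sessions by the same user count once.
function activeUsers(sessions, fromDay, toDay) {
  var seen = {};
  sessions.forEach(function (s) {
    if (s.day >= fromDay && s.day <= toDay) seen[s.userId] = true;
  });
  return Object.keys(seen).length;
}

var sessions = [
  {userId: "a", day: 1}, {userId: "a", day: 1},  // two sessions, one user
  {userId: "b", day: 2},
  {userId: "c", day: 15},
];
var dau = activeUsers(sessions, 1, 1);    // daily active users on day 1
var mau = activeUsers(sessions, 1, 30);   // monthly active users
```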

    Choosing an analytics tool or platform

    There are plenty of analytics providers, listed below are some of the best known and most widely used:

    Google Analytics

    Developer documentation

    Quick event log example:

    ga('send', 'event', 'button', 'click');


    Pros:

    • Free
    • Easy to set up


    Cons:

    • Steep learning curve for using the platform
    • Specialist training can be required to get the most out of the platform

    Single page apps:

    If you are making a single page app/website, you need to keep Google informed that the user is still on your page and hasn’t bounced (gone to your page/app and left without doing anything):

    ga('set', 'page', location.pathname + + location.hash);
    ga('send', 'pageview');

    Use the above code every time a user navigates to a new section of your app/website to let Google know the user is still browsing your site/app.


    Flurry

    Developer documentation

    Quick event log example:

    FlurryAgent.logEvent("Button clicked");
    FlurryAgent.logEvent("Button clicked",{more : 'data'});


    Pros:

    • Free
    • Easy to set up


    Cons:

    • Data normally 24 hours behind real time
    • Takes ages to load the data


    Mixpanel

    Developer documentation

    Quick event log example:

    mixpanel.track("Button clicked");
    mixpanel.track("Button clicked",{more : 'data'});


    Pros:

    • Free trial
    • Easy to set up
    • Real-time data


    Cons:

    • Gets expensive after the free trial
    • If you are tracking a lot of points, the interface can get cluttered

    Speeding up requests

    When you are loading an external JS file you want to do it asynchronously if possible to speed up the page load.

    <script type="text/javascript" src="..." async></script>

    The above code will cause the JavaScript to load asynchronously but assumes the user has a browser supporting HTML5.

    //jQuery example
    $.getScript("...", function() {
      // analytics script has loaded
    });

    This code will load the JavaScript asynchronously with greater browser support.

    The next problem is that you could try to log an analytics event before the framework has loaded, so you need to check whether the framework’s variable exists first:

    if (typeof FlurryAgent != "undefined") {
      FlurryAgent.logEvent("Button clicked");
    }

    This will prevent errors and will also allow you to easily disable analytics during testing. (You can just stop the script from being loaded – and the variable will never be defined.)

    The problem here is that you might be missing analytics whilst waiting for the script to load. Instead, you can make a queue to store the events and then post them all when the script loads:

    var queue = [];

    if (typeof FlurryAgent != "undefined") {
      FlurryAgent.logEvent("data", {more: data});
    } else {
      queue.push(["data", {more: data}]);
    }

    //jQuery example
    $.getScript("...", function() {
      for (var i = 0; i < queue.length; i++) {
        FlurryAgent.logEvent(queue[i][0], queue[i][1]);
      }
      queue = [];
    });

    Analytics for your Firefox App

    You can use any of the above providers with Firefox OS, but remember that the script snippets you paste into your code are generally protocol-agnostic: they start with // and you need to choose either http: or https: yourself. (This is required only if you are making a packaged app.)

    Let us know how it goes.

  8. asm.js Speedups Everywhere

    asm.js is an easy-to-optimize subset of JavaScript. It runs in all browsers without plugins, and is a good target for porting C/C++ codebases such as game engines – which have in fact been the biggest adopters of this approach, for example Unity 3D and Unreal Engine.

    Obviously, developers porting games using asm.js would like them to run well across all browsers. However, each browser has different performance characteristics, because each has a different JavaScript engine, different graphics implementation, and so forth. In this post, we’ll focus on JavaScript execution speed and see the significant progress towards fast asm.js execution that has been happening across the board. Let’s go over each of the four major browsers now.


    Chrome

    Already in 2013, Google released Octane 2.0, a new version of their primary JavaScript benchmark suite, which contained a new asm.js benchmark, zlib. Benchmarks define what browsers optimize: things that matter are included in benchmarks, and browsers then compete to achieve the best scores. Therefore, adding an asm.js benchmark to Octane clearly signaled Google’s belief that asm.js content is important to optimize for.

    A further major development happened more recently, when Google landed TurboFan, a new work-in-progress optimizing compiler for Chrome’s JavaScript engine, v8. TurboFan has a “sea of nodes” architecture (which is new in the JavaScript space, and has been used very successfully elsewhere, for example in the Java server virtual machine), and aims to reach even higher speeds than CrankShaft, the first optimizing compiler for v8.

    While TurboFan is not yet ready to be enabled on all JavaScript content, as of Chrome 41 it is enabled on asm.js. Getting the benefits of TurboFan early on asm.js shows the importance of optimizing asm.js for the Chrome team. And the benefits can be quite substantial: For example, TurboFan speeds up Emscripten‘s zlib benchmark by 13%, and fasta by 24%.


    Safari

    During the last year, Safari’s JavaScript engine, JavaScriptCore, introduced a new JIT (Just In Time compiler) called FTL. FTL stands for “Fourth Tier LLVM,” as it adds a fourth level of optimization above the three previously-existing ones, and it is based on LLVM, a powerful open source compiler framework. This is exciting because LLVM is a top-tier general-purpose compiler, with many years of optimizations put into it, and Safari gets to reuse all those efforts. As shown in the blog posts linked to earlier, the speedups that FTL provides can be very substantial.

    Another interesting development from Apple this year was the introduction of a new JavaScript benchmark, JetStream. JetStream contains several asm.js benchmarks, an indication that Apple believes asm.js content is important to optimize for, just as when Google added an asm.js benchmark to Octane.

    Internet Explorer

    The JavaScript engine inside Internet Explorer is named Chakra. Last year, the Chakra team blogged about a suite of optimizations coming to IE in Windows 10 and pointed to significant improvements in the scores on asm.js workloads in Octane and JetStream. This is yet another example of how having asm.js workloads in common benchmarks drives measurement and optimization.

    The big news, however, is the recent announcement by the Chakra team that they are working on adding specific asm.js optimizations, to arrive in Windows 10 together with the other optimizations mentioned earlier. These optimizations haven’t made it to the Preview channel yet, so we can’t measure and report on them here. However, we can speculate on the improvements based on the initial impact of landing asm.js optimizations in Firefox. As shown in this benchmark comparisons slide containing measurements from right after the landing, asm.js optimizations immediately brought Firefox to around 2x slower than native performance (from 5-12x native before). Why should these wins translate to Chakra? Because, as explained in our previous post, the asm.js spec provides a predictable way to validate asm.js code and generate high-quality code based on the results.

    So, here’s looking forward to good asm.js performance in Windows 10!


    Firefox

    As we mentioned before, the initial landing of asm.js optimizations in Firefox generally put Firefox within 2x of native in terms of raw throughput. By the end of 2013, we were able to report that the gap had shrunk to around 1.5x native – which is close to the amount of variability that different native compilers have between each other anyhow, so comparisons to “native speed” start to be less meaningful.

    At a high-level, this progress comes from two kinds of improvements: compiler backend optimizations and new JavaScript features. In the area of compiler backend optimizations, there has been a stream of tiny wins (specific to particular code patterns or hardware) making it difficult to point to any one thing. Two significant improvements stand out, though:

    Along with backend optimization work, two new JavaScript features have been incorporated into asm.js which unlock new performance capabilities in the hardware. The first feature, Math.fround, may look simple but it enables the compiler backend to generate single-precision floating-point arithmetic when used carefully in JS. As described in this post, the switch can result in anywhere from a 5% – 60% speedup, depending on the workload. The second feature is much bigger: SIMD.js. This is still a stage 1 proposal for ES7 so the new SIMD operations and the associated asm.js extensions are only available in Firefox Nightly. Initial results are promising though.
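    A quick illustration of Math.fround in plain JavaScript (the `addSingle` helper is a hypothetical example; asm.js-style code applies fround around every single-precision operation in the same way):

```javascript
// Math.fround rounds a double to the nearest single-precision float.
// Wrapping every intermediate result in fround tells the compiler that
// float32 arithmetic is sufficient, so it can emit single-precision
// hardware instructions.
var f = Math.fround;

function addSingle(a, b) {
  a = f(a);
  b = f(b);
  return f(a + b);
}

// 0.1 is not exactly representable in float32, so fround changes it;
// values like 0.5 and 0.25 are exactly representable and pass through.
var x = Math.fround(0.1);   // close to, but not exactly, 0.1
```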

    Separate from all these throughput optimizations, there have also been a set of load time optimizations in Firefox: off-main-thread and parallel compilation of asm.js code as well as caching of the compiled machine code. As described in this post, these optimizations significantly improve the experience of starting a Unity- or Epic-sized asm.js application. Existing asm.js workloads in the benchmarks mentioned above do not test this aspect of asm.js performance so we put together a new benchmark suite named Massive that does. Looking at Firefox’s Massive score over time, we can see the load-time optimizations contributing to a more than 6x improvement (more details in the Hacks post introducing the Massive benchmark).

    The Bottom Line

    What is most important, in the end, are not the underlying implementation details, nor even specific performance numbers on this benchmark or that. What really matters is that applications run well. The best way to check that is to actually run real-world games! A nice example of an asm.js-using game is Dead Trigger 2, a Unity 3D game:

    The video shows the game running on Firefox, but as it uses only standard web APIs, it should work in any browser. We tried it now, and it renders quite smoothly on Firefox, Chrome and Safari. We are looking forward to testing it on the next Preview version of Internet Explorer as well.

    Another example is Cloud Raiders:

    As with Unity, the developers of Cloud Raiders were able to compile their existing C++ codebase (using Emscripten) to run on the web without relying on plugins. The result runs well in all four of the major browsers.

    In conclusion, asm.js performance has made great strides over the last year. There is still room for improvement – sometimes performance is not perfect, or a particular API is missing, in one browser or another – but all major browsers are working to make sure that asm.js runs quickly. We can see that by looking at the benchmarks they are optimizing on, which contain asm.js, and in the new improvements they are implementing in their JavaScript engines, which are often motivated by asm.js. As a result, games that not long ago would have required plugins are quickly getting to the point where they can run well without them, in modern browsers across the web.

  9. Synchronous Execution and Filesystem Access in Emscripten

    Emscripten helps port C and C++ code to run on the Web. When doing such porting, we have to work around limitations of the web platform, one of which is that code must be asynchronous: you can’t have long-running code on the Web, it must be split up into events, because other important things – rendering, input, etc. – can’t happen while your code is running. But, it is common to have C and C++ code that is synchronous! This post will review how Emscripten helps handle this problem, using a variety of methods. We’ll look at preloading a virtual filesystem as well as a recently-added option to execute your compiled code in a special interpreter. We’ll also get the chance to play some Doom!

    First, let’s take a more concrete look at the problem. Consider, for example,

    char buffer[100];
    FILE *f = fopen("data.txt", "rb");
    fread(buffer, 100, 1, f);
    fclose(f);

    This C code opens a file and reads from it synchronously. Now, in the browser we don’t have local filesystem access (content is sandboxed, for security), so when reading a file, we might be issuing a remote request to a server, or loading from IndexedDB – both of which are asynchronous! How, then, does anything get ported at all? Let’s go over three approaches to handling this problem.

    1. Preloading to Emscripten’s virtual filesystem

    The first tool Emscripten has is a virtual in-memory filesystem, implemented in JavaScript (credit goes to inolen for most of the code), which can be pre-populated before the program runs. If you know which files will be accessed, you can preload them (using emcc’s --preload-file option), and when the code executes, copies of the files are already in memory, ready for synchronous access.

    On small to medium amounts of data, this is a simple and useful technique. The compiled code doesn’t know it’s using a virtual filesystem, everything looks normal and synchronous to it. Things just work. However, with large amounts of data, it can be too expensive to preload it all into memory. You might only need each file for a short time – for example, if you load it into a WebGL shader, and then forget about it on the CPU side – but if it’s all preloaded, you have to hold it all in memory at once. Also, the Emscripten virtual filesystem works hard to be as POSIX-compliant as it can, supporting things like permissions, mmap, etc., which add overhead that might be unnecessary in some applications.

    How much of a problem this is depends not just on the amount of data you load, but also the browser and the operating system. For example, on a 32-bit browser you are generally limited to 4GB of virtual address space, and fragmentation can be a problem. For these reasons, 64-bit browsers can sometimes succeed in running applications that need a lot of memory while 32-bit browsers fail (or fail some of the time). To some extent you can try to work around memory fragmentation problems by splitting up your data into separate asset bundles, by running Emscripten’s file packager separately several times, instead of using --preload-file once for everything. Each bundle is a combination of JavaScript that you load on your page, and a binary file with the data of all the files you packaged in that asset bundle, so in this way you get multiple smaller files rather than one big one. You can also run the file packager with --no-heap-copy, which will keep the downloaded asset bundle data in separate typed arrays instead of copying them into your program’s memory. However, even at best, these things can only help some of the time with memory fragmentation, in an unpredictable manner.

    Preloading all the data is therefore not always a viable solution: with large amounts of data, we might not have enough memory, or fragmentation might be a problem. Also, we might not know ahead of time which files we will need. And in general, even when preloading works for a project, we would still like to avoid it so that we use as little memory as possible, as things generally run faster that way. That's why we need the two other approaches to handling the problem of synchronous code, which we will discuss now.

    2. Refactor code to be asynchronous

    The second approach is to refactor your code to turn synchronous into asynchronous code. Emscripten provides asynchronous APIs you can use for this purpose, for example, the fread() in the example above could be replaced with an asynchronous network download (emscripten_async_wget, emscripten_async_wget_data), or an asynchronous access of locally-cached data in IndexedDB (emscripten_idb_async_load, emscripten_idb_async_store, etc.).

    And if you have synchronous code doing something other than filesystem access, for example rendering, Emscripten provides a generic API to do an asynchronous callback (emscripten_async_call). For the common case of a main loop which should be called once per frame from the browser’s event loop, Emscripten has a main loop API (emscripten_set_main_loop, etc.).
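    To make the main loop idea concrete, here is a rough JavaScript analog of what emscripten_set_main_loop arranges under the hood: the application's main-loop function is called once per frame from the browser's event loop. (The function name and the `schedule` parameter are ours, for illustration only; `schedule` would normally just be requestAnimationFrame.)

```javascript
// Sketch only -- not Emscripten's actual implementation. `schedule` is a
// parameter so the example is self-contained; in a browser it defaults to
// requestAnimationFrame.
function setMainLoop(iterate, schedule) {
  schedule = schedule || window.requestAnimationFrame.bind(window);
  function frame() {
    iterate();        // one iteration of the application's main loop
    schedule(frame);  // yield back to the browser between frames
  }
  schedule(frame);
}
```

Because each iteration returns to the event loop, the browser can render and handle input between frames, which is exactly what long-running synchronous code prevents.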

    Concretely, an fread() would be replaced with something like

    emscripten_async_wget_data("filename.txt", 0, onLoad, onError);

    where the first parameter is the filename on the remote server, then an optional void* argument (that will be passed to the callbacks), then callbacks on load and on error. The tricky thing is that the code that should execute right after the fread() would need to be in the onLoad callback – that’s where the refactoring comes in. Sometimes this is easy to do, but it might not be.
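    The shape of that refactoring can be sketched in JavaScript terms (the helper names below are made up for illustration; they are not Emscripten APIs):

```javascript
// Illustrative sketch only. The synchronous version would have been:
//
//   var data = readFileSync("filename.txt");
//   process(data);
//
// After refactoring, everything that used to follow the read moves into
// the onLoad callback:
function loadAndProcess(fetchData, process, onError) {
  fetchData("filename.txt", function onLoad(data) {
    process(data); // the code that used to follow fread() now lives here
  }, onError);
}
```

The mechanical part is easy; the hard part is that any state the follow-up code needed must now survive until the callback fires.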

    Refactoring code to be asynchronous is generally the optimal thing to do. It makes your application use the APIs that are available on the Web in the way they are intended to be used. However, it does require changes to your project, and may require that the entire thing be designed in an event-friendly manner, which can be difficult if it wasn’t already structured that way. For these reasons, Emscripten has one more approach that can help you here.

    3. The Emterpreter: Run synchronous code asynchronously, automatically

    The Emterpreter is a fairly new option in Emscripten that was initially developed for startup-time reasons. It compiles your code into a binary bytecode, and ships it with a little interpreter (written in JavaScript, of course), in which the code can be executed. Code running in an interpreter is “manually executed” by us, so we can control it more easily than normal JavaScript, and we can add the capability to pause and resume, which is what we need to turn synchronous code into asynchronous code. Emterpreter-Async, the Emterpreter plus support for running synchronous code asynchronously, was therefore fairly easy to add on top of the existing Emterpreter option.

    The idea of an automatic transformation from synchronous to asynchronous code was experimented with by Lu Wang during his internship over the summer of 2014: the Asyncify option. Asyncify rewrites code at the LLVM level to support pausing and resuming execution: you write synchronous code, and the compiler rewrites it to run asynchronously. Returning to the fread() example from before, Asyncify would automatically break up the function around that call, and put the code after the call into a callback function – basically, it does what we suggested you do manually in the “Refactor code to be asynchronous” section above. This can work surprisingly well: For example, Lu ported vim, a large application with a lot of synchronous code in it, to the Web. And it works! However, we hit significant limitations in terms of increased code size because of how Asyncify restructures your code.

    The Emterpreter’s async support avoids the code size problem that Asyncify hit because it is an interpreter running bytecode: The bytecode is always the same size (in fact, smaller than asm.js), and we can manipulate control flow on it manually in the interpreter, without instrumenting the code.

    Of course, running in an interpreter can be quite slow, and this one is no exception – speed can be significantly slower than usual. Therefore, this is not a mode in which you want to run most of your code. But, the Emterpreter gives you the option to decide which parts of your codebase are interpreted and which are not, and this is crucial to productive use of this option, as we will now see.

    Let’s make this concrete by showing the option in practice on the Doom codebase. Here is a normal port of Doom (specifically Boon, the Doom code with Freedoom open art assets). That link is just Doom compiled with Emscripten, not using synchronous code or the Emterpreter at all, yet. The game seems to work in that link – do we even need anything else? It turns out that we need synchronous execution in two places in Doom. The first is filesystem access: since Doom is from 1993, the size of the game is quite small compared to today’s hardware, so we can preload all of the data files and things just work (that’s what happens in that link). So far, so good!

    The second problem, though, is trickier: For the most part Doom renders a whole frame in each iteration of the main loop (which we can call from the browser’s event loop one at a time), however it also does some visual effects using synchronous code. Those effects are not shown in that first link – Doom fans may have noticed something was missing! :)

    Here is a build with the Emterpreter-Async option enabled. This runs the entire application as bytecode in the interpreter, and it’s quite slow, as expected. Ignoring speed for now, you might notice that when you start a game, there is a “wipe” effect right before you begin to play, that wasn’t in the previous build. It looks kind of like a descending wave. Here’s a screenshot:

    That effect is written synchronously (note the screen update and sleep). The result is that in the initial port of the game, the wipe effect code is executed, but the JavaScript frame doesn’t end yet, so no rendering happens. For this reason, we don’t see the wipe in the first build! But we do see it in the second, because we enabled the Emterpreter-Async option, which supports synchronous code.

    The second build is slow. What can we do? The Emterpreter lets you decide which code runs normally, as full-speed asm.js, and which is interpreted. We want to run only what we absolutely must run in the interpreter, and everything else in asm.js, so things are as fast as possible. For purposes of synchronous code, the code we must interpret is anything that is on the stack during a synchronous operation. To understand what that means, imagine that the callstack currently looks like this:

    main() => D_DoomMain() => D_Display() => D_Wipe() => I_uSleep()

    and the last of those calls sleep. The Emterpreter turns this synchronous operation into an asynchronous one by saving where execution currently is in that method (this is easy in an interpreter: the program counter, and all local variables, already live on a stack in a global typed array), then doing the same for each of the calling methods while unwinding out of them (also easy: each call into the interpreter is a plain JavaScript call, which simply returns). After that, we do a setTimeout() for when we want to resume. At this point we have saved what we were doing, stopped, and scheduled an asynchronous callback, so we can return control to the browser’s event loop, letting it render and so forth.

    When the asynchronous callback fires sometime later, we reverse the first part of the process: We call into the interpreter for main(), jump to the right position in it, then continue to do so for the rest of the call stack – basically, recreating the call stack exactly as it was before. At this point we can resume execution in the interpreter, and it is as if we never left: synchronous execution has been turned asynchronous.
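    As a rough illustration of why pausing is easy in an interpreter (this is a deliberately tiny toy model, nothing like the real Emterpreter bytecode), consider a loop whose entire state lives in plain data, so stopping is just setting a flag and returning:

```javascript
// Toy model: an "interpreter" whose state (program counter, accumulator)
// is ordinary data, so it can stop mid-program and pick up again later.
function makeInterpreter(ops) {
  var state = { pc: 0, acc: 0, paused: false };
  function run() {
    state.paused = false;
    while (state.pc < ops.length && !state.paused) {
      var op = ops[state.pc++];
      if (op.type === "add") state.acc += op.value;
      else if (op.type === "sleep") state.paused = true; // unwind to caller
    }
  }
  return { run: run, state: state };
}

var interp = makeInterpreter([
  { type: "add", value: 1 },
  { type: "sleep" },          // a synchronous sleep becomes a pause point
  { type: "add", value: 2 },
]);
interp.run();  // runs until the sleep, then returns to the caller
// ...the real Emterpreter would setTimeout() here; we just resume directly:
interp.run();  // picks up right after the pause point; interp.state.acc is 3
```

Normal compiled JavaScript keeps its execution state on the JS engine's own stack, which cannot be saved and restored; keeping it in a typed array is what makes pause and resume possible.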

    That means that if D_Wipe() does a synchronous operation, it must be interpreted, and anything that can call it as well, and so forth, recursively. The good news is that often such code tends to be small and doesn’t need to be fast: it’s typically event-loop handling code, and not code actually doing hard work. Talking abstractly, it’s common to see callstacks like these in games:

    main() => MainLoop() => RunTasks() => PhysicsTask() => HardWork()


    main() => MainLoop() => RunTasks() => IOTask() => LoadFile()

    Assuming LoadFile() does a synchronous read of a file, it must be interpreted. As we mentioned above, this means everything that can be on the stack together with it must also be interpreted: main(), MainLoop(), RunTasks(), and IOTask() – but not any of the physics methods. In other words, if you never have physics and networking on the stack at the same time (a network event calling something that ends up calling physics, or a physics event that suddenly decides to do a network request), then you can run networking in the interpreter and physics at full speed. This is the case in Doom, and in other real-world codebases too (even in tricky ones – Em-DOSBox has recursion in a crucial method, yet a solution could be found).

    Here is a build of Doom with that optimization enabled – it only interprets what we absolutely must interpret. It runs at about the same speed as the original, optimized build and it also has the wipe effect fully working. Also, the wipe effect is nice and smooth, which it wasn’t before: even though the wipe method itself must be interpreted – because it calls sleep() – the rendering code it calls in between sleeping can run at full speed, as that rendering code is never on the stack while sleeping!

    To have synchronous code working properly while the project stays at full speed, it is crucial to run exactly the right methods in the interpreter. Here is a list of the methods we need in Doom (in the ‘whitelist’ option there) – only 15 out of 1,425, or ~1%. To help you find a list for your project, the Emterpreter provides both static and dynamic tools, see the docs for more details.


    Emscripten is often used to port code that contains synchronous portions, but long-running synchronous code is not possible on the Web. As described in this article, there are three approaches to handling that situation:

    • If the synchronous code just does file access, then preloading everything is a simple solution.
    • However, if there is a great amount of data, or you don’t know what you’ll need ahead of time, this might not work well. Another option is to refactor your code to be asynchronous.
    • If that isn’t an option either, perhaps because the refactoring is too extensive, then Emscripten now offers the Emterpreter option to run parts of your codebase in an interpreter which does support synchronous execution.

    Together, these approaches provide a range of options for handling synchronous code, and in particular the common case of synchronous filesystem access.

  10. What’s new in Web Audio


    It’s been a while since we said anything on Hacks about the Web Audio API. However, with Firefox 37/38 hitting our Developer Edition/Nightly browser channels, there are some interesting new features to talk about!

    This article presents you with some new Web Audio tricks to watch out for, such as the new StereoPannerNode, promise-based methods, and more.

    Simple stereo panning

    Firefox 37 introduces the StereoPannerNode interface, which allows you to add a stereo panning effect to an audio source simply and easily. It takes a single property: pan—an a-rate AudioParam that can accept numeric values between -1.0 (full left channel pan) and 1.0 (full right channel pan).

    But don’t we already have a PannerNode?

    You may have already used the older PannerNode interface, which allows you to position sounds in 3D. Connecting a sound source to a PannerNode causes it to be “spatialised”, meaning that it is placed into a 3D space in which you can then specify the position of the listener. The browser then figures out how the sources should sound, applying panning and Doppler shift effects, and other nice 3D “artifacts” if the sounds are moving over time, etc.:

    var audioContext = new AudioContext();
    var pannerNode = audioContext.createPanner();
    // The listener is 100 units to the right of the 3D origin
    audioContext.listener.setPosition(100, 0, 0);
    // The panner is in the 3D origin
    pannerNode.setPosition(0, 0, 0);

    This works well with WebGL-based games as both environments use similar units for positioning—an array of x, y, z values. So you could easily update the position, orientation, and velocity of the PannerNodes as you update the position of the entities in your 3D scene.

    But what if you are just building a conventional music player where the songs are already stereo tracks, and you actually don’t care at all about 3D? You have to go through a more complicated setup process than should be necessary, and it can also be computationally more expensive. With the increased usage of mobile devices, every operation you don’t perform is a bit more battery life you save, and users of your website will love you for it.

    Enter StereoPannerNode

    StereoPannerNode is a much better solution for simple stereo use cases, as described above. You don’t need to care about the listener’s position; you just need to connect source nodes that you want to spatialise to a StereoPannerNode instance, then use the pan parameter.

    To use a stereo panner, first create a StereoPannerNode using createStereoPanner(), and then connect it to your audio source. For example:

    var audioCtx = new AudioContext();
    // You can use any type of source
    var source = audioCtx.createMediaElementSource(myAudio);
    var panNode = audioCtx.createStereoPanner();
    source.connect(panNode);
    panNode.connect(audioCtx.destination);

    To change the amount of panning applied, you just update the pan property value:

    panNode.pan.value = 0.5; // places the sound halfway to the right
    panNode.pan.value = 0.0; // centers it
    panNode.pan.value = -0.5; // places the sound halfway to the left

    You can see a complete, working example online.

    Also, since pan is an a-rate AudioParam, you can design nice smooth curves using parameter automation, and the values will be updated per sample. Trying to make this kind of change over time by updating the value across multiple requestAnimationFrame calls would sound weird and unnatural. And you can’t automate PannerNode positions either.

    For example, this is how you could set up a panning transition from left to right that lasts two seconds:

    panNode.pan.setValueAtTime(-1, audioContext.currentTime);
    panNode.pan.linearRampToValueAtTime(1, audioContext.currentTime + 2);

    The browser will take care of updating the pan value for you. And now, as of recently, you can also visualise these curves using the Firefox Devtools Web Audio Editor.

    Detecting when StereoPannerNode is available

    It might be the case that the Web Audio implementation you’re using has not implemented this type of node yet. (At the time of this writing, it is supported in Firefox 37 and Chrome 42 only.) If you try to use StereoPannerNode in these cases, you’re going to generate a beautiful undefined is not a function error instead.

    To make sure StereoPannerNodes are available, just check whether the createStereoPanner() method exists in your AudioContext:

    if (audioContext.createStereoPanner) {
        // StereoPannerNode is supported!
    }

    If it doesn’t exist, you will need to fall back to the older PannerNode.
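    One possible fallback sketch (the helper name is ours, not a Web Audio API): prefer StereoPannerNode when it is available, and otherwise create a PannerNode set to the cheap equalpower model.

```javascript
// Hypothetical helper, for illustration: returns a StereoPannerNode where
// supported, or an equalpower PannerNode as a fallback.
function createPanNode(audioCtx) {
  if (audioCtx.createStereoPanner) {
    return audioCtx.createStereoPanner();
  }
  var panner = audioCtx.createPanner();
  panner.panningModel = "equalpower"; // the cheap algorithm, see below
  return panner;
}
```

The two node types expose different controls (pan versus a 3D position), so the rest of your code still needs to know which one it got.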

    Changes to the default PannerNode panning algorithm

    The default panning algorithm used by PannerNode used to be HRTF (Head-Related Transfer Function), a high-quality algorithm that renders its output using a convolution with human-based measurement data, so it’s very realistic. However, it is also very computationally expensive, requiring the processing to be run in additional threads to ensure smooth playback.

    Authors often don’t require such a high level of quality and just need something that is good enough, so the default PannerNode.panningModel is now equalpower, which is much cheaper to compute. If you want to go back to using the high-quality panning algorithm instead, you just need to change the panningModel:

    pannerNodeInstance.panningModel = 'HRTF';

    Incidentally, a PannerNode using panningModel = 'equalpower' results in the same algorithm that StereoPannerNode uses.
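    Because of that equivalence, shims for browsers without StereoPannerNode commonly approximate a pan value in [-1, 1] by mapping it onto a 2D position for an equalpower PannerNode. A hedged sketch of that mapping (the helper name is ours):

```javascript
// Illustrative mapping used by StereoPannerNode shims: place the source on
// an arc in front of the listener, so pan = -1 is fully left, 0 is centred,
// and 1 is fully right.
function panToPosition(pan) {
  return { x: pan, y: 0, z: 1 - Math.abs(pan) };
}
```

You would then apply it with something like `var p = panToPosition(0.5); pannerNodeInstance.setPosition(p.x, p.y, p.z);`.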

    Promise-based methods

    Another interesting feature that has been recently added to the Web Audio spec is Promise-based versions of certain methods. These are OfflineAudioContext.startRendering() and AudioContext.decodeAudioData().

    The below sections show how the method calls look with and without Promises.


    Let’s suppose we want to generate a minute of audio at 44100 Hz. We’d first create the context:

    var offlineAudioContext = new OfflineAudioContext(2, 44100 * 60, 44100);

    Classic code

    offlineAudioContext.oncomplete = function(e) {
        // rendering complete, results are at `e.renderedBuffer`
    };
    offlineAudioContext.startRendering();

    Promise-based code

    offlineAudioContext.startRendering().then(function(renderedBuffer) {
        // rendered results in `renderedBuffer`
    });


    Likewise, when decoding an audio track we would create the context first:

    var audioContext = new AudioContext();

    Classic code

    audioContext.decodeAudioData(data, function onSuccess(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    Promise-based code

    audioContext.decodeAudioData(data).then(function(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    In both cases the differences don’t seem major, but if you’re composing the results of promises sequentially or if you’re waiting on an event to complete before calling several other methods, promises are really helpful to avoid callback hell.

    Detecting support for Promise-based methods

    Again, you don’t want to get the dreaded undefined is not a function error message if the browser you’re running your code on doesn’t support these new versions of the methods.

    A quick way to check for support: look at the returned type of these calls. If they return a Promise, we’re in luck. If they don’t, we have to keep using the old methods:

    if ((new OfflineAudioContext(1, 1, 44100)).startRendering() != undefined) {
        // Promise-based startRendering is supported
    }

    if ((new AudioContext()).decodeAudioData(new ArrayBuffer(1)) != undefined) {
        // Promise-based decodeAudioData is supported
    }
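    If you want to write against the Promise style everywhere, one option is a small wrapper that yields a Promise in either case. A hedged sketch (the function name is ours; it passes the legacy callbacks and also adopts any returned Promise, so whichever path the browser supports will settle it):

```javascript
// Illustrative wrapper: callers always get a Promise, whether the browser
// implements the callback-based or the Promise-based decodeAudioData.
function decodeAudioDataAsPromise(audioCtx, data) {
  return new Promise(function (resolve, reject) {
    var maybe = audioCtx.decodeAudioData(data, resolve, reject);
    if (maybe && typeof maybe.then === "function") {
      maybe.then(resolve, reject); // Promise-based browsers
    }
  });
}
```

A Promise can only settle once, so it is harmless if both the callback and the returned Promise fire.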

    Audio workers

    Although the spec has not been finalised and they are not implemented in any browser yet, it is also worth mentioning Audio Workers, which (you’ve guessed it) are a specialised type of web worker for use by Web Audio code.

    Audio Workers will replace the almost-obsolete ScriptProcessorNode. Originally, ScriptProcessorNode was the way to run your own custom nodes inside the audio graph, but those nodes actually run on the main thread, causing all sorts of problems, from audio glitches (if the main thread becomes stalled) to unresponsive UI code (if the ScriptProcessorNodes aren’t fast enough to process their data).

    The biggest feature of audio workers is that they run in their own separate thread, just like any other Worker. This ensures that audio processing is prioritised and we avoid sound glitches, which human ears are very sensitive to.

    There is an ongoing discussion on the w3c web audio list; if you are interested in this and other Web Audio developments, you should go check it out.

    Exciting times for audio on the Web!