JavaScript Articles

  1. ES6 In Depth: Destructuring

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Editor’s note: An earlier version of today’s post, by Firefox Developer Tools engineer Nick Fitzgerald, originally appeared on Nick’s blog as Destructuring Assignment in ES6.

    What is destructuring assignment?

    Destructuring assignment allows you to assign the properties of an array or object to variables using syntax that looks similar to array or object literals. This syntax can be extremely terse, while still exhibiting more clarity than the traditional property access.

    Without destructuring assignment, you might access the first three items in an array like this:

    var first = someArray[0];
    var second = someArray[1];
    var third = someArray[2];
    

    With destructuring assignment, the equivalent code becomes more concise and readable:

    var [first, second, third] = someArray;
    

    SpiderMonkey (Firefox’s JavaScript engine) already has support for most of destructuring, but not quite all of it. Track SpiderMonkey’s destructuring (and general ES6) support in bug 694100.

    Destructuring arrays and iterables

    We already saw one example of destructuring assignment on an array above. The general form of the syntax is:

    [ variable1, variable2, ..., variableN ] = array;
    

    This will just assign variable1 through variableN to the corresponding item in the array. If you want to declare your variables at the same time, you can add a var, let, or const in front of the assignment:

    var [ variable1, variable2, ..., variableN ] = array;
    let [ variable1, variable2, ..., variableN ] = array;
    const [ variable1, variable2, ..., variableN ] = array;
    

    In fact, variable is a misnomer since you can nest patterns as deep as you would like:

    var [foo, [[bar], baz]] = [1, [[2], 3]];
    console.log(foo);
    // 1
    console.log(bar);
    // 2
    console.log(baz);
    // 3
    

    Furthermore, you can skip over items in the array being destructured:

    var [,,third] = ["foo", "bar", "baz"];
    console.log(third);
    // "baz"
    

    And you can capture all trailing items in an array with a “rest” pattern:

    var [head, ...tail] = [1, 2, 3, 4];
    console.log(tail);
    // [2, 3, 4]
    

    When you access items in the array that are out of bounds or don’t exist, you get the same result you would by indexing: undefined.

    console.log([][0]);
    // undefined
    
    var [missing] = [];
    console.log(missing);
    // undefined
    

    Note that destructuring assignment with an array assignment pattern also works for any iterable:

    function* fibs() {
      var a = 0;
      var b = 1;
      while (true) {
        yield a;
        [a, b] = [b, a + b];
      }
    }
    
    var [first, second, third, fourth, fifth, sixth] = fibs();
    console.log(sixth);
    // 5
    

    Destructuring objects

    Destructuring on objects lets you bind variables to different properties of an object. You specify the property being bound, followed by the variable you are binding its value to.

    var robotA = { name: "Bender" };
    var robotB = { name: "Flexo" };
    
    var { name: nameA } = robotA;
    var { name: nameB } = robotB;
    
    console.log(nameA);
    // "Bender"
    console.log(nameB);
    // "Flexo"
    

    There is a helpful syntactical shortcut for when the property and variable names are the same:

    var { foo, bar } = { foo: "lorem", bar: "ipsum" };
    console.log(foo);
    // "lorem"
    console.log(bar);
    // "ipsum"
    

    And just like destructuring on arrays, you can nest and combine further destructuring:

    var complicatedObj = {
      arrayProp: [
        "Zapp",
        { second: "Brannigan" }
      ]
    };
    
    var { arrayProp: [first, { second }] } = complicatedObj;
    
    console.log(first);
    // "Zapp"
    console.log(second);
    // "Brannigan"
    

    When you destructure on properties that are not defined, you get undefined:

    var { missing } = {};
    console.log(missing);
    // undefined
    

    One potential gotcha you should be aware of is when you are using destructuring on an object to assign variables, but not to declare them (when there is no let, const, or var):

    { blowUp } = { blowUp: 10 };
    // Syntax error
    

    This happens because the JavaScript grammar tells the engine to parse any statement starting with { as a block statement (for example, { console } is a valid block statement). The solution is to wrap the whole expression in parentheses:

    ({ safe } = {});
    // No errors
    

    Destructuring values that are not an object, array, or iterable

    When you try to use destructuring on null or undefined, you get a type error:

    var [blowUp] = null;
    // TypeError: null has no properties
    

    However, you can destructure on other primitive types such as booleans, numbers, and strings, and get undefined:

    var { wtf } = NaN;
    console.log(wtf);
    // undefined
    

    This may be unexpected, but upon further examination the reason turns out to be simple. When using an object assignment pattern, the value being destructured is required to be coercible to an Object. Most types can be converted to an object, but null and undefined cannot be. When using an array assignment pattern, the value must have an iterator.
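
    A quick sketch illustrating both rules (not from the original post):

    var { length } = "banana";   // object pattern: the string is coerced to an object
    console.log(length);
    // 6
    
    var [nope] = 7;
    // TypeError: numbers have no iterator (exact message varies by engine)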

    Default values

    You can also provide default values for when the property you are destructuring is not defined:

    var [missing = true] = [];
    console.log(missing);
    // true
    
    var { message: msg = "Something went wrong" } = {};
    console.log(msg);
    // "Something went wrong"
    
    var { x = 3 } = {};
    console.log(x);
    // 3
    

    (Editor’s note: This feature is currently implemented in Firefox only for the first two cases, not the third. See bug 932080.)

    Practical applications of destructuring

    Function parameter definitions

    As developers, we can often expose more ergonomic APIs by accepting a single object with multiple properties as a parameter instead of forcing our API consumers to remember the order of many individual parameters. We can use destructuring to avoid repeating this single parameter object whenever we want to reference one of its properties:

    function removeBreakpoint({ url, line, column }) {
      // ...
    }
    

    This is a simplified snippet of real world code from the Firefox DevTools JavaScript debugger (which is also implemented in JavaScript—yo dawg). We have found this pattern particularly pleasing.

    Configuration object parameters

    Expanding on the previous example, we can also give default values to the properties of the objects we are destructuring. This is particularly helpful when we have an object that is meant to provide configuration and many of the object’s properties already have sensible defaults. For example, jQuery’s ajax function takes a configuration object as its second parameter, and could be rewritten like this:

    jQuery.ajax = function (url, {
      async = true,
      beforeSend = noop,
      cache = true,
      complete = noop,
      crossDomain = false,
      global = true,
      // ... more config
    }) {
      // ... do stuff
    };
    

    This avoids repeating var foo = config.foo || theDefaultFoo; for each property of the configuration object.
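
    For example, with the rewritten function above (and assuming noop and the elided options are defined), a caller could override only the settings it cares about and let everything else fall back to the defaults. The URL here is purely illustrative:

    jQuery.ajax("/api/items", {
      cache: false,
      crossDomain: true
      // async, beforeSend, complete, global, ... keep their defaults
    });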

    (Editor’s note: Unfortunately, default values within object shorthand syntax still aren’t implemented in Firefox. I know, we’ve had several paragraphs to work on it since that earlier note. See bug 932080 for the latest updates.)

    With the ES6 iteration protocol

    ECMAScript 6 also defines an iteration protocol, which we talked about earlier in this series. When you iterate over Maps (an ES6 addition to the standard library), you get a series of [key, value] pairs. We can destructure this pair to get easy access to both the key and the value:

    var map = new Map();
    map.set(window, "the global");
    map.set(document, "the document");
    
    for (var [key, value] of map) {
      console.log(key + " is " + value);
    }
    // "[object Window] is the global"
    // "[object HTMLDocument] is the document"
    

    Iterate over only the keys:

    for (var [key] of map) {
      // ...
    }
    

    Or iterate over only the values:

    for (var [,value] of map) {
      // ...
    }
    

    Multiple return values

    Although multiple return values aren’t baked into the language proper, they don’t need to be because you can return an array and destructure the result:

    function returnMultipleValues() {
      return [1, 2];
    }
    var [foo, bar] = returnMultipleValues();
    

    Alternatively, you can use an object as the container and name the returned values:

    function returnMultipleValues() {
      return {
        foo: 1,
        bar: 2
      };
    }
    var { foo, bar } = returnMultipleValues();
    

    Both of these patterns end up much better than holding onto the temporary container:

    function returnMultipleValues() {
      return {
        foo: 1,
        bar: 2
      };
    }
    var temp = returnMultipleValues();
    var foo = temp.foo;
    var bar = temp.bar;
    

    Or using continuation passing style:

    function returnMultipleValues(k) {
      k(1, 2);
    }
    returnMultipleValues((foo, bar) => ...);
    

    Importing names from a CommonJS module

    Not using ES6 modules yet? Still using CommonJS modules? No problem! When importing some CommonJS module X, it is fairly common that module X exports more functions than you actually intend to use. With destructuring, you can be explicit about which parts of a given module you’d like to use and avoid cluttering your namespace:

    const { SourceMapConsumer, SourceNode } = require("source-map");
    

    (And if you do use ES6 modules, you know that a similar syntax is available in import declarations.)
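
    For comparison, the equivalent ES6 import declaration would look like this:

    import { SourceMapConsumer, SourceNode } from "source-map";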

    Conclusion

    So, as you can see, destructuring is useful in many individually small cases. At Mozilla we’ve had a lot of experience with it: destructuring was first implemented for JavaScript by Brendan Eich in 2006, again drawing from Python. It shipped in Firefox 2. So we know that destructuring sneaks into your everyday use of the language, quietly making your code a bit shorter and cleaner all over the place.

    Five weeks ago, we said that ES6 would change the way you write JavaScript. It is this sort of feature we had particularly in mind: simple improvements that can be learned one at a time. Taken together, they will end up affecting every project you work on. Revolution by way of evolution.

    Updating destructuring to comply with ES6 has been a team effort. Special thanks to Tooru Fujisawa (arai) and Arpad Borsos (Swatinem) for their excellent contributions.

    Support for destructuring is under development for Chrome, and other browsers will undoubtedly add support in time. For now, you’ll need to use Babel or Traceur if you want to use destructuring on the Web.


     

    Thanks again to Nick Fitzgerald for this week’s post.

    Next week, we’ll cover a feature that is nothing more or less than a slightly shorter way to write something JS already has—something that has been one of the fundamental building blocks of the language all along. Will you care? Is slightly shorter syntax something you can get excited about? I confidently predict the answer is yes, but don’t take my word for it. Join us next week and find out, as we look at ES6 arrow functions in depth.

    Jason Orendorff
    ES6 In Depth editor

  2. ES6 In Depth: Rest parameters and defaults

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Today’s post is about two features that make JavaScript’s function syntax more expressive: rest parameters and parameter defaults.

    Rest parameters

    A common need when creating an API is a variadic function, a function that accepts any number of arguments. For example, the String.prototype.concat method takes any number of string arguments. With rest parameters, ES6 provides a new way to write variadic functions.

    To demonstrate, let’s write a simple variadic function containsAll that checks whether a string contains a number of substrings. For example, containsAll("banana", "b", "nan") would return true, and containsAll("banana", "c", "nan") would return false.

    Here is the traditional way to implement this function:

    function containsAll(haystack) {
      for (var i = 1; i < arguments.length; i++) {
        var needle = arguments[i];
        if (haystack.indexOf(needle) === -1) {
          return false;
        }
      }
      return true;
    }
    

    This implementation uses the magical arguments object, an array-like object containing the parameters passed to the function. This code certainly does what we want, but its readability is not optimal. The function parameter list contains only one parameter, haystack, so it’s impossible to tell at a glance that the function actually takes multiple arguments. Additionally, we must be careful to start iterating through arguments at index 1, not 0, since arguments[0] corresponds to the haystack argument. If we ever wanted to add another parameter before or after haystack, we would have to remember to update the for loop.

    Rest parameters address both of these concerns. Here is a natural ES6 implementation of containsAll using a rest parameter:

    function containsAll(haystack, ...needles) {
      for (var needle of needles) {
        if (haystack.indexOf(needle) === -1) {
          return false;
        }
      }
      return true;
    }
    

    This version of the function has the same behavior as the first one but contains the special ...needles syntax. Let’s see how calling this function works for the invocation containsAll("banana", "b", "nan"). The parameter haystack is filled as usual with the first argument passed, namely "banana". The ellipsis before needles indicates it is a rest parameter. All the remaining arguments are put into an array and assigned to the variable needles. For our example call, needles is set to ["b", "nan"]. Function execution then continues as normal. (Notice we have used the ES6 for-of looping construct.)

    Only the last parameter of a function may be marked as a rest parameter. In a call, the parameters before the rest parameter are filled as usual. Any “extra” arguments are put into an array and assigned to the rest parameter. If there are no extra arguments, the rest parameter will simply be an empty array; the rest parameter will never be undefined.
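
    A quick check of these rules with the containsAll function defined above (illustrative):

    containsAll("banana");               // true (needles is the empty array [])
    containsAll("banana", "b", "nan");   // true (needles is ["b", "nan"])
    containsAll("banana", "c", "nan");   // false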

    Default parameters

    Often, a function doesn’t need to have all its possible parameters passed by callers, and there are sensible defaults that could be used for parameters that are not passed. JavaScript has always had an inflexible form of default parameters; parameters for which no value is passed default to undefined. ES6 introduces a way to specify arbitrary parameter defaults.

    Here’s an example. (The backticks signify template strings, which were discussed last week.)

    function animalSentence(animals2="tigers", animals3="bears") {
        return `Lions and ${animals2} and ${animals3}! Oh my!`;
    }
    

    For each parameter, the part after the = is an expression specifying the default value of the parameter if a caller does not pass it. So, animalSentence() returns "Lions and tigers and bears! Oh my!", animalSentence("elephants") returns "Lions and elephants and bears! Oh my!", and animalSentence("elephants", "whales") returns "Lions and elephants and whales! Oh my!".

    There are several subtleties related to default parameters:

    • Unlike Python, default value expressions are evaluated at function call time from left to right. This also means that default expressions can use the values of previously-filled parameters. For example, we could make our animal sentence function more fancy as follows:

      function animalSentenceFancy(animals2="tigers",
          animals3=(animals2 == "bears") ? "sealions" : "bears")
      {
        return `Lions and ${animals2} and ${animals3}! Oh my!`;
      }
      

      Then, animalSentenceFancy("bears") returns "Lions and bears and sealions! Oh my!".

    • Passing undefined is considered to be equivalent to not passing anything at all. Thus, animalSentence(undefined, "unicorns") returns "Lions and tigers and unicorns! Oh my!".

    • A parameter without a default implicitly defaults to undefined, so

      function myFunc(a=42, b) {...}
      

      is allowed and equivalent to

      function myFunc(a=42, b=undefined) {...}
      

    Shutting down arguments

    We’ve now seen that rest parameters and defaults can replace usage of the arguments object, and removing arguments usually makes the code nicer to read. In addition to harming readability, the magic of the arguments object notoriously causes headaches for optimizing JavaScript VMs.

    It is hoped that rest parameters and defaults can completely supersede arguments. As a first step towards this, functions that use a rest parameter or defaults are forbidden from using the arguments object. Support for arguments won’t be removed soon, if ever, but it’s now preferable to avoid arguments with rest parameters and defaults when possible.

    Browser support

    Firefox has had support for rest parameters and defaults since version 15.

    Unfortunately, no other released browser supports rest parameters or defaults yet. V8 recently added experimental support for rest parameters, and there is an open V8 issue for implementing defaults. JSC also has open issues for rest parameters and defaults.

    The Babel and Traceur compilers both support default parameters, so it is possible to start using them today.

    Conclusion

    Although technically not allowing any new behavior, rest parameters and parameter defaults can make some JavaScript function declarations more expressive and readable. Happy calling!


    Note: Thanks to Benjamin Peterson for implementing these features in Firefox, for all his contributions to the project, and of course for this week’s post.

    Next week, we’ll introduce another simple, elegant, practical, everyday ES6 feature. It takes the familiar syntax you already use to write arrays and objects, and turns it on its head, producing a new, concise way to take arrays and objects apart. What does that mean? Why would you want to take an object apart? Join us next Thursday to find out, as Mozilla engineer Nick Fitzgerald presents ES6 destructuring in depth.

    Jason Orendorff
    ES6 In Depth editor

  3. ES6 In Depth: Template strings

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Last week I promised a change of pace. After iterators and generators, we would tackle something easy, I said. Something that won’t melt your brain, I said. We’ll see whether I can keep that promise in the end.

    For now, let’s start with something simple.

    Backtick basics

    ES6 introduces a new kind of string literal syntax called template strings. They look like ordinary strings, except using the backtick character ` rather than the usual quote marks ' or ". In the simplest case, they really are just strings:

    context.fillText(`Ceci n'est pas une chaîne.`, x, y);
    

    But there’s a reason these are called “template strings” and not “boring plain old strings that don’t do anything special, only with backticks”. Template strings bring simple string interpolation to JavaScript. That is, they’re a nice-looking, convenient way to plug JavaScript values into a string.

    There are one million ways to use this, but the one that warms my heart is the humble error message:

    function authorize(user, action) {
      if (!user.hasPrivilege(action)) {
        throw new Error(
          `User ${user.name} is not authorized to do ${action}.`);
      }
    }
    

    In this example, ${user.name} and ${action} are called template substitutions. JavaScript will plug the values user.name and action into the resulting string. This could generate a message like User jorendorff is not authorized to do hockey. (Which is true. I don’t have a hockey license.)

    So far, this is just a slightly nicer syntax for the + operator, and the details are what you would expect:

    • The code in a template substitution can be any JavaScript expression, so function calls, arithmetic, and so on are allowed. (If you really want to, you can even nest a template string inside another template string, which I call template inception.)

    • If either value is not a string, it’ll be converted to a string using the usual rules. For example, if action is an object, its .toString() method will be called.

    • If you need to write a backtick inside a template string, you must escape it with a backslash: `\`` is the same as "`".

    • Likewise, if you need to include the two characters ${ in a template string, I don’t want to know what you’re up to, but you can escape either character with a backslash: `write \${ or $\{`.
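
    A quick, illustrative sketch of a few of these details in action:

    var action = { toString: function () { return "hockey"; } };
    
    console.log(`1 + 1 = ${1 + 1}`);            // any expression works: "1 + 1 = 2"
    console.log(`You may not do ${action}.`);   // non-strings are converted: "You may not do hockey."
    console.log(`Escaped: \` and \${`);         // prints: Escaped: ` and ${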

    Unlike ordinary strings, template strings can cover multiple lines:

    $("#warning").html(`
      <h1>Watch out!</h1>
      <p>Unauthorized hockeying can result in penalties
      of up to ${maxPenalty} minutes.</p>
    `);
    

    All whitespace in the template string, including newlines and indentation, is included verbatim in the output.

    OK. Because of my promise last week, I feel responsible for your brain health. So a quick warning: it starts getting a little intense from here. You can stop reading now, maybe go have a cup of coffee and enjoy your intact, unmelted brain. Seriously, there’s no shame in turning back. Did Lopes Gonçalves exhaustively explore the entire Southern Hemisphere after proving that ships can cross the equator without being crushed by sea monsters or falling off the edge of the earth? No. He turned back, went home, and had a nice lunch. You like lunch, right?

    Backtick the future

    Let’s talk about a few things template strings don’t do.

    • They don’t automatically escape special characters for you. To avoid cross-site scripting vulnerabilities, you’ll still have to treat untrusted data with care, just as if you were concatenating ordinary strings.

    • It’s not obvious how they would interact with an internationalization library (a library for helping your code speak different languages to different users). Template strings don’t handle language-specific formatting of numbers and dates, much less plurals.

    • They aren’t a replacement for template libraries, like Mustache or Nunjucks.

      Template strings don’t have any built-in syntax for looping—building the rows of an HTML table from an array, for example—or even conditionals. (Yes, you could use template inception for this, but to me it seems like the sort of thing you’d do as a joke.)

    ES6 provides one more twist on template strings that gives JS developers and library designers the power to address these limitations and more. The feature is called tagged templates.

    The syntax for tagged templates is simple. They’re just template strings with an extra tag before the opening backtick. For our first example, the tag will be SaferHTML, and we’re going to use this tag to try to address the first limitation listed above: automatically escaping special characters.

    Note that SaferHTML is not something provided by the ES6 standard library. We’re going to implement it ourselves below.

    var message =
      SaferHTML`<p>${bonk.sender} has sent you a bonk.</p>`;
    

    The tag here is the single identifier SaferHTML, but a tag can also be a property, like SaferHTML.escape, or even a method call, like SaferHTML.escape({unicodeControlCharacters: false}). (To be precise, any ES6 MemberExpression or CallExpression can serve as a tag.)

    We saw that untagged template strings are shorthand for simple string concatenation. Tagged templates are shorthand for something else entirely: a function call.

    The code above is equivalent to:

    var message =
      SaferHTML(templateData, bonk.sender);
    

    where templateData is an immutable array of all the string parts of the template, created for us by the JS engine. Here the array would have two elements, because there are two string parts in the tagged template, separated by a substitution. So templateData will be like Object.freeze(["<p>", " has sent you a bonk.</p>"]).

    (There is actually one more property present on templateData. We won’t use it in this article, but I’ll mention it for completeness: templateData.raw is another array containing all the string parts in the tagged template, but this time exactly as they looked in the source code—with escape sequences like \n left intact, rather than being turned into newlines and so on. The standard tag String.raw uses these raw strings.)
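
    A quick illustration of the difference between the “cooked” and raw strings (a sketch):

    var cooked = `one\ntwo`;            // contains a real newline
    var raw = String.raw`one\ntwo`;     // contains a backslash and an "n"
    console.log(cooked.length);
    // 7
    console.log(raw.length);
    // 8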

    This gives the SaferHTML function free rein to interpret both the string and the substitutions in a million possible ways.

    Before reading on, maybe you’d like to try to puzzle out just what SaferHTML should do, and then try your hand at implementing it. After all, it’s just a function. You can test your work in the Firefox developer console.

    Here is one possible answer (also available as a gist).

    function SaferHTML(templateData) {
      var s = templateData[0];
      for (var i = 1; i < arguments.length; i++) {
        var arg = String(arguments[i]);
    
        // Escape special characters in the substitution.
        s += arg.replace(/&/g, "&amp;")
                .replace(/</g, "&lt;")
                .replace(/>/g, "&gt;");
    
        // Don't escape special characters in the template.
        s += templateData[i];
      }
      return s;
    }
    

    With this definition, the tagged template SaferHTML`<p>${bonk.sender} has sent you a bonk.</p>` might expand to the string "<p>ES6&lt;3er has sent you a bonk.</p>". Your users are safe even if a maliciously named user, like Hacker Steve <script>alert('xss');</script>, sends them a bonk. Whatever that means.

    (Incidentally, if the way that function uses the arguments object strikes you as a bit clunky, drop by next week. There’s another new feature in ES6 that I think you’ll like.)

    A single example isn’t enough to illustrate the flexibility of tagged templates. Let’s revisit our earlier list of template string limitations to see what else you could do.

    • Template strings don’t auto-escape special characters. But as we’ve seen, with tagged templates, you can fix that problem yourself with a tag.

      In fact, you can do a lot better than that.

      From a security perspective, my SaferHTML function is pretty weak. Different places in HTML have different special characters that need to be escaped in different ways; SaferHTML does not escape them all. But with some effort, you could write a much smarter SaferHTML function that actually parses the bits of HTML in the strings in templateData, so that it knows which substitutions are in plain HTML; which ones are inside element attributes, and thus need to escape ' and "; which ones are in URL query strings, and thus need URL-escaping rather than HTML-escaping; and so on. It could perform just the right escaping for each substitution.

      Does this sound far-fetched because HTML parsing is slow? Fortunately, the string parts of a tagged template do not change when the template is evaluated again. SaferHTML could cache the results of all this parsing, to speed up later calls. (The cache could be a WeakMap, another ES6 feature that we’ll discuss in a future post.)

    • Template strings don’t have built-in internationalization features. But with tags, we could add them. A blog post by Jack Hsu shows what the first steps down that road might look like. Just one example, as a teaser:

      i18n`Hello ${name}, you have ${amount}:c(CAD) in your bank account.`
      // => Hallo Bob, Sie haben 1.234,56 $CA auf Ihrem Bankkonto.
      

      Note how in this example, name and amount are JavaScript, but there’s a different bit of unfamiliar code, that :c(CAD), which Jack places in the string part of the template. JavaScript is of course handled by the JavaScript engine; the string parts are handled by Jack’s i18n tag. Users would learn from the i18n documentation that :c(CAD) means amount is an amount of currency, denominated in Canadian dollars.

      This is what tagged templates are about.

    • Template strings are no replacement for Mustache and Nunjucks, partly because they don’t have built-in syntax for loops or conditionals. But now we’re starting to see how you would go about fixing this, right? If JS doesn’t provide the feature, write a tag that provides it.

      // Purely hypothetical template language based on
      // ES6 tagged templates.
      var libraryHtml = hashTemplate`
        <ul>
          #for book in ${myBooks}
            <li><i>#{book.title}</i> by #{book.author}</li>
          #end
        </ul>
      `;
      

    The flexibility doesn’t stop there. Note that the arguments to a tag function are not automatically converted to strings. They can be anything. The same goes for the return value. Tagged templates are not even necessarily strings! You could use custom tags to create regular expressions, DOM trees, images, promises representing whole asynchronous processes, JS data structures, GL shaders...

    Tagged templates invite library designers to create powerful domain-specific languages. These languages might look nothing like JS, but they can still embed in JS seamlessly and interact intelligently with the rest of the language. Offhand, I can’t think of anything quite like it in any other language. I don’t know where this feature will take us. The possibilities are exciting.

    When can I start using this?

    On the server, ES6 template strings are supported in io.js today.

    In browsers, Firefox 34+ supports template strings. They were implemented by Guptha Rajagopal as an intern project last summer. Template strings are also supported in Chrome 41+, but not in IE or Safari. For now, you’ll need to use Babel or Traceur if you want to use template strings on the web. You can also use them right now in TypeScript!

    Wait—what about Markdown?

    Hmm?

    Oh. ...Good question.

    (This section isn’t really about JavaScript. If you don’t use Markdown, you can skip it.)

    With template strings, both Markdown and JavaScript now use the ` character to mean something special. In fact, in Markdown, it’s the delimiter for code snippets in the middle of inline text.

    This brings up a bit of a problem! If you write this in a Markdown document:

    To display a message, write `alert(`hello world!`)`.
    

    it’ll be displayed like this:

    To display a message, write alert(hello world!).

    Note that there are no backticks in the output. Markdown interpreted all four backticks as code delimiters and replaced them with HTML tags.

    To avoid this, we turn to a little-known feature that’s been in Markdown from the beginning: you can use multiple backticks as code delimiters, like this:

    To display a message, write ``alert(`hello world!`)``.
    

    This Gist has the details, and it’s written in Markdown so you can look at the source.

    Up next

    Next week, we’ll look at two features that programmers have enjoyed in other languages for decades: one for people who like to avoid an argument where possible, and one for people who like to have lots of arguments. I’m talking about function arguments, of course. Both features are really for all of us.

    We’ll see these features through the eyes of the person who implemented them in Firefox. So please join us next week, as guest author Benjamin Peterson presents ES6 default parameters and rest parameters in depth.

  4. ES6 In Depth: Generators

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    I’m excited about today’s post. Today, we’re going to discuss the most magical feature in ES6.

    What do I mean by “magical”? For starters, this feature is so different from things that already existed in JS that it may seem completely arcane at first. In a sense, it turns the normal behavior of the language inside out! If that’s not magic, I don’t know what is.

    Not only that: this feature’s power to simplify code and straighten out “callback hell” borders on the supernatural.

    Am I laying it on a bit thick? Let’s dive in and you can judge for yourself.

    Introducing ES6 generators

    What are generators?

    Let’s start by looking at one.

    function* quips(name) {
      yield "hello " + name + "!";
      yield "i hope you are enjoying the blog posts";
      if (name.startsWith("X")) {
        yield "it's cool how your name starts with X, " + name;
      }
      yield "see you later!";
    }
    

    This is some code for a talking cat, possibly the most important kind of application on the Internet today. (Go ahead, click the link, play with the cat. When you’re thoroughly confused, come back here for the explanation.)

    It looks sort of like a function, right? This is called a generator-function and it has a lot in common with functions. But you can see two differences right away:

    • Regular functions start with function. Generator-functions start with function*.

    • Inside a generator-function, yield is a keyword, with syntax rather like return. The difference is that while a function (even a generator-function) can only return once, a generator-function can yield any number of times. The yield expression suspends execution of the generator so it can be resumed again later.

    So that’s it, that’s the big difference between regular functions and generator-functions. Regular functions can’t pause themselves. Generator-functions can.

    What generators do

    What happens when you call the quips() generator-function?

    > var iter = quips("jorendorff");
      [object Generator]
    > iter.next()
      { value: "hello jorendorff!", done: false }
    > iter.next()
      { value: "i hope you are enjoying the blog posts", done: false }
    > iter.next()
      { value: "see you later!", done: false }
    > iter.next()
      { value: undefined, done: true }
    

    You’re probably very used to ordinary functions and how they behave. When you call them, they start running right away, and they run until they either return or throw. All this is second nature to any JS programmer.

    Calling a generator looks just the same: quips("jorendorff"). But when you call a generator, it doesn’t start running yet. Instead, it returns a paused Generator object (called iter in the example above). You can think of this Generator object as a function call, frozen in time. Specifically, it’s frozen right at the top of the generator-function, just before running its first line of code.

    Each time you call the Generator object’s .next() method, the function call thaws itself out and runs until it reaches the next yield expression.

    That’s why each time we called iter.next() above, we got a different string value. Those are the values produced by the yield expressions in the body of quips().

    On the last iter.next() call, we finally reached the end of the generator-function, so the .done field of the result is true. Reaching the end of a function is just like returning undefined, and that’s why the .value field of the result is undefined.

    Now might be a good time to go back to the talking cat demo page and really play around with the code. Try putting a yield inside a loop. What happens?

    In technical terms, each time a generator yields, its stack frame—the local variables, arguments, temporary values, and the current position of execution within the generator body—is removed from the stack. However, the Generator object keeps a reference to (or copy of) this stack frame, so that a later .next() call can reactivate it and continue execution.

    It’s worth pointing out that generators are not threads. In languages with threads, multiple pieces of code can run at the same time, usually leading to race conditions, nondeterminism, and sweet sweet performance. Generators are not like that at all. When a generator runs, it runs in the same thread as the caller. The order of execution is sequential and deterministic, and never concurrent. Unlike system threads, a generator is only ever suspended at points marked by yield in its body.

    All right. We know what generators are. We’ve seen a generator run, pause itself, then resume execution. Now for the big question. How could this weird ability possibly be useful?

    Generators are iterators

    Last week, we saw that ES6 iterators are not just a single built-in class. They’re an extension point of the language. You can create your own iterators just by implementing two methods: [Symbol.iterator]() and .next().

    But implementing an interface is always at least a little work. Let’s see what an iterator implementation looks like in practice. As an example, let’s make a simple range iterator that simply counts up from one number to another, like an old-fashioned C for (;;) loop.

    // This should "ding" three times
    for (var value of range(0, 3)) {
      alert("Ding! at floor #" + value);
    }
    

    Here’s one solution, using an ES6 class. (If the class syntax is not completely clear, don’t worry—we’ll cover it in a future blog post.)

    class RangeIterator {
      constructor(start, stop) {
        this.value = start;
        this.stop = stop;
      }
    
      [Symbol.iterator]() { return this; }
    
      next() {
        var value = this.value;
        if (value < this.stop) {
          this.value++;
          return {done: false, value: value};
        } else {
          return {done: true, value: undefined};
        }
      }
    }
    
    // Return a new iterator that counts up from 'start' to 'stop'.
    function range(start, stop) {
      return new RangeIterator(start, stop);
    }
    

    See this code in action.

    This is what implementing an iterator is like in Java or Swift. It’s not so bad. But it’s not exactly trivial either. Are there any bugs in this code? It’s not easy to say. It looks nothing like the original for (;;) loop we are trying to emulate here: the iterator protocol forces us to dismantle the loop.

    At this point you might be feeling a little lukewarm toward iterators. They may be great to use, but they seem hard to implement.

    It probably wouldn’t occur to you to suggest that we introduce a wild, mindbending new control flow structure to the JS language just to make iterators easier to build. But since we do have generators, can we use them here? Let’s try it:

    function* range(start, stop) {
      for (var i = start; i < stop; i++)
        yield i;
    }
    

    See this code in action.

    The above 4-line generator is a drop-in replacement for the previous 23-line implementation of range(), including the entire RangeIterator class. This is possible because generators are iterators. All generators have a built-in implementation of .next() and [Symbol.iterator](). You just write the looping behavior.

    Implementing iterators without generators is like being forced to write a long email entirely in the passive voice. When simply saying what you mean is not an option, what you end up saying instead can become quite convoluted. RangeIterator is long and weird because it has to describe the functionality of a loop without using loop syntax. Generators are the answer.

    How else can we use the ability of generators to act as iterators?

    • Making any object iterable. Just write a generator-function that traverses this, yielding each value as it goes. Then install that generator-function as the [Symbol.iterator] method of the object. (A short sketch of this appears just after this list.)

    • Simplifying array-building functions. Suppose you have a function that returns an array of results each time it’s called, like this one:

      // Divide the one-dimensional array 'icons'
      // into arrays of length 'rowLength'.
      function splitIntoRows(icons, rowLength) {
        var rows = [];
        for (var i = 0; i < icons.length; i += rowLength) {
          rows.push(icons.slice(i, i + rowLength));
        }
        return rows;
      }
      

      Generators make this kind of code a bit shorter:

      function* splitIntoRows(icons, rowLength) {
        for (var i = 0; i < icons.length; i += rowLength) {
          yield icons.slice(i, i + rowLength);
        }
      }
      

      The only difference in behavior is that instead of computing all the results at once and returning an array of them, this returns an iterator, and the results are computed one by one, on demand.

    • Results of unusual size. You can’t build an infinite array. But you can return a generator that generates an endless sequence, and each caller can draw from it however many values they need.

    • Refactoring complex loops. Do you have a huge ugly function? Would you like to break it into two simpler parts? Generators are a new knife to add to your refactoring toolkit. When you’re facing a complicated loop, you can factor out the part of the code that produces data, turning it into a separate generator-function. Then change the loop to say for (var data of myNewGenerator(args)).

    • Tools for working with iterables. ES6 does not provide an extensive library for filtering, mapping, and generally hacking on arbitrary iterable data sets. But generators are great for building the tools you need with just a few lines of code.

      For example, suppose you need an equivalent of Array.prototype.filter that works on DOM NodeLists, not just Arrays. Piece of cake:

      function* filter(test, iterable) {
        for (var item of iterable) {
          if (test(item))
            yield item;
        }
      }
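
    Returning to the first point above, here is a minimal sketch of making a plain object iterable by installing a generator-function as its [Symbol.iterator] method (the deck object is purely illustrative):

    var deck = {
      suits: ["clubs", "diamonds", "hearts", "spades"],
      ranks: ["A", "K", "Q", "J"],
    
      // A generator-function installed under the Symbol.iterator key makes
      // for-of (and anything else that consumes iterables) work on this object.
      [Symbol.iterator]: function* () {
        for (var suit of this.suits) {
          for (var rank of this.ranks) {
            yield rank + " of " + suit;
          }
        }
      }
    };
    
    for (var card of deck) {
      console.log(card);
      // "A of clubs", "K of clubs", ... "J of spades"
    }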
      

    So are generators useful? Sure. They are an astonishingly easy way to implement custom iterators, and iterators are the new standard for data and loops throughout ES6.

    But that’s not all generators can do. It may not even turn out to be the most important thing they do.

    Generators and asynchronous code

    Here is some JS code I wrote a while back.

              };
            })
          });
        });
      });
    });
    

    Maybe you’ve seen something like this in your own code. Asynchronous APIs typically require a callback, which means writing an extra anonymous function every time you do something. So if you have a bit of code that does three things, rather than three lines of code, you’re looking at three indentation levels of code.

    Here is some more JS code I’ve written:

    }).on('close', function () {
      done(undefined, undefined);
    }).on('error', function (error) {
      done(error);
    });
    

    Asynchronous APIs have error-handling conventions rather than exceptions. Different APIs have different conventions. In most of them, errors are silently dropped by default. In some of them, even ordinary successful completion is dropped by default.

    Until now, these problems have simply been the price we pay for asynchronous programming. We have come to accept that asynchronous code just doesn’t look as nice and simple as the corresponding synchronous code.

    Generators offer new hope that it doesn’t have to be this way.

    Q.async() is an experimental attempt at using generators with promises to produce async code that resembles the corresponding synchronous code. For example:

    // Synchronous code to make some noise.
    function makeNoise() {
      shake();
      rattle();
      roll();
    }
    
    // Asynchronous code to make some noise.
    // Returns a Promise object that becomes resolved
    // when we're done making noise.
    function makeNoise_async() {
      return Q.async(function* () {
        yield shake_async();
        yield rattle_async();
        yield roll_async();
      });
    }
    

    The main difference is that the asynchronous version must add the yield keyword each place where it calls an asynchronous function.

    Adding a wrinkle like an if statement or a try/catch block in the Q.async version is exactly like adding it to the plain synchronous version. Compared to other ways of writing async code, this feels a lot less like learning a whole new language.

    If you’ve gotten this far, you might enjoy James Long’s very detailed post on this topic.

    So generators are pointing the way to a new asynchronous programming model that seems better suited to human brains. This work is ongoing. Among other things, better syntax might help. A proposal for async functions, building on both promises and generators, and taking inspiration from similar features in C#, is on the table for ES7.

    When can I use these crazy things?

    On the server, you can use ES6 generators today in io.js (and in Node if you use the --harmony command-line option).

    In the browser, only Firefox 27+ and Chrome 39+ support ES6 generators so far. To use generators on the web today, you’ll need to use Babel or Traceur to translate your ES6 code to Web-friendly ES5.

    A few shout-outs to deserving parties: Generators were first implemented in JS by Brendan Eich; his design closely followed Python generators which were inspired by Icon. They shipped in Firefox 2.0 back in 2006. The road to standardization was bumpy, and the syntax and behavior changed a bit along the way. ES6 generators were implemented in both Firefox and Chrome by compiler hacker Andy Wingo. This work was sponsored by Bloomberg.

    yield;

    There is more to say about generators. We didn’t cover the .throw() and .return() methods, the optional argument to .next(), or the yield* expression syntax. But I think this post is long and bewildering enough for now. Like generators themselves, we should pause, and take up the rest another time.

    But next week, let’s change gears a little. We’ve tackled two deep topics in a row here. Wouldn’t it be great to talk about an ES6 feature that won’t change your life? Something simple and obviously useful? Something that will make you smile? ES6 has a few of those too.

    Coming up: a feature that will plug right in to the kind of code you write every day. Please join us next week for a look at ES6 template strings in depth.

  5. ES6 In Depth: Iterators and the for-of loop

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    How do you loop over the elements of an array? When JavaScript was introduced, twenty years ago, you would do it like this:

    for (var index = 0; index < myArray.length; index++) {
      console.log(myArray[index]);
    }
    

    Since ES5, you can use the built-in forEach method:

    myArray.forEach(function (value) {
      console.log(value);
    });
    

    This is a little shorter, but there is one minor drawback: you can’t break out of this loop using a break statement or return from the enclosing function using a return statement.

    It sure would be nice if there were just a for-loop syntax that looped over array elements.

    How about a for-in loop?

    for (var index in myArray) {    // don't actually do this
      console.log(myArray[index]);
    }
    

    This is a bad idea for several reasons:

    • The values assigned to index in this code are the strings "0", "1", "2" and so on, not actual numbers. Since you probably don’t want string arithmetic ("2" + 1 == "21"), this is inconvenient at best.
    • The loop body will execute not only for array elements, but also for any other expando properties someone may have added. For example, if your array has an enumerable property myArray.name, then this loop will execute one extra time, with index == "name". Even properties on the array’s prototype chain can be visited.
    • Most astonishing of all, in some circumstances, this code can loop over the array elements in an arbitrary order.

    In short, for-in was designed to work on plain old Objects with string keys. For Arrays, it’s not so great.

    The mighty for-of loop

    Remember last week I promised that ES6 would not break the JS code you’ve already written. Well, millions of Web sites depend on the behavior of for-in—yes, even its behavior on arrays. So there was never any question of “fixing” for-in to be more helpful when used with arrays. The only way for ES6 to improve matters was to add some kind of new loop syntax.

    And here it is:

    for (var value of myArray) {
      console.log(value);
    }
    

    Hmm. After all that build-up, it doesn’t seem all that impressive, does it? Well, we’ll see whether for-of has any neat tricks up its sleeve. For now, just note that:

    • this is the most concise, direct syntax yet for looping through array elements
    • it avoids all the pitfalls of for-in
    • unlike forEach(), it works with break, continue, and return

    The for-in loop is for looping over object properties.

    The for-of loop is for looping over data—like the values in an array.

    But that’s not all.

    Other collections support for-of too

    for-of is not just for arrays. It also works on most array-like objects, like DOM NodeLists.

    It also works on strings, treating the string as a sequence of Unicode characters:

    for (var chr of "😺😲") {
      alert(chr);
    }
    

    It also works on Map and Set objects.

    Oh, I’m sorry. You’ve never heard of Map and Set objects? Well, they are new in ES6. We’ll do a whole post about them at some point. If you’ve worked with maps and sets in other languages, there won’t be any big surprises.

    For example, a Set object is good for eliminating duplicates:

    // make a set from an array of words
    var uniqueWords = new Set(words);
    

    Once you’ve got a Set, maybe you’d like to loop over its contents. Easy:

    for (var word of uniqueWords) {
      console.log(word);
    }
    

    A Map is slightly different: the data inside it is made of key-value pairs, so you’ll want to use destructuring to unpack the key and value into two separate variables:

    for (var [key, value] of phoneBookMap) {
      console.log(key + "'s phone number is: " + value);
    }
    

    Destructuring is yet another new ES6 feature and a great topic for a future blog post. I should write these down.

    By now, you get the picture: JS already has quite a few different collection classes, and even more are on the way. for-of is designed to be the workhorse loop statement you use with all of them.

    for-of does not work with plain old Objects, but if you want to iterate over an object’s properties you can either use for-in (that’s what it’s for) or the built-in Object.keys():

    // dump an object's own enumerable properties to the console
    for (var key of Object.keys(someObject)) {
      console.log(key + ": " + someObject[key]);
    }
    

    Under the hood

    “Good artists copy, great artists steal.” —Pablo Picasso

    A running theme in ES6 is that the new features being added to the language didn’t come out of nowhere. Most have been tried and proven useful in other languages.

    The for-of loop, for example, resembles similar loop statements in C++, Java, C#, and Python. Like them, it works with several different data structures provided by the language and its standard library. But it’s also an extension point in the language.

    Like the for/foreach statements in those other languages, for-of works entirely in terms of method calls. What Arrays, Maps, Sets, and the other objects we talked about all have in common is that they have an iterator method.

    And there’s another kind of object that can have an iterator method too: any object you want.

    Just as you can add a myObject.toString() method to any object and suddenly JS knows how to convert that object to a string, you can add the myObject[Symbol.iterator]() method to any object and suddenly JS will know how to loop over that object.

    For example, suppose you’re using jQuery, and although you’re very fond of .each(), you would like jQuery objects to work with for-of as well. Here’s how to do that:

    // Since jQuery objects are array-like,
    // give them the same iterator method Arrays have
    jQuery.prototype[Symbol.iterator] =
      Array.prototype[Symbol.iterator];
    

    OK, I know what you’re thinking. That [Symbol.iterator] syntax seems weird. What is going on there? It has to do with the method’s name. The standard committee could have just called this method .iterator(), but then, your existing code might already have some objects with .iterator() methods, and that could get pretty confusing. So the standard uses a symbol, rather than a string, as the name of this method.

    Symbols are new in ES6, and we’ll tell you all about them in—you guessed it—a future blog post. For now, all you need to know is that the standard can define a brand-new symbol, like Symbol.iterator, and it’s guaranteed not to conflict with any existing code. The trade-off is that the syntax is a little weird. But it’s a small price to pay for this versatile new feature and excellent backward compatibility.

    An object that has a [Symbol.iterator]() method is called iterable. In coming weeks, we’ll see that the concept of iterable objects is used throughout the language, not only in for-of but in the Map and Set constructors, destructuring assignment, and the new spread operator.
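
    For instance, the same Set can feed both destructuring assignment and the spread operator (a quick sketch):

    var letters = new Set(["a", "b", "c"]);
    
    var [first, ...rest] = letters;   // destructuring assignment reads the Set's iterator
    console.log(first);
    // "a"
    console.log(rest);
    // ["b", "c"]
    
    console.log([...letters]);        // so does the spread operator
    // ["a", "b", "c"]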

    Iterator objects

    Now, there is a chance you will never have to implement an iterator object of your own from scratch. We’ll see why next week. But for completeness, let’s look at what an iterator object looks like. (If you skip this whole section, you’ll mainly be missing crunchy technical details.)

    A for-of loop starts by calling the [Symbol.iterator]() method on the collection. This returns a new iterator object. An iterator object can be any object with a .next() method; the for-of loop will call this method repeatedly, once each time through the loop. For example, here’s the simplest iterator object I can think of:

    var zeroesForeverIterator = {
      [Symbol.iterator]: function () {
        return this;
      },
      next: function () {
        return {done: false, value: 0};
      }
    };
    

    Every time this .next() method is called, it returns the same result, telling the for-of loop (a) we’re not done iterating yet; and (b) the next value is 0. This means that for (value of zeroesForeverIterator) {} will be an infinite loop. Of course, a typical iterator will not be quite this trivial.

    This iterator design, with its .done and .value properties, is superficially different from how iterators work in other languages. In Java, iterators have separate .hasNext() and .next() methods. In Python, they have a single .next() method that throws StopIteration when there are no more values. But all three designs are fundamentally returning the same information.

    An iterator object can also implement optional .return() and .throw(exc) methods. The for-of loop calls .return() if the loop exits prematurely, due to an exception or a break or return statement. The iterator can implement .return() if it needs to do some cleanup or free up resources it was using. Most iterator objects won’t need to implement it. .throw(exc) is even more of a special case: for-of never calls it at all. But we’ll hear more about it next week.
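
    Here is a small sketch (not from the original post) of an iterator whose .return() does some cleanup when a loop exits early:

    function iterateWithCleanup(array) {
      var index = 0;
      return {
        [Symbol.iterator]: function () { return this; },
        next: function () {
          return index < array.length
            ? {done: false, value: array[index++]}
            : {done: true, value: undefined};
        },
        return: function () {
          console.log("cleaning up");   // free resources here
          return {done: true, value: undefined};
        }
      };
    }
    
    for (var n of iterateWithCleanup([1, 2, 3])) {
      if (n === 2) break;   // the break makes for-of call .return(), which logs "cleaning up"
    }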

    Now that we have all the details, we can take a simple for-of loop and rewrite it in terms of the underlying method calls.

    First the for-of loop:

    for (VAR of ITERABLE) {
      STATEMENTS
    }
    

    Here is a rough equivalent, using the underlying methods and a few temporary variables:

    var $iterator = ITERABLE[Symbol.iterator]();
    var $result = $iterator.next();
    while (!$result.done) {
      VAR = $result.value;
      STATEMENTS
      $result = $iterator.next();
    }
    

    This code doesn’t show how .return() is handled. We could add that, but I think it would obscure what’s going on rather than illuminate it. for-of is easy to use, but there is a lot going on behind the scenes.

    When can I start using this?

    The for-of loop is supported in all current Firefox releases. It’s supported in Chrome if you go to chrome://flags and enable “Experimental JavaScript”. It also works in Microsoft’s Spartan browser, but not in shipping versions of IE. If you’d like to use this new syntax on the web, but you need to support IE and Safari, you can use a compiler like Babel or Google’s Traceur to translate your ES6 code to Web-friendly ES5.

    On the server, you don’t need a compiler—you can start using for-of in io.js (and Node, with the --harmony option) today.

    (UPDATE: This previously neglected to mention that for-of is disabled by default in Chrome. Thanks to Oleg for pointing out the mistake in the comments.)

    {done: true}

    Whew!

    Well, we’re done for today, but we’re still not done with the for-of loop.

    There is one more new kind of object in ES6 that works beautifully with for-of. I didn’t mention it because it’s the topic of next week’s post. I think this new feature is the most magical thing in ES6. If you haven’t already encountered it in languages like Python and C#, you’ll probably find it mind-boggling at first. But it’s the easiest way to write an iterator, it’s useful in refactoring, and it might just change the way we write asynchronous code, both in the browser and on the server. So join us next week as we look at ES6 generators in depth.

  6. Firefox OS, Animations & the Dark Cubic-Bezier of the Soul

    I’ve been using Firefox OS daily for a couple of years now (wow, time flies!). While performance has steadily improved with efforts like Project Silk, I’ve often noticed delays in the user interface. I assumed the delays were because the hardware was well below the “flagship” hardware I’ve become accustomed to with Android and iOS devices.

    Last year, I built Firefox OS for a Nexus 4 and started using that as my daily phone. Quickly I realized that even with better hardware, I sometimes had to wait on Firefox OS for basic interactions, even when the task wasn’t computationally intensive. I moved on to a Nexus 5 and then a Sony Z3 Compact, both with better specs than the Nexus 4, and experienced the same thing.

    Time passed. Frustration grew. Whispers of a nameless fear…

    Running the numbers

    While reading Ralph Thomas’s post about creating animations based on physical models, I wondered about the implementation of animations in Firefox OS, and how that might be involved in this problem. I performed an audit of the number of instances of different animations, grouped by their duration. I removed progress indicators and things like the boot shutdown animation. Here are the animation and transition durations in Firefox OS, grouped by duration, for transitional interactions like scaling, opening, closing and sliding:

    • 0.1s: 15
    • 0.2s: 57
    • 0.3s: 79
    • 0.4s: 40
    • 0.5s: 78
    • 0.6s: 8

    A couple of things stand out. First, we have a pretty wide distribution of animation durations. Second, the vast majority of the animations are 300ms or longer!

    In fact, in more than 80 animations we are making the user wait half a second or longer. These slow animations are dragging us down, resulting in a poorer overall experience of Firefox OS.

    How did we get here?

    The Firefox OS UX and interaction designers didn’t huddle in a room and design each interaction to be intentionally slow. The engineers who implemented these animations didn’t ever think to themselves “this feels really responsive… let’s make it slower!”

    My theory is that interactions like these don’t feel slow while you’re designing and implementing them, because you’re working with a single interaction at a time. When designing and developing an animation, I look for fluidity of motion, the aesthetics of that single action and how the visual impact enhances the task at hand, and then I iterate on duration and effects until it feels right.

    We do have guidelines for responsiveness and user-perceived performance in Firefox OS, written up by Gordon Brander, which you can see in the screenshot below. (Click the image for a larger, more readable version.) However, those guidelines don’t cover the sub-second period between the initial perception of cause and effect and the next actionable state of the user interface.

    [Screenshot: Firefox OS responsiveness and perceived-performance guidelines]

    Users have an entirely different experience than we do as developers and designers. Users make their way through our animations while hurriedly sending a text message, trying to capture that perfect moment on camera, entering their username and password, or arduously uploading a bunch of images one at a time. People are trying to get from point A to point B. They want to complete a task… well, actually not just one: Smartphone users are trying to complete 221 tasks every day, according to a study in the UK last October by Tecmark. All those animations add up! I assert that the aggregate of those 203 animations in Gaia that are 300ms and longer contributes to the frustrating feeling of slowness I was experiencing before digging into this.

    Making it feel fast

    So I tested this theory, by changing all animation durations in Gaia to 200ms, as a starting point. The result? Firefox OS feels far more responsive. Moving through tasks and navigating around the OS felt quick but not abrupt. The camera snaps to readiness. Texting feels so much more fluid and snappy. Apps pop up, instead of slowly hauling their creaky bones out of bed. The Rocketbar gets closer to living up to its name (though I still think the keyboard should animate up while the bar becomes active).

    Here’s a demo of some of our animations side by side, before and after this patch:

    There are a couple of things we can do about this in Gaia:

    1. I filed a bug to get this change landed in Gaia. The 200ms duration is a first stab at this until we can do further testing. Better to err on the snappy side instead of the sluggish side. We’ve got the thumbs-up from most of the 16 developers who had to review the changes, and are now working with the UX team to sign off before it can land. Kevin Grandon helped by adding a CSS variable that we can use across all of Gaia, which will make it easier to implement these types of changes OS-wide in the future as we learn more.
    2. I’m working with the Firefox OS UX team to define global and consistent best-practices for animations. These guidelines will not be correct 100% of the time, but can be a starting point when implementing new animations, ensuring that the defaults are based on research and experience.

    If you are a Firefox OS user, report bugs if you experience anything that feels slow. By reporting a bug, you can make change happen and help improve the user experience for everyone on Firefox OS.

    If you are a developer or designer, what are your animation best-practices? What user feedback have you received on the animations in your Web projects? Let us know in the comments below!

  7. ES6 In Depth: An Introduction

    Welcome to ES6 In Depth! In this new weekly series, we’ll be exploring ECMAScript 6, the upcoming new edition of the JavaScript language. ES6 contains many new language features that will make JS more powerful and expressive, and we’ll visit them one by one in weeks to come. But before we start in on the details, maybe it’s worth taking a minute to talk about what ES6 is and what you can expect.

    What falls under the scope of ECMAScript?

    The JavaScript programming language is standardized by ECMA (a standards body like W3C) under the name ECMAScript. Among other things, ECMAScript defines the core language itself: its syntax, its semantics, and its standard library of built-in objects and functions.

    What it doesn’t define is anything to do with HTML or CSS, or the Web APIs, such as the DOM (Document Object Model). Those are defined in separate standards. ECMAScript covers the aspects of JS that are present not only in the browser, but also in non-browser environments such as node.js.

    The new standard

    Last week, the final draft of the ECMAScript Language Specification, Edition 6, was submitted to the Ecma General Assembly for review. What does that mean?

    It means that this summer, we’ll have a new standard for the core JavaScript programming language.

    This is big news. A new JS language standard doesn’t drop every day. The last one, ES5, happened back in 2009. The ES standards committee has been working on ES6 ever since.

    ES6 is a major upgrade to the language. At the same time, your JS code will continue to work. ES6 was designed for maximum compatibility with existing code. In fact, many browsers already support various ES6 features, and implementation efforts are ongoing. This means all your JS code has already been running in browsers that implement some ES6 features! If you haven’t seen any compatibility issues by now, you probably never will.

    Counting to 6

    The previous editions of the ECMAScript standard were numbered 1, 2, 3, and 5.

    What happened to Edition 4? An ECMAScript Edition 4 was once planned—and in fact a ton of work was done on it—but it was eventually scrapped as too ambitious. (It had, for example, a sophisticated opt-in static type system with generics and type inference.)

    ES4 was contentious. When the standards committee finally stopped work on it, the committee members agreed to publish a relatively modest ES5 and then proceed to work on more substantial new features. This explicit, negotiated agreement was called “Harmony,” and it’s why the ES5 spec contains these two sentences:

    ECMAScript is a vibrant language and the evolution of the language is not complete. Significant technical enhancement will continue with future editions of this specification.

    This statement could be seen as something of a promise.

    Promises resolved

    ES5, the 2009 update to the language, introduced Object.create(), Object.defineProperty(), getters and setters, strict mode, and the JSON object. I’ve used all these features, and I like what ES5 did for the language. But it would be too much to say any of these features had a dramatic impact on the way I write JS code. The most important innovation, for me, was probably the new Array methods: .map(), .filter(), and so on.

    Well, ES6 is different. It’s the product of years of harmonious work. And it’s a treasure trove of new language and library features, the most substantial upgrade for JS ever. The new features range from welcome conveniences, like arrow functions and simple string interpolation, to brain-melting new concepts like proxies and generators.

    ES6 will change the way you write JS code.

    This series aims to show you how, by examining the new features ES6 offers to JavaScript programmers.

    We’ll start with a classic “missing feature” that I’ve been eager to see in JavaScript for the better part of a decade. So join us next week for a look at ES6 iterators and the new for-of loop.

  8. Network Activity and Battery Drain in Mobile Web Apps

    Editor’s note: This post describes the work of a group of students from Portland State University who worked with Mozilla on their senior project. Over the course of the last 6 months, they’ve worked with Mozillian Dietrich Ayala to create a JavaScript library that allows developers to optimize the usage of network operations, thus saving battery life. The group consists of 8 students with varied technology backgrounds, each assigned to take on a different task for the project. Congratulations to the team!

    Overview and Goals of the Project

    The goal of our senior Capstone project was to develop a JavaScript library to make it easier for developers to write applications that use less power by making fewer network requests. In emerging markets, where battery capacities in mobile devices may be smaller and signal strength may be poor, applications that make many requests create serious challenges for the usability of smartphones. Sometimes, applications designed for users in regions with robust network infrastructure can create significant negative effects for users with less reliable access. Reducing battery drain can provide better battery life and better user experiences for everyone. To improve this situation, we’ve created APIs that help developers write mobile applications in a way that minimizes network usage.

    In order to solve this problem effectively, we provide developers with mechanisms to delay non-critical requests, batch requests together, and detect when the network conditions are best for a given activity. This involved doing research to determine the efficacy of various solutions. Regardless of the effectiveness of our APIs, this research provides insight into saving battery usage. In addition to our research, we also focused on the developer ergonomics, hoping to make this easy for developers to use.

    Installation & Usage

    Installation of the library is simple: clone the “dist” folder and choose the library variant that best suits your needs. LocalForage is used in the library for storing the statistical details for each XMLHttpRequest (XHR). This way the developer can perform analysis to develop a set of dynamic heuristics, such as determining when the user is most likely to make successful XHRs. However, if this is something you do not think will be commonly used, you can opt for a LocalForage-free version to get a smaller library memory footprint.

    We encourage you to check out our General Usage section and API Usage section to get a comprehensive idea of usage and context. A brief overview of how to use the core functions of the APIs is provided.

    Critical Requests

    When you need to make a critical XHR for something that the user needs right away, you make it using the following syntax:

    AL.ajax(url [, data] [, success] [, method])

    Here url indicates the endpoint, data is an optional JSON payload to send (for example, with a POST XHR), success is a callback invoked after the request has completed successfully, and the optional method parameter specifies the HTTP method to use (e.g. PATCH or POST). If method is not specified and the data field is null, GET will be used, but if data is provided, POST will be the default.

    A sample critical request would look like this:

    AL.ajax('http://rocky-lake-3451.herokuapp.com/', {cats: 20}, function(response, status, xhr) {
      console.log('Response: ', response);
      console.log('Status: ', status);
      console.log('Xhr: ', xhr);
    });

    This code when executed would result in the following output:

    Response: {"request_method":"POST","request_parameters":[]}
    Status: 200
    Xhr: XMLHttpRequest { readyState=4, timeout=0, withCredentials=false, ...}

    Non-Critical Requests

    Non-critical requests are used for non-urgent needs and work by placing the non-critical XHR(s) in a queue which is fired upon certain conditions. The two default conditions are ‘battery is more than 10% and a critical request was just fired’ or ‘battery is more than 10% and the device is plugged in to a power source’. The syntax for making a non-critical request is the same as a critical one, with the exception of the function name and an additional parameter, timeout:

    AL.addNonCriticalRequest(url [, data] [, success] [, method] [, timeout])

    Here’s how timeout works: after the given number of milliseconds, the newly added XHR (and every other XHR in the queue) will fire, unless the queue has already been flushed by some other mechanism, such as a critical request firing.
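    Putting that together, a non-critical request could look something like the following sketch (the endpoint is the same demo server used above, and the five-minute timeout is just an illustrative value):

    // Queue a non-urgent POST. It is sent when the queue is flushed, for example
    // right after a critical request fires, or at the latest after five minutes.
    AL.addNonCriticalRequest('http://rocky-lake-3451.herokuapp.com/', {cats: 20},
      function(response, status, xhr) {
        console.log('Non-critical request completed with status: ', status);
      }, 'POST', 5 * 60 * 1000);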

    Recording & Analysis

    XHRs are stored within LocalForage. There are a variety of functions to either retrieve the data or trim it. General retrieval syntax is in the following format, where callback receives an array of XHR-related objects, each containing data relevant to that XHR such as start time, end time, and size of the request.

    AL.getHistory(callback)

    You can use this data in all manner of interesting ways, but at a basic level you’ll want to time the XHRs. Calculate the difference between start time and end time of the request for the five most recent requests by doing the following:

    
    function getRecords() {
      var elem = document.getElementById('recordsList');
      AL.getHistory(function (records) {
        if (records) {
          var counter = 0;
          var string = [];
          // Compute the duration (end - begin) of the five most recent requests.
          for (var i = Math.max(records.length - 5, 0); i < records.length; ++i) {
            string[counter] = records[i].end - records[i].begin;
            ++counter;
          }
          elem.innerHTML = string.toString();
        } else {
          console.log("Records is null");
        }
      });
    }

    Research Findings

    In order to gather data about the effectiveness of our APIs at reducing battery usage, we hooked up our reference device (a Flame) to a battery harness and used our demo app to process 30 requests for various types of media (text, images, and video). All three tests were run on a WiFi network (our university’s WiFi network, specifically). We attempted to run all three tests on a 3G (T-Mobile) network, but due to poor connectivity, we were only able to run the text test over a cellular network.

    When running the tests on WiFi, we noticed that the WiFi chip was extremely efficient. It would turn on almost instantly and once all network requests were done, it would turn off just as quickly. Because of this, we realized that the library is not very useful when on a WiFi network; it is hard to be more efficient than instant on/off.

    When testing on a 3G network however, it became very apparent that this library could be useful. On the graph of power consumption, there is a very clear (and relatively slow) period where the chip warms up, increasing its power usage gradually until it is fully powered. Once all network activity is complete, there is an even longer period of cool down, as the power draw of the chip gradually declines. It is clear that stacking the requests together is useful on this type of network to avoid dead periods, when the phone is powering down the chips due to lack of activity just as another request comes in, causing chips to be powered back up again at the same slow rate.

    Text over WiFi

    Screen turned on around the 2 second mark; a burst of 30 XMLHttpRequests sent from roughly the 6 second mark to the end of the graph (~13.5 second mark)

    Text over 3G

    Screen turned on around the 2 second mark; a burst of 30 XMLHttpRequests sent from roughly the 2 second mark to roughly the 18 second mark

    In conclusion, we believe our library will prove useful when the cellphone is using a 3G network and will help conserve battery usage for requests that are not immediately necessary. Furthermore, the optional data analytics backbone can help advanced developers generate unique heuristics per user to further minimize battery drain.

    Portland State University Firefox OS Capstone Team

    Portland State University Firefox OS Capstone Team: Back row: Tim Huynh, Casey English, Nathan Kerr, Scott Anecito. Front row: Brianna Buckner, Ryan Niebur, Sean Mahan, Bin Gao (left to right)

  9. Drag Elements, Console History, and more – Firefox Developer Edition 39

    Quite a few big new features, improvements, and bug fixes made their way into Firefox Developer Edition 39. Update your Firefox Developer Edition or Nightly builds to try them out!

    Inspector

    The Inspector now allows you to move elements around via drag and drop. Click and hold on an element and then drag it to where you want it to go. This feature was added by contributor Mahdi Dibaiee.

    Back in Firefox 33, a tooltip was added to the rule view to allow editing curves for cubic bezier CSS animations. In Developer Edition 39, we’ve greatly enhanced the tooltip’s UX by adding various standard curves you can try right away, as well as cleaning up the overall appearance. This enhancement was added by new contributor John Giannakos.


    The CSS animations panel we debuted in Developer Edition 37 now includes a time machine. You can rewind, fast forward, and set the current time of your animations.

    Console

    Previously, when the DevTools console closed, your past Console history was lost. Now, Console history is persisted across sessions. The recent commands you’ve entered will remain accessible in the next toolbox you open, whether it’s in another tab or after restarting Firefox. Additionally, we’ve added a clearHistory console command to reset the stored list of commands.

    The shorthand $_ has been added as an alias for the last result evaluated in the Console. If you evaluated an expression without storing the result in a variable, for example, you can use $_ as a quick way to grab that last result.
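    For instance, in an illustrative console session:

    [1, 2, 3].map(n => n * 2);  // evaluates to Array [ 2, 4, 6 ]
    $_.length;                  // 3, because $_ refers to that last result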

    We now format pseudo-array-like objects as if they were arrays in the Console output. This makes a pseudo-array-like object easier to reason about and inspect, just like a real array. This feature was added by contributor Johan K. Jensen.


    WebIDE and Mobile

    WiFi debugging for Firefox OS has landed. WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable. We’ll discuss this feature in more detail in a future post.

    WebIDE gained support for Cordova-based projects. If you’re working on a mobile app project using Cordova, WebIDE now knows how to build the project for devices it supports without any extra configuration.

    Other Changes

    • Attribute changes only flash the changed attribute in the Markup View, instead of the whole element.
    • Canvas Debugger now supports setTimeout for animations.
    • Inline box model highlighting.
    • Browser Toolbox can now be opened from a shortcut: Cmd-Opt-Shift-I / Ctrl-Alt-Shift-I.
    • Network Monitor now shows the remote server’s IP address and port.
    • When an element is highlighted in the Inspector, you can now use the arrow keys to highlight the current element’s parent (left key), or its first child, or its next sibling if it has no children, or the next node in the tree if it has no siblings (right key). This is especially useful when an element and its parent occupy the same space on the screen, making it difficult to select one of them using only the mouse.

    For an even more complete list, check out all 200 bugs resolved during the Firefox 39 development cycle.

    Thanks to all the new developers who made their first DevTools contribution this release:

    • Anush
    • Brandon Max
    • Geoffroy Planquart
    • Johan K. Jensen
    • John Giannakos
    • Mahdi Dibaiee
    • Nounours Heureux
    • Wickie Lee
    • Willian Gustavo Veiga

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice, or get in touch with the team at @FirefoxDevTools on Twitter.

  10. WebRTC in Firefox 38: Multistream and renegotiation

    Building on the JSEP (JavaScript Session Establishment Protocol) engine rewrite introduced in Firefox 37, Firefox 38 now has support for multistream (multiple tracks of the same type in a single PeerConnection) and renegotiation (multiple offer/answer exchanges in a single PeerConnection). As usual with such things, there are caveats and limitations, but the functionality seems to be pretty solid.

    Multistream and renegotiation features

    Why are these things useful, you ask? For example, now you can handle a group video call with a single PeerConnection (multistream), and do things like add/remove these streams on the fly (renegotiation). You can also add screensharing to an existing video call without needing a separate PeerConnection. Here are some advantages of this new functionality:

    • Simplifies your job as an app-writer
    • Requires fewer rounds of ICE (Interactive Connectivity Establishment – the protocol for establishing connection between the browsers), and reduces call establishment time
    • Requires fewer ports, both on the browser and on TURN relays (if using bundle, which is enabled by default)

    Now, there are very few WebRTC services that use multistream (the way it is currently specified, see below) or renegotiation. This means that real-world testing of these features is extremely limited, and there will probably be bugs. If you are working with these features, and are having difficulty, do not hesitate to ask questions in IRC at irc.mozilla.org on #media, since this helps us find these bugs.

    Also, it is important to note that Google Chrome’s current implementation of multistream is not going to be interoperable; this is because Chrome has not yet implemented the specification for multistream (called “unified plan” – check on their progress in the Google Chromium Bug tracker). Instead they are still using an older Google proposal (called “plan B”). These two approaches are mutually incompatible.

    On a related note, if you maintain or use a WebRTC gateway that supports multistream, odds are good that it uses “plan B” as well, and will need to be updated. This is a good time to start implementing unified plan support. (Check the Appendix below for examples.)

    Building a simple WebRTC video call page

    So let’s start with a concrete example. We are going to build a simple WebRTC video call page that allows the user to add screen sharing during the call. As we are going to dive deep quickly you might want to check out our earlier Hacks article, WebRTC and the Early API, to learn the basics.

    First we need two PeerConnections:

    pc1 = new mozRTCPeerConnection();
    pc2 = new mozRTCPeerConnection();

    Then we request access to camera and microphone and attach the resulting stream to the first PeerConnection:

    let videoConstraints = {audio: true, video: true};
    navigator.mediaDevices.getUserMedia(videoConstraints)
      .then(stream1 => {
        pc1.addStream(stream1);
      });

    To keep things simple we want to be able to run the call just on one machine. But most computers today don’t have two cameras and/or microphones available. And just having a one-way call is not very exciting. So let’s use a built-in testing feature of Firefox for the other direction:

    let fakeVideoConstraints = {video: true, fake: true };
    navigator.mediaDevices.getUserMedia(fakeVideoConstraints)
      .then(stream2 => {
        pc2.addStream(stream2);
      });

    Note: You’ll want to call this part from within the success callback of the first getUserMedia() call, so that you don’t have to use boolean flags to track whether both getUserMedia() calls have succeeded before you proceed to the next step (see the sketch below).
    Firefox also has a built-in fake audio source (which you can turn on like this {audio: true, fake: true}). But listening to an 8kHz tone is not as pleasant as looking at the changing color of the fake video source.
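
    One way to do the chaining mentioned in the note above is to request the fake stream from inside the first success handler. This is just a sketch that reuses the constraints and the failed error handler from this example:

    navigator.mediaDevices.getUserMedia(videoConstraints)
      .then(stream1 => {
        pc1.addStream(stream1);
        // Request the fake stream only after the real one is attached.
        return navigator.mediaDevices.getUserMedia(fakeVideoConstraints);
      })
      .then(stream2 => {
        pc2.addStream(stream2);
        // Both streams are attached; we are ready for the offer/answer steps below.
      })
      .catch(failed);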

    Now we have all the pieces ready to create the initial offer:

    pc1.createOffer().then(step1, failed);

    Now the typical WebRTC offer-answer flow follows:

    function step1(offer) {
      pc1_offer = offer;
      pc1.setLocalDescription(offer).then(step2, failed);
    }
     
    function step2() {
      pc2.setRemoteDescription(pc1_offer).then(step3, failed);
    }

    For this example we take a shortcut: Instead of passing the signaling message through an actual signaling relay, we simply pass the information into both PeerConnections as they are both locally available on the same page. Refer to our previous hacks article WebRTC and the Early API for a solution which actually uses FireBase as relay instead to connect two browsers.

    function step3() {
      pc2.createAnswer().then(step4, failed);
    }
     
    function step4(answer) {
      pc2_answer = answer;
      pc2.setLocalDescription(answer).then(step5, failed);
    }
     
    function step5() {
      pc1.setRemoteDescription(pc2_answer).then(step6, failed);
    }
     
    function step6() {
      log("Signaling is done");
    }

    The one remaining piece is to connect the remote videos once we receive them.

    pc1.onaddstream = function(obj) {
      pc1video.mozSrcObject = obj.stream;
    }

    Add a similar clone of this for our PeerConnection 2. Keep in mind that these callback functions are super trivial — they assume we only ever receive a single stream and only have a single video player to connect it. The example will get a little more complicated once we add the screen sharing.
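
    For completeness, that clone could look like this (pc2video being the second video element, which reappears further down in this post):

    pc2.onaddstream = function(obj) {
      pc2video.mozSrcObject = obj.stream;
    }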

    With this we should be able to establish a simple call with audio and video from the real devices getting sent from PeerConnection 1 to PeerConnection 2 and in the opposite direction a fake video stream that shows slowly changing colors.

    Implementing screen sharing

    Now let’s get to the real meat and add screen sharing to the already established call.

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};

      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Two things are required to get screen sharing working:

    1. Only pages loaded over HTTPS are allowed to request screen sharing.
    2. You need to append your domain to the user preference media.getusermedia.screensharing.allowed_domains in about:config to whitelist it for screen sharing.

    For the screenConstraints you can also use ‘window‘ or ‘application‘ instead of ‘screen‘ if you want to share less than the whole screen.
    We are using getTracks() here to fetch and store the video track out of the stream we get from the getUserMedia call, because we need to remember the track later when we want to be able to remove screen sharing from the call. Alternatively, in this case you could use the addStream() function used before to add new streams to a PeerConnection. But the addTrack() function gives you more flexibility if you want to handle video and audio tracks differently, for instance. In that case, you can fetch these tracks separately via the getAudioTracks() and getVideoTracks() functions instead of using the getTracks() function.
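
    As a rough sketch of that alternative (illustrative only, not part of the original demo), the tracks could be handled separately like this:

    navigator.mediaDevices.getUserMedia(screenConstraints)
      .then(stream => {
        screenStream = stream;
        stream.getVideoTracks().forEach(track => {
          screenSenders.push(pc1.addTrack(track, stream));
        });
        stream.getAudioTracks().forEach(track => {
          // If the captured stream ever carried audio, it could be processed
          // or routed differently here before being added.
          screenSenders.push(pc1.addTrack(track, stream));
        });
      });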

    Once you add a stream or track to an established PeerConnection, this needs to be signaled to the other side of the connection. To kick that off, the onnegotiationneeded callback will be invoked. So your callback should be set up before adding a track or stream. The beauty here: from this point on we can simply re-use our signaling call chain. So the resulting screen share function looks like this:

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};

      pc1.onnegotiationneeded = function (event) {
        pc1.createOffer().then(step1, failed);
      };

      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Now the receiving side also needs to learn that the stream from the screen sharing was successfully established. We need to slightly modify our initial onaddstream function for that:

    pc2.onaddstream = function(obj) {
      var stream = obj.stream;
      if (stream.getAudioTracks().length == 0) {
        pc3video.mozSrcObject = obj.stream;
      } else {
        pc2video.mozSrcObject = obj.stream;
      }
    }

    The important thing to note here: With multistream and renegotiation onaddstream can and will be called multiple times. In our little example onaddstream is called the first time we establish the connection and PeerConnection 2 starts receiving the audio and video from the real devices. And then it is called a second time when the video stream from the screen sharing is added.
    We are simply assuming here that the screen share will have no audio track in it to distinguish the two cases. There are probably cleaner ways to do this.

    Please refer to the Appendix for a bit more detail on what happens here under the hood.

    As the user probably does not want to share his/her screen until the end of the call let’s add a function to remove it as well.

    function stopScreenShare() {
      screenStream.stop();
      screenSenders.forEach(sender => {
        pc1.removeTrack(sender);
      });
    }

    We are holding on to a reference to the original stream to be able to call stop() on it to release the getUserMedia permission we got from the user. The addTrack() call in our screenShare() function returned us an RTCRtpSender object, which we are storing so we can hand it to the removeTrack() function.

    All of the code combined with some extra syntactic sugar can be found on our MultiStream test page.

    If you are going to build something which allows both ends of the call to add screen share, a more realistic scenario than our demo, you will need to handle special cases. For example, multiple users might accidentally try to add another stream (e.g. the screen share) exactly at the same time and you may end up with a new corner-case for renegotiation called “glare.” This is what happens when both ends of the WebRTC session decide to send new offers at the same time. We do not yet support the “rollback” session description type that can be used to recover from glare (see Jsep draft and the Firefox bug). Probably the best interim solution to prevent glare is to announce via your signaling channel that the user did something which is going to kick off another round of renegotiation. Then, wait for the okay from the far end before you call createOffer() locally.

    Appendix

    This is an example renegotiation offer SDP from Firefox 39 when adding the screen share:

    v=0
    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4 0.0.0.0
    s=-
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1 sdparta_2
    a=ice-options:trickle
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_0
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=rtcp-mux
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=setup:actpass
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_1
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_2
    a=msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that each track gets its own m-section, denoted by the msid attribute.

    As you can see from the BUNDLE attribute, Firefox offers to put the new video stream, with its different msid value, into the same bundled transport. That means if the answerer agrees we can start sending the video stream over the already established transport. We don’t have to go through another ICE and DTLS round. And in case of TURN servers we save another relay resource.

    Hypothetically, this is what the previous offer would look like if it used plan B (as Chrome does):

    v=0
    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4 0.0.0.0
    s=-
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1
    a=ice-options:trickle
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_0
    a=rtcp-mux
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=setup:actpass
    a=ssrc:323910839 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_1
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2917595157 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    a=ssrc:2325911938 msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that there is only one video m-section, with two different msids, which are part of the ssrc attributes rather than in their own a-lines (these are called “source-level” attributes).