Featured Articles

  1. Trainspotting: Firefox 39

    Trainspotting is a series of articles highlighting features in the latest version of Firefox. A new version of Firefox is shipped every six weeks – we at Mozilla call this pattern “release trains.”

    A new version of Firefox is here, and with it come some great improvements and additions to the Web platform and developer tools. This post will call out a few highlights.

    For a full list of changes and additions, take a look at the Firefox 39 release notes.

    DevTools Love

    The Firefox Developer Tools are constantly getting better. We’re listening to developers on UserVoice, and using their feedback to make tools that are more powerful and easier to use. One requested feature was the ability to re-order elements in the Inspector.

    Editing and tweaking CSS Animations is easier than ever – Firefox 39 lets developers pause, restart, slow down, and preview new timings without having to switch applications.

    Menu of animation easing presets in the Inspector

    CSS Scroll Snap Points

    CSS Scroll Snap Points in action

    CSS Scroll Snap Points let web developers instruct the browser to smoothly snap element scrolling to specific points along an axis, creating smoother interfaces that are easier to interact with, in fewer lines of code.

    Improvements to Firefox on Mac OS X

    Firefox gets some Mac-specific improvements and updates in version 39:

    • Project Silk enabled – Improves scrolling and animation performance by more closely timing painting with hardware vsync. Read more about Project Silk.
    • Unicode 8.0 skin tone emoji – Fixed a bug in the rendering of skin tone modifiers for emoji.
    • Dashed line performance – Rendering of dotted and dashed lines is vastly improved. Check out the fixed bug for more information.

    Service Workers Progress

    Firefox’s implementation of the Service Workers API continues – fetch is enabled for workers and is now generally available to web content, and the Cache and CacheStorage APIs are now available behind a flag.
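
    As a quick illustration, here’s a minimal sketch of using fetch from page script (the URL below is a placeholder, not a real endpoint):

    // Fetch a JSON resource and log it; the same code works in a worker.
    fetch('https://example.com/data.json')
      .then(function(response) {
        if (!response.ok) {
          throw new Error('HTTP error: ' + response.status);
        }
        return response.json();
      })
      .then(function(data) {
        console.log('Received:', data);
      })
      .catch(function(err) {
        console.error('Fetch failed:', err);
      });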

    There are many more changes and improvements in Firefox 39 – check out the Developer Release Notes for developer-oriented changes, or the full list of bugs fixed in this release. Enjoy!

  2. Performance Testing Firefox OS With Raptor

    When we talk about performance for the Web, a number of familiar questions may come to mind:

    • Why does this page take so long to load?
    • How can I optimize my JavaScript to be faster?
    • If I make some changes to this code, will that make this app slower?

    I’ve been working on making these types of questions easier to answer for Gaia, the UI layer for Firefox OS, a completely web-centric mobile device OS. Writing performant web pages for the desktop has its own idiosyncrasies, and writing native applications using web technologies takes the challenge up an order of magnitude. I want to introduce the challenges I’ve faced in making performance an easier topic to tackle in Firefox OS, as well as document my solutions and expose holes in the Web’s APIs that need to be filled.

    From now on, I’ll refer to web pages, documents, and the like as applications, and while web “documents” typically don’t need the performance attention I’m going to give here, the same techniques could still apply.

    Fixing the lifecycle of applications

    A common question I get asked in regard to Firefox OS applications:

    How long did the app take to load?

    Tough question, as we can’t be sure we are speaking the same language. Based on UX and my own research at Mozilla, I’ve tried to adopt this definition for determining the time it takes an application to load:

    The amount of time it takes to load an application is measured from the moment a user initiates a request for the application to the moment the application appears ready for user interaction.

    On mobile devices, this is generally from the time the user taps an icon to launch an app until the app appears visually loaded, i.e., when it looks like a user can start interacting with the application. Some of this time is delegated to the OS to get the application to launch, which is outside the control of the application in question, but the bulk of the loading time should be within the app.

    So, window load, right?

    With SPAs (single-page applications), Ajax, script loaders, deferred execution, and friends, window load doesn’t hold much meaning anymore. If we could merely measure the time it takes to hit load, our work would be easy. Unfortunately, there is no way to infer the moment an application is visually loaded in a predictable way for everyone. Instead we rely on the apps to imply these moments for us.

    For Firefox OS, I helped develop a series of conventional moments that are relevant to almost every application for implying its loading lifecycle (also documented as a performance guideline on MDN):

    navigation loaded (navigationLoaded)

    The application designates that its core chrome or navigation interface exists in the DOM and has been marked as ready to be displayed, e.g. when the element is not display: none or any other functionality that would affect the visibility of the interface element.

    navigation interactive (navigationInteractive)

    The application designates that the core chrome or navigation interface has its events bound and is ready for user interaction.

    visually loaded (visuallyLoaded)

    The application designates that it is visually loaded, i.e., the “above-the-fold” content exists in the DOM and has been marked as ready to be displayed, again not display: none or other hiding functionality.

    content interactive (contentInteractive)

    The application designates that it has bound the events for the minimum set of functionality to allow the user to interact with “above-the-fold” content made available at visuallyLoaded.

    fully loaded (fullyLoaded)

    The application has been completely loaded, i.e., any relevant “below-the-fold” content and functionality have been injected into the DOM, and marked visible. The app is ready for user interaction. Any required startup background processing is complete and should exist in a stable state barring further user interaction.

    The important moment is visually loaded. This correlates directly with what the user perceives as “being ready.” As an added bonus, using the visuallyLoaded metric pairs nicely with camera-based performance verifications.

    Denoting moments

    With a clearly-defined application launch lifecycle, we can denote these moments with the User Timing API, available in Firefox OS starting with v2.2:

    window.performance.mark( string markName )
    

    Specifically during a startup:

    performance.mark('navigationLoaded');
    performance.mark('navigationInteractive');
    ...
    performance.mark('visuallyLoaded');
    ...
    performance.mark('contentInteractive');
    performance.mark('fullyLoaded');
    

    You can even use the measure() method to create a measurement between an earlier mark and the moment measure() is called, or between two other marks:

    // Denote point of user interaction
    performance.mark('tapOnButton');
    
    loadUI();
    
    // Capture the time from now (sectionLoaded) to tapOnButton
    performance.measure('sectionLoaded', 'tapOnButton');
    

    Fetching these performance metrics is pretty straightforward with getEntries, getEntriesByName, or getEntriesByType, which fetch a collection of the entries. The purpose of this article isn’t to cover the usage of User Timing though, so I’ll move on.
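
    Still, for reference, here’s a minimal sketch of those getters in action:

    // All marks recorded so far
    var marks = performance.getEntriesByType('mark');

    // A specific measure, by name; duration is the time between its endpoints
    var sectionLoaded = performance.getEntriesByName('sectionLoaded')[0];
    console.log(sectionLoaded.duration);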

    Armed with the moment an application is visually loaded, we know how long it took the application to load because we can just compare it to—oh, wait, no. We don’t know the moment of user intent to launch.

    While desktop sites may be able to easily procure the moment at which a request was initiated, doing this on Firefox OS isn’t as simple. In order to launch an application, a user will typically tap an icon on the Homescreen. The Homescreen lives in a process separate from the app being launched, and we can’t communicate performance marks between them.

    Solving problems with Raptor

    Without the APIs or interaction mechanisms available in the platform to overcome this and other difficulties, we’ve built tools to help. This is how the Raptor performance testing tool originated. With it, we can gather metrics from Gaia applications and answer the performance questions we have.

    Raptor was built with a few goals in mind:

    • Performance test Firefox OS without affecting performance. We shouldn’t need polyfills, test code, or hackery to get realistic performance metrics.
    • Utilize web APIs as much as possible, filling in gaps through other means as necessary.
    • Stay flexible enough to cater to the many different architectural styles of applications.
    • Be extensible for performance testing scenarios outside the norm.

    Problem: Determining moment of user intent to launch

    Given two independent applications — Homescreen and any other installed application — how can we create a performance marker in one and compare it in another? Even if we could send our performance mark from one app to another, they are incomparable. According to High-Resolution Time, the values produced would be monotonically increasing numbers from the moment of the page’s origin, which is different in each page context. These values represent the amount of time passed from one moment to another, and not to an absolute moment.

    The first breakdown in existing performance APIs is that there’s no way to associate a performance mark in one app with any other app. Raptor takes a simplistic approach: log parsing.

    Yes, you read that correctly. Every time Gecko receives a performance mark, it logs a message (e.g., to adb logcat), and Raptor streams and parses the log looking for these markers. A typical log entry looks something like this (we will decipher it later):

    I/PerformanceTiming( 6118): Performance Entry: clock.gaiamobile.org|mark|visuallyLoaded|1074.739956|0.000000|1434771805380
    

    The important thing to notice in this log entry is its origin: clock.gaiamobile.org, or the Clock app; here the Clock app created its visually loaded marker. In the case of the Homescreen, we want to create a marker that is intended for a different context altogether. This is going to need some additional metadata to go along with the marker, but unfortunately the User Timing API does not yet have that ability. In Gaia, we have adopted an @ convention to override the context of a marker. Let’s use it to mark the moment of app launch as determined by the user’s first tap on the icon:

    performance.mark('appLaunch@' + appOrigin)
    

    Launching the Clock from the Homescreen and dispatching this marker, we get the following log entry:

    I/PerformanceTiming( 5582): Performance Entry: verticalhome.gaiamobile.org|mark|appLaunch@clock.gaiamobile.org|80081.169720|0.000000|1434771804212
    

    With Raptor we change the context of the marker if we see this @ convention.

    Problem: Incomparable numbers

    The second breakdown in existing performance APIs deals with the incomparability of performance marks across processes. Using performance.mark() in two separate apps will not produce meaningful numbers that can be compared to determine a length of time, because their values do not share a common absolute time reference point. Fortunately there is an absolute time reference that all JS can access: the Unix epoch.

    Date.now() at any given moment returns the number of milliseconds that have elapsed since January 1st, 1970. Raptor had to make an important trade-off: abandon the precision of high-resolution time for the comparability of the Unix epoch. Looking at the previous log entry, let’s break down its output. Notice the correlation of certain pieces to their User Timing counterparts:

    • Log level and tag: I/PerformanceTiming
    • Process ID: 5582
    • Base context: verticalhome.gaiamobile.org
    • Entry type: mark, but could be measure
    • Entry name: appLaunch@clock.gaiamobile.org, the @ convention overriding the mark’s context
    • Start time: 80081.169720
    • Duration: 0.000000; this is a mark, not a measure
    • Epoch: 1434771804212

    For every performance mark and measure, Gecko also captures the epoch of the mark, and we can use this to compare times from across processes.
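
    To make that concrete, here is a minimal sketch of combining two such entries; the parsing helper and field layout are assumptions based on the format shown above, not Raptor’s actual implementation:

    // Parse the pipe-delimited payload of a PerformanceTiming log line.
    // Fields, per the entries above: context|entryType|name|startTime|duration|epoch
    function parseEntry(payload) {
      var fields = payload.split('|');
      return {
        context: fields[0],
        entryType: fields[1],
        name: fields[2],
        startTime: parseFloat(fields[3]),
        duration: parseFloat(fields[4]),
        epoch: parseInt(fields[5], 10)
      };
    }

    var launch = parseEntry('verticalhome.gaiamobile.org|mark|appLaunch@clock.gaiamobile.org|80081.169720|0.000000|1434771804212');
    var loaded = parseEntry('clock.gaiamobile.org|mark|visuallyLoaded|1074.739956|0.000000|1434771805380');

    // Epochs are comparable across processes: ~1168ms from tap to visually loaded
    console.log(loaded.epoch - launch.epoch);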

    Pros and Cons

    Everything is a game of tradeoffs, and performance testing with Raptor is no exception:

    • We trade high-resolution times for millisecond resolution in order to compare numbers across processes.
    • We trade JavaScript APIs for log parsing so we can access data without injecting custom logic into every application, which would affect app performance.
    • We currently trade a high-level interaction API, Marionette, for low-level interactions using Orangutan behind the scenes. While this provides us with transparent events for the platform, it also makes writing rich tests difficult. There are plans to improve this in the future by adding Marionette integration.

    Why log parsing

    You may be a person who believes log parsing is evil, and to a certain extent I would agree with you. While I do wish every problem could be solved with a performance API, that API unfortunately doesn’t exist yet. This is yet another reason why projects like Firefox OS are important for pushing the Web forward: we find use cases which are not yet fully implemented for the Web, poke holes to discover what’s missing, and ultimately improve APIs for everyone by pushing to fill these gaps with standards. Log parsing is Raptor’s stop-gap until the Web catches up.

    Raptor workflow

    Raptor is a Node.js module built into the Gaia project that enables performance testing against a device or emulator. Once you have the project dependencies installed, running performance tests from the Gaia directory is straightforward:

    1. Install the Raptor profile on the device; this configures various settings to assist with performance testing. Note: installing this profile will reset Gaia, so keep that in mind if you have particular settings stored.
      make raptor
    2. Choose a test to run. Currently, tests are stored in tests/raptor in the Gaia tree, so some manual discovery is needed. There are plans to improve the command-line API soon.
    3. Run the test. For example, you can performance test the cold launch of the Clock app using the following command, specifying the number of runs to launch it:
      APP=clock RUNS=5 node tests/raptor/launch_test
    4. Observe the console output. At the end of the test, you will be given a table of test results with some statistics about the performance runs completed. Example:
    [Cold Launch: Clock Results] Results for clock.gaiamobile.org
    
    Metric                            Mean     Median   Min      Max      StdDev  p95
    --------------------------------  -------  -------  -------  -------  ------  -------
    coldlaunch.navigationLoaded       214.100  212.000  176.000  269.000  19.693  247.000
    coldlaunch.navigationInteractive  245.433  242.000  216.000  310.000  19.944  274.000
    coldlaunch.visuallyLoaded         798.433  810.500  674.000  967.000  71.869  922.000
    coldlaunch.contentInteractive     798.733  810.500  675.000  967.000  71.730  922.000
    coldlaunch.fullyLoaded            802.133  813.500  682.000  969.000  72.036  928.000
    coldlaunch.rss                    10.850   10.800   10.600   11.300   0.180   11.200
    coldlaunch.uss                    0.000    0.000    0.000    0.000    0.000   n/a
    coldlaunch.pss                    6.190    6.200    5.900    6.400    0.114   6.300
    

    Visualizing Performance

    Access to raw performance data is helpful for a quick look at how long something takes, or to determine whether a change you made causes a number to increase, but it’s not very helpful for monitoring changes over time. Raptor has two methods for visualizing performance data over time.

    Official metrics

    At raptor.mozilla.org, we have dashboards for persisting the values of performance metrics over time. In our automation infrastructure, we execute performance tests against devices for every new build generated by mozilla-central or b2g-inbound. (Note: the source of builds could change in the future.) Right now this is limited to Flame devices configured with 319 MB of memory, but there are plans to expand to different memory configurations and additional device types in the very near future. When automation receives a new build, we run our battery of performance tests against the devices, capturing numbers such as application launch time, memory at fullyLoaded, reboot duration, and power current. These numbers are stored and visualized many times per day, varying based on the commits for the day.

    Looking at these graphs, you can drill down into specific apps, focus or expand your time query, and do advanced query manipulation to gain insight into performance. Watching trends over time, you can even pick out regressions that have sneaked into Firefox OS.

    Local visualization

    The very same visualization tool and backend used by raptor.mozilla.org is also available as a Docker image. After running the local Raptor tests, data will report to your own visualization dashboard based on those local metrics. There are some additional prerequisites for local visualization, so be sure to read the Raptor docs on MDN to get started.

    Performance regressions

    Building pretty graphs that display metrics is all well and fine, but finding trends in data or signal within noise can be difficult. Graphs help us understand data and make it accessible for others to easily communicate around the topic, but using graphs for finding regressions in performance is reactive; we should be proactive about keeping things fast.

    Regression hunting on CI

    Rob Wood has been doing incredible work in our pre-commit continuous integration efforts surrounding the detection of performance regressions in prospective commits. With every pull request to the Gaia repository, our automation runs the Raptor performance tests against the target branch with and without the patch applied. After a certain number of iterations for statistical accuracy, we have the ability to reject patches from landing in Gaia if a regression is too severe. For scalability purposes we use emulators to run these tests, so there are inherent drawbacks such as greater variability in the metrics reported. This variability limits the precision with which we can detect regressions.

    Regression hunting in automation

    Luckily we have post-commit automation in place to run performance tests against real devices; this is where the dashboards receive their data. Based on the excellent Python tool from Will Lachance, we query our historical data daily, attempting to discover any smaller regressions that could have crept into Firefox OS in the previous seven days. Any performance anomalies found are promptly reported to Bugzilla, and relevant bug component watchers are notified.

    Recap and next steps

    Raptor, combined with User Timing, has given us the know-how to ask questions about the performance of Gaia and receive accurate answers. In the future, we plan on improving the API of the tool and adding higher-level interactions. Raptor should also be able to work more seamlessly with third-party applications, something that is not easily done right now.

    Raptor has been an exciting tool to build, while at the same time helping us drive the Web forward in the realm of performance. We plan on using it to keep Firefox OS fast, and to stay proactive about protecting Gaia performance.

  3. ES6 In Depth: Collections

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Earlier this week, the ES6 specification, officially titled ECMA-262, 6th Edition, ECMAScript 2015 Language Specification, cleared the final hurdle and was approved as an Ecma standard. Congratulations to TC39 and everyone who contributed. ES6 is in the books!

    Even better news: it will not be six more years before the next update. The standard committee now aims to produce a new edition roughly every 12 months. Proposals for the 7th Edition are already in development.

    It is appropriate, then, to celebrate this occasion by talking about something I’ve been eager to see in JS for a long time—and which I think still has some room for future improvement!

    Hard cases for coevolution

    JS isn’t quite like other programming languages, and sometimes this influences the evolution of the language in surprising ways.

    ES6 modules are a good example. Other languages have module systems. Racket has a great one. Python too. When the standard committee decided to add modules to ES6, why didn’t they just copy an existing system?

    JS is different, because it runs in web browsers. I/O can take a long time. Therefore JS needs a module system that can support loading code asynchronously. It can’t afford to serially search for modules in multiple directories, either. Copying existing systems was no good. The ES6 module system would need to do some new things.

    How this influenced the final design is an interesting story. But we’re not here to talk about modules.

    This post is about what the ES6 standard calls “keyed collections”: Set, Map, WeakSet, and WeakMap. These features are, in most respects, just like the hash tables in other languages. But the standard committee made some interesting tradeoffs along the way, because JS is different.

    Why collections?

    Anyone familiar with JS knows that there’s already something like a hash table built into the language: objects.

    A plain Object, after all, is pretty much nothing but an open-ended collection of key-value pairs. You can get, set, and delete properties, iterate over them—all the things a hash table can do. So why add a new feature at all?

    Well, many programs do use plain objects to store key-value pairs, and for programs where this works well, there is no particular reason to switch to Map or Set. Still, there are some well-known issues with using objects this way:

    • Objects being used as lookup tables can’t also have methods, without some risk of collision.

    • Therefore programs must either use Object.create(null) (rather than plain {}) or exercise care to avoid misinterpreting builtin methods (like Object.prototype.toString) as data.

    • Property keys are always strings (or, in ES6, symbols). Objects can’t be keys.

    • There’s no efficient way to ask how many properties an object has.

    ES6 adds a new concern: plain objects are not iterable, so they will not cooperate with the for-of loop, the ... operator, and so on.

    Again, there are plenty of programs where none of that really matters, and a plain object will continue to be the right choice. Map and Set are for the other cases.

    Because they are designed to avoid collisions between user data and builtin methods, the ES6 collections do not expose their data as properties. This means that expressions like obj.key or obj[key] cannot be used to access hash table data. You’ll have to write map.get(key). Also, hash table entries, unlike properties, are not inherited via the prototype chain.

    The upside is that, unlike plain Objects, Map and Set do have methods, and more methods can be added, either in the standard or in your own subclasses, without conflict.
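
    A quick sketch of the contrast:

    var map = new Map();
    map.set('toString', 42);           // no clash with builtin methods
    console.log(map.get('toString'));  // 42
    console.log(map.size);             // 1

    var obj = {};
    obj.toString = 42;   // shadows Object.prototype.toString
    String(obj);         // throws TypeError – the builtin method is gone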

    Set

    A Set is a collection of values. It’s mutable, so your program can add and remove values as it goes. So far, this is just like an array. But there are as many differences between sets and arrays as there are similarities.

    First, unlike an array, a set never contains the same value twice. If you try to add a value to a set that’s already in there, nothing happens.

    > var desserts = new Set("🍪🍦🍧🍩");
    > desserts.size
        4
    > desserts.add("🍪");
        Set [ "🍪", "🍦", "🍧", "🍩" ]
    > desserts.size
        4
    

    This example uses strings, but a Set can contain any type of JS value. Just as with strings, adding the same object or number more than once has no added effect.

    Second, a Set keeps its data organized to make one particular operation fast: membership testing.

    > // Check whether "zythum" is a word.
    > arrayOfWords.indexOf("zythum") !== -1  // slow
        true
    > setOfWords.has("zythum")               // fast
        true
    

    What you don’t get with a Set is indexing:

    > arrayOfWords[15000]
        "anapanapa"
    > setOfWords[15000]   // sets don't support indexing
        undefined
    

    Here are all the operations on sets:

    • new Set creates a new, empty set.

    • new Set(iterable) makes a new set and fills it with data from any iterable value.

    • set.size gets the number of values in the set.

    • set.has(value) returns true if the set contains the given value.

    • set.add(value) adds a value to the set. If the value was already in the set, nothing happens.

    • set.delete(value) removes a value from the set. If the value wasn’t in the set, nothing happens. Both .add() and .delete() return the set object itself, so you can chain them.

    • set[Symbol.iterator]() returns a new iterator over the values in the set. You won’t normally call this directly, but this method is what makes sets iterable. It means you can write for (v of set) {...} and so on.

    • set.forEach(f) is easiest to explain with code. It’s like shorthand for:

      for (let value of set)
          f(value, value, set);
      

      This method is analogous to the .forEach() method on arrays.

    • set.clear() removes all values from the set.

    • set.keys(), set.values(), and set.entries() return various iterators. These are provided for compatibility with Map, so we’ll talk about them below.

    Of all these features, the constructor new Set(iterable) stands out as a powerhouse, because it operates at the level of whole data structures. You can use it to convert an array to a set, eliminating duplicate values with a single line of code. Or, pass it a generator: it will run the generator to completion and collect the yielded values into a set. This constructor is also how you copy an existing Set.
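
    A quick sketch of that versatility:

    // Deduplicate an array in one line
    var unique = new Set([1, 2, 2, 3, 3, 3]);
    console.log(unique.size);  // 3

    // Copy an existing Set
    var copy = new Set(unique);

    // Collect the values yielded by a generator
    function* evens() { yield 0; yield 2; yield 4; }
    console.log(new Set(evens()).has(4));  // true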

    I promised last week to complain about the new collections in ES6. I’ll start here. As nice as Set is, there are some missing methods that would make nice additions to a future standard:

    • Functional helpers that are already present on arrays, like .map(), .filter(), .some(), and .every().

    • Non-mutating set1.union(set2) and set1.intersection(set2).

    • Methods that can operate on many values at once: set.addAll(iterable), set.removeAll(iterable), and set.hasAll(iterable).

    The good news is that all of these can be implemented efficiently using the methods provided by ES6.
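
    For instance, here is a minimal sketch of the non-mutating set operations; these helpers are not part of ES6, just an illustration of how little code they take:

    function union(set1, set2) {
      var result = new Set(set1);
      for (var value of set2) {
        result.add(value);
      }
      return result;
    }

    function intersection(set1, set2) {
      var result = new Set();
      for (var value of set1) {
        if (set2.has(value)) {
          result.add(value);
        }
      }
      return result;
    }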

    Map

    A Map is a collection of key-value pairs. Here’s what Map can do:

    • new Map returns a new, empty map.

    • new Map(pairs) creates a new map and fills it with data from an existing collection of [key, value] pairs. pairs can be an existing Map object, an array of two-element arrays, a generator that yields two-element arrays, etc.

    • map.size gets the number of entries in the map.

    • map.has(key) tests whether a key is present (like key in obj).

    • map.get(key) gets the value associated with a key, or undefined if there is no such entry (like obj[key]).

    • map.set(key, value) adds an entry to the map associating key with value, overwriting any existing entry with the same key (like obj[key] = value).

    • map.delete(key) deletes an entry (like delete obj[key]).

    • map.clear() removes all entries from the map.

    • map[Symbol.iterator]() returns an iterator over the entries in the map. The iterator represents each entry as a new [key, value] array.

    • map.forEach(f) works like this:

      for (let [key, value] of map)
        f(value, key, map);
      

      The odd argument order is, again, by analogy to Array.prototype.forEach().

    • map.keys() returns an iterator over all the keys in the map.

    • map.values() returns an iterator over all the values in the map.

    • map.entries() returns an iterator over all the entries in the map, just like map[Symbol.iterator](). In fact, it’s just another name for the same method.

    What is there to complain about? Here are some features not present in ES6 that I think would be useful:

    • A facility for default values, like Python’s collections.defaultdict.

    • A helper function, Map.fromObject(obj), to make it easy to write maps using object-literal syntax.

    Again, these features are easy to add.
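
    For example, here is one possible sketch; the helper names are hypothetical, not part of any standard:

    // Build a Map from an object literal’s own enumerable string keys
    function mapFromObject(obj) {
      var map = new Map();
      for (var key of Object.keys(obj)) {
        map.set(key, obj[key]);
      }
      return map;
    }

    // A default-value lookup in the spirit of Python’s defaultdict
    function getOrDefault(map, key, defaultValue) {
      return map.has(key) ? map.get(key) : defaultValue;
    }

    var colors = mapFromObject({red: '#f00', green: '#0f0'});
    console.log(getOrDefault(colors, 'blue', '#00f'));  // "#00f"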

    OK. Remember how I started this article with a bit about how unique concerns about running in the browser affect the design of JS language features? This is where we start to talk about that. I’ve got three examples. Here are the first two.

    JS is different, part 1: Hash tables without hash codes?

    There’s one useful feature that the ES6 collection classes do not support at all, as far as I can tell.

    Suppose we have a Set of URL objects.

    var urls = new Set;
    urls.add(new URL(location.href));  // two URL objects.
    urls.add(new URL(location.href));  // are they the same?
    alert(urls.size);  // 2
    

    These two URLs really ought to be considered equal. They have all the same fields. But in JavaScript, these two objects are distinct, and there is no way to overload the language’s notion of equality.

    Other languages support this. In Java, Python, and Ruby, individual classes can overload equality. In many Scheme implementations, individual hash tables can be created that use different equality relations. C++ supports both.

    However, all of these mechanisms require users to implement custom hashing functions and all expose the system’s default hashing function. The committee chose not to expose hash codes in JS—at least, not yet—due to open questions about interoperability and security, concerns that are not as pressing in other languages.

    JS is different, part 2: Surprise! Predictability!

    You would think that deterministic behavior from a computer could hardly be surprising. But people are often surprised when I tell them that Map and Set iteration visits entries in the order they were inserted into the collection. It’s deterministic.

    We’re used to certain aspects of hash tables being arbitrary. We’ve learned to accept it. But there are good reasons to try to avoid arbitrariness. As I wrote in 2012:

    • There is evidence that some programmers find arbitrary iteration order surprising or confusing at first. [1][2][3][4][5][6]
    • Property enumeration order is unspecified in ECMAScript, yet all the major implementations have been forced to converge on insertion order, for compatibility with the Web as it is. There is, therefore, some concern that if TC39 does not specify a deterministic iteration order, “the web will just go and specify it for us”.[7]
    • Hash table iteration order can expose some bits of object hash codes. This imposes some astonishing security concerns on the hashing function implementer. For example, an object’s address must not be recoverable from the exposed bits of its hash code. (Revealing object addresses to untrusted ECMAScript code, while not exploitable by itself, would be a bad security bug on the Web.)

    When all this was being discussed back in February 2012, I argued in favor of arbitrary iteration order. Then I set out to show by experiment that keeping track of insertion order would make a hash table too slow. I wrote a handful of C++ microbenchmarks. The results surprised me.

    And that’s how we ended up with hash tables that track insertion order in JS!

    Strong reasons to use weak collections

    Last week, we discussed an example involving a JS animation library. We wanted to store a boolean flag for every DOM object, like this:

    if (element.isMoving) {
      smoothAnimations(element);
    }
    element.isMoving = true;
    

    Unfortunately, setting an expando property on a DOM object like this is a bad idea, for reasons discussed in the original post.

    That post showed how to solve this problem using symbols. But couldn’t we do the same thing using a Set? It might look like this:

    if (movingSet.has(element)) {
      smoothAnimations(element);
    }
    movingSet.add(element);
    

    There is only one drawback: Map and Set objects keep a strong reference to every key and value they contain. This means that if a DOM element is removed from the document and dropped, garbage collection can’t recover that memory until that element is removed from movingSet as well. Libraries typically have mixed success, at best, in imposing complex clean-up-after-yourself requirements on their users. So this could lead to memory leaks.

    ES6 offers a surprising fix for this. Make movingSet a WeakSet rather than a Set. Memory leak solved!

    This means it is possible to solve this particular problem using either a weak collection or symbols. Which is better? A full discussion of the tradeoffs would, unfortunately, make this post a little too long. If you can use a single symbol across the whole lifetime of the web page, that’s probably fine. If you end up wanting many short-lived symbols, that’s a danger sign: consider using WeakMaps instead to avoid leaking memory.

    WeakMap and WeakSet

    WeakMap and WeakSet are specified to behave exactly like Map and Set, but with a few restrictions:

    • WeakMap supports only new, .has(), .get(), .set(), and .delete().

    • WeakSet supports only new, .has(), .add(), and .delete().

    • The values stored in a WeakSet and the keys stored in a WeakMap must be objects.

    Note that neither type of weak collection is iterable. You can’t get entries out of a weak collection except by asking for them specifically, passing in the key you’re interested in.

    These carefully crafted restrictions enable the garbage collector to collect dead objects out of live weak collections. The effect is similar to what you could get with weak references or weak-keyed dictionaries, but ES6 weak collections get the memory management benefits without exposing the fact that GC happened to scripts.
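
    For instance, a minimal sketch of per-element state that cleans itself up (the helper names are just illustrative):

    // Entries vanish automatically once an element is garbage-collected
    var animationState = new WeakMap();

    function startMoving(element) {
      animationState.set(element, {since: Date.now()});
    }

    function isMoving(element) {
      return animationState.has(element);
    }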

    JS is different, part 3: Hiding GC nondeterminism

    Behind the scenes, the weak collections are implemented as ephemeron tables.

    In short, a WeakSet does not keep a strong reference to the objects it contains. When an object in a WeakSet is collected, it is simply removed from the WeakSet. WeakMap is similar. It does not keep a strong reference to any of its keys. If a key is alive, the associated value is alive.

    Why accept these restrictions? Why not just add weak references to JS?

    Again, the standard committee has been very reluctant to expose nondeterministic behavior to scripts. Poor cross-browser compatibility is the bane of Web development. Weak references expose implementation details of the underlying garbage collector—the very definition of platform-specific arbitrary behavior. Of course applications shouldn’t depend on platform-specific details, but weak references also make it very hard to know just how much you’re depending on the GC behavior in the browser you’re currently testing. They’re hard to reason about.

    By contrast, the ES6 weak collections have a more limited feature set, but that feature set is rock solid. The fact that a key or value has been collected is never observable, so applications can’t end up depending on it, even by accident.

    This is one case where a Web-specific concern has led to a surprising design decision that makes JS a better language.

    When can I use collections in my code?

    All four collection classes are currently shipping in Firefox, Chrome, Microsoft Edge, and Safari. To support older browsers, use a polyfill, like es6-collections.

    WeakMap was first implemented in Firefox by Andreas Gal, who went on to a stint as Mozilla’s CTO. Tom Schuster implemented WeakSet. I implemented Map and Set. Thanks to Tooru Fujisawa for contributing several patches in this area.


  4. ES6 In Depth: Using ES6 today with Babel and Broccoli

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    ES6 is here, and people are already talking about ES7, what the future holds, and what shiny features a new standard can offer. As web developers, we wonder how we can make use of it all. More than once, in previous ES6 In Depth posts, we’ve encouraged you to start coding in ES6, with a little help from some interesting tools. We’ve teased you with the possibility:

    If you’d like to use this new syntax on the Web, you can use Babel or Google’s Traceur to translate your ES6 code to web-friendly ES5.

    Today we’re going to show you step-by-step how it is done. The above-mentioned tools are called transpilers. A transpiler is also known as a source-to-source compiler—a compiler that translates between programming languages operating at comparable levels of abstraction. Transpilers let us write code using ES6 while also guaranteeing that we’ll be able to execute the code in every browser.

    Transpilation our salvation

    A transpiler is very easy to use. You can describe what it does in only two steps:

    1. We write code with ES6 syntax.

    let q = 99;
    let myVariable = `${q} bottles of beer on the wall, ${q} bottles of beer.`;
    

    2. We use the code above as input for the transpiler, which will process it and produce the following output:

    "use strict";
    
    var q = 99;
    var myVariable = "" + q + " bottles of beer on the wall, " + q + " bottles of beer.";
    

    This is the good old JavaScript we know. It can be used in any browser.

    The internals of how a transpiler goes from input to output are highly complex and fall out of scope for this article. Just as we can drive a car without knowing all the internal engine mechanics, today we’ll leave the transpiler as a black box that is able to process our code.

    Babel in action

    There are a couple of different ways to use Babel in a project. There is a command line tool, which you can use with commands of the form:

    babel script.js --out-file script-compiled.js
    

    A browser-ready version is also available. You can include Babel as a regular JS library and then you can place your ES6 code in script tags with the type "text/babel".

    <script src="node_modules/babel-core/browser.js"></script>
    <script type="text/babel">
    // Your ES6 code
    </script>
    

    These methods do not scale when your code base starts to grow and you start splitting everything into multiple files and folders. At that moment, you’ll need a build tool and a way to integrate Babel with a build pipeline.

    In the following sections, we’ll integrate Babel into a build tool, Broccoli.js, and we’ll write and execute our first lines of ES6 through a couple of examples. In case you run into trouble, you can review the complete source code here: broccoli-babel-examples. Inside the repository you’ll find three sample projects:

    1. es6-fruits
    2. es6-website
    3. es6-modules

    Each one builds on the previous example. We start with the bare minimum and progress to a general solution, which can be used as the starting point of an ambitious project. In this post, we’ll cover the first two examples in detail. After we are done, you’ll be able to read and understand the code in the third example on your own.

    If you are thinking “I’ll just wait for browsers to support the new features,” you’ll be left behind. Full compliance, if it ever happens, will take a long time. Transpilers are here to stay; new ECMAScript standards are planned to be released yearly. So, we’ll continue to see new standards released more often than uniform browser platforms. Hop in now and take advantage of the new features.

    Our first Broccoli & Babel project

    Broccoli is a tool designed to build projects as quickly as possible. You can uglify and minify files, among many other things, through the use of Broccoli plugins. It saves us the burden of handling files, directories, and executing commands each time we introduce changes to a project. Think of it as:

    Comparable to the Rails asset pipeline in scope, though it runs on Node and is backend-agnostic.

    Project setup

    Node

    As you might have guessed, you’ll have to install Node 0.11 or later.

    If you are on a Unix system, avoid installing Node from the package manager (apt, yum); that way you avoid using root privileges during installation. It’s best to manually install the binaries, provided at the previous link, with your current user. You can read why using root is not recommended in Do not sudo npm; there you’ll find other installation alternatives.

    Broccoli

    We’ll set up our Broccoli project first with:

    mkdir es6-fruits
    cd es6-fruits
    npm init
    # Create an empty file called Brocfile.js
    touch Brocfile.js
    

    Now we install broccoli and broccoli-cli:

    # the broccoli library
    npm install --save-dev broccoli
    # command line tool
    npm install -g broccoli-cli
    

    Write some ES6

    We’ll create a src folder and inside we’ll put a fruits.js file.

    mkdir src
    vim src/fruits.js
    

    In our new file we’ll write a small script using ES6 syntax.

    let fruits = [
      {id: 100, name: 'strawberry'},
      {id: 101, name: 'grapefruit'},
      {id: 102, name: 'plum'}
    ];
    
    for (let fruit of fruits) {
      let message = `ID: ${fruit.id} Name: ${fruit.name}`;
    
      console.log(message);
    }
    
    console.log(`List total: ${fruits.length}`);
    

    The code sample above makes use of three ES6 features:

    1. let for local scope declarations (to be discussed in an upcoming blog post)
    2. for-of loops
    3. template strings

    Save the file and try to execute it.

    node src/fruits.js
    

    It won’t work yet, but we are about to make it executable by Node and any browser.

    let fruits = [
        ^^^^^^
    SyntaxError: Unexpected identifier
    

    Transpilation time

    Now we’ll use Broccoli to load our code and push it through Babel. We’ll edit the file Brocfile.js and add this code to it:

    // import the babel plugin
    var babel = require('broccoli-babel-transpiler');
    
    // grab the source and transpile it in 1 step
    var fruits = babel('src'); // src/*.js
    
    module.exports = fruits;
    

    Notice that we require broccoli-babel-transpiler, a Broccoli plugin that wraps around the Babel library, so we must install it with:

    npm install --save-dev broccoli-babel-transpiler
    

    Now we can build our project and execute our script with:

    broccoli build dist # compile
    node dist/fruits.js # execute ES5
    

    The output should look like this:

    ID: 100 Name: strawberry
    ID: 101 Name: grapefruit
    ID: 102 Name: plum
    List total: 3
    

    That was easy! You can open dist/fruits.js to see what the transpiled code looks like. A nice feature of the Babel transpiler is that it produces readable code.

    Writing ES6 code for a website

    For our second example we’ll take it up a notch. First, exit the es6-fruits folder and create a new directory es6-website using the steps listed under Project setup above.

    In the src folder we’ll create three files:

    src/index.html

    <!DOCTYPE html>
    <html>
      <head>
        <title>ES6 Today</title>
      </head>
      <style>
        body {
          border: 2px solid #9a9a9a;
          border-radius: 10px;
          padding: 6px;
          font-family: monospace;
          text-align: center;
        }
        .color {
          padding: 1rem;
          color: #fff;
        }
      </style>
      <body>
        <h1>ES6 Today</h1>
        <div id="info"></div>
        <hr>
        <div id="content"></div>
    
        <script src="//code.jquery.com/jquery-2.1.4.min.js"></script>
        <script src="js/my-app.js"></script>
      </body>
    </html>
    

    src/print-info.js

    function printInfo() {
      $('#info')
      .append('<p>minimal website example with' +
              'Broccoli and Babel</p>');
    }
    
    $(printInfo);
    

    src/print-colors.js

    // ES6 Generator
    function* hexRange(start, stop, step) {
      for (var i = start; i < stop; i += step) {
        yield i;
      }
    }
    
    function printColors() {
      var content$ = $('#content');
    
      // contrived example
      for ( var hex of hexRange(900, 999, 10) ) {
        var newDiv = $('<div>')
          .attr('class', 'color')
          .css({ 'background-color': `#${hex}` })
          .append(`hex code: #${hex}`);
        content$.append(newDiv);
      }
    }
    
    $(printColors);
    

    You might have noticed this bit: function* hexRange — yes, that’s an ES6 generator. This feature is not currently supported in all browsers. To be able to use it, we’ll need a polyfill. Babel provides this and we’ll put it to use very soon.

    The next step is to merge all the JS files and use them within a website. The hardest part is writing our Brocfile. This time we install 4 plugins:

    npm install --save-dev broccoli-babel-transpiler
    npm install --save-dev broccoli-funnel
    npm install --save-dev broccoli-concat
    npm install --save-dev broccoli-merge-trees
    

    Let’s put them to use:

    // Babel transpiler
    var babel = require('broccoli-babel-transpiler');
    // filter trees (subsets of files)
    var funnel = require('broccoli-funnel');
    // concatenate trees
    var concat = require('broccoli-concat');
    // merge trees
    var mergeTrees = require('broccoli-merge-trees');
    
    // Transpile the source files
    var appJs = babel('src');
    
    // Grab the polyfill file provided by the Babel library
    var babelPath = require.resolve('broccoli-babel-transpiler');
    babelPath = babelPath.replace(/\/index.js$/, '');
    babelPath += '/node_modules/babel-core';
    var browserPolyfill = funnel(babelPath, {
      files: ['browser-polyfill.js']
    });
    
    // Add the Babel polyfill to the tree of transpiled files
    appJs = mergeTrees([browserPolyfill, appJs]);
    
    // Concatenate all the JS files into a single file
    appJs = concat(appJs, {
      // we specify a concatenation order
      inputFiles: ['browser-polyfill.js', '**/*.js'],
      outputFile: '/js/my-app.js'
    });
    
    // Grab the index file
    var index = funnel('src', {files: ['index.html']});
    
    // Grab all our trees and
    // export them as a single and final tree
    module.exports = mergeTrees([index, appJs]);
    

    Time to build and execute our code.

    broccoli build dist
    

    This time you should see the following structure in the dist folder:

    $> tree dist/
    dist/
    ├── index.html
    └── js
        └── my-app.js
    

    That is a static website you can serve with any server to verify that the code is working. For instance:

    cd dist/
    python -m SimpleHTTPServer
    # visit http://localhost:8000/
    

    You should see this:

    simple ES6 website

    More fun with Babel and Broccoli

    The second example above gives an idea of how much we can accomplish with Babel. It might be enough to keep you going for a while. If you want to do more with ES6, Babel, and Broccoli, you should check out this repository: broccoli-babel-boilerplate. It is also a Broccoli+Babel setup that takes things up at least two notches. This boilerplate handles modules, imports, and unit testing.

    You can try an example of that configuration in action here: es6-modules. All the magic is in the Brocfile and it’s very similar to what we have done already.


    As you can see, Babel and Broccoli really do make it quite practical to use ES6 features in web sites right now. Thanks to Gastón I. Silva for contributing this week’s post!

    Next week, ES6 In Depth starts a two-week summer break. This series has covered a lot of ground, but some of ES6’s most powerful features are yet to come. So please join us when we return with new content on July 9.

    Jason Orendorff

    ES6 In Depth Editor

  5. ES6 In Depth: Symbols

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    What are ES6 symbols?

    Symbols are not logos.

    They’re not little pictures you can use in your code.

    let 😻 = 😺 × 😍;  // SyntaxError
    

    They’re not a literary device that stands for something else.

    They’re definitely not the same thing as cymbals.

    (It is not a good idea to use cymbals in programming. They have a tendency to crash.)

    So, what are symbols?

    The seventh type

    Since JavaScript was first standardized in 1997, there have been six types. Until ES6, every value in a JS program fell into one of these categories.

    • Undefined
    • Null
    • Boolean
    • Number
    • String
    • Object

    Each type is a set of values. The first five sets are all finite. There are, of course, only two Boolean values, true and false, and they aren’t making new ones. There are rather more Number and String values. The standard says there are 18,437,736,874,454,810,627 different Numbers (including NaN, the Number whose name is short for “Not a Number”). That’s nothing compared to the number of different possible Strings, which I think is (2^144,115,188,075,855,872 − 1) ÷ 65,535 …though I may have miscounted.

    The set of Object values, however, is open-ended. Each object is a unique, precious snowflake. Every time you open a Web page, a rush of new objects is created.

    ES6 symbols are values, but they’re not strings. They’re not objects. They’re something new: a seventh type of value.

    Let’s talk about a scenario where they might come in handy.

    One simple little boolean

    Sometimes it would be awfully convenient to stash some extra data on a JavaScript object that really belongs to someone else.

    For example, suppose you’re writing a JS library that uses CSS transitions to make DOM elements zip around on the screen. You’ve noticed that trying to apply multiple CSS transitions to a single div at the same time doesn’t work. It causes ugly, discontinuous “jumps”. You think you can fix this, but first you need a way to find out if a given element is already moving.

    How can you solve this?

    One way is to use CSS APIs to ask the browser if the element is moving. But that sounds like overkill. Your library should already know the element is moving; it’s the code that set it moving in the first place!

    What you really want is a way to keep track of which elements are moving. You could keep an array of all moving elements. Each time your library is called upon to animate an element, you can search the array to see if that element is already there.

    Hmm. A linear search will be slow if the array is big.

    What you really want to do is just set a flag on the element:

    if (element.isMoving) {
      smoothAnimations(element);
    }
    element.isMoving = true;
    

    There are some potential problems with this too. They all relate to the fact that your code isn’t the only code using the DOM.

    1. Other code using for-in or Object.keys() may stumble over the property you created.

    2. Some other clever library author may have thought of this technique first, and your library would interact badly with that existing library.

    3. Some other clever library author may think of it in the future, and your library would interact badly with that future library.

    4. The standard committee may decide to add an .isMoving() method to all elements. Then you’re really hosed!

    Of course you can address the last three problems by choosing a string so tedious or so silly that nobody else would ever name anything that:

    if (element.__$jorendorff_animation_library$PLEASE_DO_NOT_USE_THIS_PROPERTY$isMoving__) {
      smoothAnimations(element);
    }
    element.__$jorendorff_animation_library$PLEASE_DO_NOT_USE_THIS_PROPERTY$isMoving__ = true;
    

    This seems not quite worth the eye strain.

    You could generate a practically unique name for the property using cryptography:

    // get 1024 Unicode characters of gibberish
    var isMoving = SecureRandom.generateName();
    
    ...
    
    if (element[isMoving]) {
      smoothAnimations(element);
    }
    element[isMoving] = true;
    

    The object[name] syntax lets you use literally any string as a property name. So this will work: collisions are virtually impossible, and your code looks OK.

    But this is going to lead to a bad debugging experience. Every time you console.log() an element with that property on it, you’ll be looking at a huge string of garbage. And what if you need more than one property like this? How do you keep them straight? They’ll have different names every time you reload.

    Why is this so hard? We just want one little boolean!

    Symbols are the answer

    Symbols are values that programs can create and use as property keys without risking name collisions.

    var mySymbol = Symbol();
    

    Calling Symbol() creates a new symbol, a value that’s not equal to any other value.

    Just like a string or number, you can use a symbol as a property key. Because it’s not equal to any string, this symbol-keyed property is guaranteed not to collide with any other property.

    obj[mySymbol] = "ok!";  // guaranteed not to collide
    console.log(obj[mySymbol]);  // ok!
    

    Here is how you could use a symbol in the situation discussed above:

    // create a unique symbol
    var isMoving = Symbol("isMoving");
    
    ...
    
    if (element[isMoving]) {
      smoothAnimations(element);
    }
    element[isMoving] = true;
    

    A few notes about this code:

    • The string "isMoving" in Symbol("isMoving") is called a description. It’s helpful for debugging. It’s shown when you write the symbol to console.log(), when you convert it to a string using .toString(), and possibly in error messages. That’s all.

    • element[isMoving] is called a symbol-keyed property. It’s simply a property whose name is a symbol rather than a string. Apart from that, it is in every way a normal property.

    • Like array elements, symbol-keyed properties can’t be accessed using dot syntax, as in obj.name. They must be accessed using square brackets.

    • It’s trivial to access a symbol-keyed property if you’ve already got the symbol. The above example shows how to get and set element[isMoving], and we could also ask if (isMoving in element) or even delete element[isMoving] if we needed to.

    • On the other hand, all of that is only possible as long as isMoving is in scope. This makes symbols a mechanism for weak encapsulation: a module that creates a few symbols for itself can use them on whatever objects it wants to, without fear of colliding with properties created by other code.

    Because symbol keys were designed to avoid collisions, JavaScript’s most common object-inspection features simply ignore symbol keys. A for-in loop, for instance, only loops over an object’s string keys. Symbol keys are skipped. Object.keys(obj) and Object.getOwnPropertyNames(obj) do the same. But symbols are not exactly private: it is possible to use the new API Object.getOwnPropertySymbols(obj) to list the symbol keys of an object. Another new API, Reflect.ownKeys(obj), returns both string and symbol keys. (We’ll discuss the Reflect API in full in an upcoming post.)
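
    A quick sketch of what is and isn’t visible:

    var secret = Symbol("secret");
    var obj = {visible: 1};
    obj[secret] = 2;

    console.log(Object.keys(obj));                   // ["visible"]
    console.log(Object.getOwnPropertyNames(obj));    // ["visible"]
    console.log(Object.getOwnPropertySymbols(obj));  // [Symbol(secret)]
    console.log(Reflect.ownKeys(obj));               // ["visible", Symbol(secret)]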

    Libraries and frameworks will likely find many uses for symbols, and as we’ll see later, the language itself is using them for a wide range of purposes.

    But what are symbols, exactly?

    > typeof Symbol()
    "symbol"
    

    Symbols aren’t exactly like anything else.

    They’re immutable once created. You can’t set properties on them (and if you try that in strict mode, you’ll get a TypeError). They can be property names. These are all string-like qualities.

    On the other hand, each symbol is unique, distinct from all others (even others that have the same description) and you can easily create new ones. These are object-like qualities.

    ES6 symbols are similar to the more traditional symbols in languages like Lisp and Ruby, but not so closely integrated into the language. In Lisp, all identifiers are symbols. In JS, identifiers and most property keys are still considered strings. Symbols are just an extra option.

    One quick caveat about symbols: unlike almost anything else in the language, they can’t be automatically converted to strings. Trying to concatenate a symbol with strings will result in a TypeError.

    > var sym = Symbol("<3");
    > "your symbol is " + sym
    // TypeError: can't convert symbol to string
    > `your symbol is ${sym}`
    // TypeError: can't convert symbol to string
    

    You can avoid this by explicitly converting the symbol to a string, writing String(sym) or sym.toString().
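
    For example, both conversions simply produce the symbol’s description wrapped in Symbol(...):

    > var sym = Symbol("<3");
    > "your symbol is " + String(sym)
    // "your symbol is Symbol(<3)"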

    Three sets of symbols

    There are three ways to obtain a symbol; a short example after the list shows all three.

    • Call Symbol(). As we already discussed, this returns a new unique symbol each time it’s called.

    • Call Symbol.for(string). This accesses a set of existing symbols called the symbol registry. Unlike the unique symbols defined by Symbol(), symbols in the symbol registry are shared. If you call Symbol.for("cat") thirty times, it will return the same symbol each time. The registry is useful when multiple web pages, or multiple modules within the same web page, need to share a symbol.

    • Use symbols like Symbol.iterator, defined by the standard. A few symbols are defined by the standard itself. Each one has its own special purpose.
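
    All three are easy to tell apart in code:

    // 1. Unique symbols: no two are alike, even with the same description.
    Symbol("cat") === Symbol("cat");          // false
    
    // 2. Registry symbols: the same string always yields the same symbol.
    Symbol.for("cat") === Symbol.for("cat");  // true
    
    // 3. Well-known symbols: predefined by the standard.
    typeof Symbol.iterator;                   // "symbol"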

    If you still aren’t sure whether symbols will be all that useful, this last category is interesting, because it shows how symbols have already proven useful in practice.

    How the ES6 spec is using well-known symbols

    We’ve already seen one way that ES6 uses a symbol to avoid conflicts with existing code. A few weeks ago, in the post on iterators, we saw that the loop for (var item of myArray) starts by calling myArray[Symbol.iterator](). I mentioned that this method could have been called myArray.iterator(), but a symbol is better for backward compatibility.

    Now that we know what symbols are all about, it’s easy to understand why this was done and what it means.

    Here are a few of the other places where ES6 uses well-known symbols. (These features are not implemented in Firefox yet.)

    • Making instanceof extensible. In ES6, the expression object instanceof constructor is specified as a method of the constructor: constructor[Symbol.hasInstance](object). This means it is extensible.

    • Eliminating conflicts between new features and old code. This is seriously obscure, but we found that certain ES6 Array methods broke existing web sites just by being there. Other Web standards had similar problems: simply adding new methods in the browser would break existing sites. However, the breakage was mainly caused by something called dynamic scoping, so ES6 introduces a special symbol, Symbol.unscopables, that Web standards can use to prevent certain methods from getting involved in dynamic scoping.

    • Supporting new kinds of string-matching. In ES5, str.match(myObject) tried to convert myObject to a RegExp. In ES6, it first checks to see if myObject has a method myObject[Symbol.match](str). Now libraries can provide custom string-parsing classes that work in all the places where RegExp objects work.
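
    To make the last of these concrete, here’s a sketch of the Symbol.match hook, assuming an engine that implements it (PalindromeMatcher is an invented example class, not part of any spec):

    // Any object with a [Symbol.match] method can be passed to
    // str.match() in place of a RegExp.
    class PalindromeMatcher {
      [Symbol.match](str) {
        var reversed = str.split("").reverse().join("");
        return str === reversed ? [str] : null;
      }
    }
    
    "racecar".match(new PalindromeMatcher());  // ["racecar"]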

    Each of these uses is quite narrow. It’s hard to see any of these features by themselves having a major impact in my day-to-day code. The long view is more interesting. Well-known symbols are JavaScript’s improved version of the __doubleUnderscores in PHP and Python. The standard will use them in the future to add new hooks into the language with no risk to your existing code.

    When can I use ES6 symbols?

    Symbols are implemented in Firefox 36 and Chrome 38. I implemented them for Firefox myself, so if your symbols ever act like cymbals, you’ll know who to talk to.

    To support browsers that do not yet have native support for ES6 symbols, you can use a polyfill, such as core-js. Since symbols are not exactly like anything previously in the language, the polyfill isn’t perfect. Read the caveats.

    Next week, we’ll have two new posts. First, we’ll cover some long-awaited features that are finally coming to JavaScript in ES6—and complain about them. We’ll start with two features that date back almost to the dawn of programming. We’ll continue with two features that are very similar, but powered by ephemerons. So please join us next week as we look at ES6 collections in depth.

    And, stick around for a bonus post by Gastón Silva on a topic that isn’t an ES6 feature at all, but might provide the nudge you need to start using ES6 in your own projects. See you then!

  6. The state of Web Components

    Web Components have been on developers’ radars for quite some time now. They were first introduced by Alex Russell at Fronteers Conference 2011. The concept shook the community up and became the topic of many future talks and discussions.

    In 2013 a Web Components-based framework called Polymer was released by Google to kick the tires of these new APIs, get community feedback and add some sugar and opinion.

    By now, 4 years on, Web Components should be everywhere, but in reality Chrome is the only browser with ‘some version’ of Web Components. Even with polyfills it’s clear Web Components won’t be fully embraced by the community until the majority of browsers are on-board.

    Why has this taken so long?

    To cut a long story short, vendors couldn’t agree.

    Web Components were a Google effort, and there was little negotiation with other browsers before shipping. Like most negotiations in life, parties that don’t feel involved lack enthusiasm and tend not to agree.

    Web Components were an ambitious proposal. Initial APIs were high-level and complex to implement (albeit for good reasons), which only added to contention and disagreement between vendors.

    Google pushed forward: they sought feedback and gained community buy-in. But in hindsight, until other vendors shipped, usability remained blocked.

    Polyfills meant that, in theory, Web Components could work in browsers that hadn’t yet implemented them, but polyfills have never been accepted as ‘suitable for production’.

    Aside from all this, Microsoft haven’t been in a position to add many new DOM APIs due to the Edge work (nearing completion), and Apple have been focusing on alternative features for Safari.

    Custom Elements

    Of all the Web Components technologies, Custom Elements have been the least contentious. There is general agreement on the value of being able to define how a piece of UI looks and behaves and being able to distribute that piece cross-browser and cross-framework.

    ‘Upgrade’

    The term ‘upgrade’ refers to when an element transforms from a plain old HTMLElement into a shiny custom element with its defined life-cycle and prototype. Today, when elements are upgraded, their createdCallback is called.

    var proto = Object.create(HTMLElement.prototype);
    proto.createdCallback = function() { ... };
    document.registerElement('x-foo', { prototype: proto });

    There are five proposals so far from multiple vendors; two stand out as holding the most promise.

    ‘Dmitry’

    An evolved version of the createdCallback pattern that works well with ES6 classes. The createdCallback concept lives on, but sub-classing is more conventional.

    class MyEl extends HTMLElement {
      createdCallback() { ... }
    }
    
    document.registerElement("my-el", MyEl);

    Like in today’s implementation, the custom element begins life as an HTMLUnknownElement; some time later, the prototype is swapped (or ‘swizzled’) with the registered prototype and the createdCallback is called.

    The downside of this approach is that it’s different from how the platform itself behaves. Elements are ‘unknown’ at first, then transform into their final form at some point in the future, which can lead to developer confusion.

    Synchronous constructor

    The constructor registered by the developer is invoked by the parser at the point the custom element is created and inserted into the tree.

    class MyEl extends HTMLElement {
      constructor() { ... }
    }
    
    document.registerElement("my-el", MyEl);

    Although this seems sensible, it means that any custom elements in the initial downloaded document will fail to upgrade if the scripts that contain their registerElement definition are loaded asynchronously. This is not helpful heading into a world of asynchronous ES6 modules.

    Additionally synchronous constructors come with platform issues related to .cloneNode().

    A direction is expected to be decided by vendors at a face-to-face meeting in July 2015.

    is=""

    The is attribute gives developers the ability to layer the behaviour of a custom element on top of a standard built-in element.

    <input type="text" is="my-text-input">

    Arguments for

    1. Allows extending the built-in features of an element that aren’t exposed as primitives (e.g. accessibility characteristics, <form> controls, <template>).
    2. Provides a means to ‘progressively enhance’ an element, so that it remains functional without JavaScript.

    Arguments against

    1. Syntax is confusing.
    2. It side-steps the underlying problem that we’re missing many key accessibility primitives in the platform.
    3. It side-steps the underlying problem that we don’t have a way to properly extend built-in elements.
    4. Use-cases are limited; as soon as developers introduce Shadow DOM, they lose all built-in accessibility features.

    Consensus

    It is generally agreed that the is attribute is a ‘wart’ on the Custom Elements spec. Google has already implemented is and sees it as a stop-gap until lower-level primitives are exposed. Right now Mozilla and Apple would rather ship a Custom Elements V1 sooner and address this problem properly in a V2 without polluting the platform with ‘warts’.

    HTML as Custom Elements is a project by Domenic Denicola that attempts to rebuild built-in HTML elements with custom elements in an attempt to uncover DOM primitives the platform is missing.

    Shadow DOM

    Shadow DOM yielded the most contention by far between vendors. So much so that features had to be split into a ‘V1’ and ‘V2’ agenda to help reach agreement quicker.

    Distribution

    Distribution is the phase whereby children of a shadow host get visually ‘projected’ into slots inside the host’s Shadow DOM. This is the feature that enables your component to make use of content the user nests inside it.

    Current API

    The current API is fully declarative. Within the Shadow DOM you can use special <content> elements to define where you want the host’s children to be visually inserted.

    <content select="header"></content>

    Both Apple and Microsoft pushed back on this approach due to concerns around complexity and performance.

    A new Imperative API

    Even at the face-to-face meeting, agreement couldn’t be reached on a declarative API, so all vendors agreed to pursue an imperative solution.

    All four vendors (Microsoft, Google, Apple and Mozilla) were tasked with specifying this new API before a July 2015 deadline. So far there have been three suggestions. The simplest of the three looks something like:

    var shadow = host.createShadowRoot({
      distribute: function(nodes) {
        var slot = shadow.querySelector('content');
        for (var i = 0; i < nodes.length; i++) {
          slot.add(nodes[i]);
        }
      }
    });
    
    shadow.innerHTML = '<content></content>';
    
    // Call initially ...
    shadow.distribute();
    
    // then hook up to MutationObserver

    The main obstacle is timing. If the children of the host node change and we redistribute when the MutationObserver callback fires, asking for a layout property will return an incorrect result.

    myHost.appendChild(someElement);
    someElement.offsetTop; //=> old value
    
    // distribute on mutation observer callback (async)
    
    someElement.offsetTop; //=> new value

    Calling offsetTop will perform a synchronous layout before distribution!

    This might not seem like the end of the world, but scripts and browser internals often depend on the value of offsetTop being correct to perform many different operations, such as scrolling elements into view.

    If these problems can’t be solved we may see a retreat back to discussions over a declarative API. This will either be in the form of the current <content select> style, or the newly proposed ‘named slots’ API (from Apple).

    A new Declarative API – ‘Named Slots’

    The ‘named slots’ proposal is a simpler variation of the current ‘content select’ API, whereby the component user must explicitly label their content with the slot they wish it to be distributed to.

    Shadow Root of <x-page>:

    <slot name="header"></slot>
    <slot></slot>
    <slot name="footer"></slot>
    <div>some shadow content</div>
    

    Usage of <x-page>:

    <x-page>
      <header slot="header">header</header>
      <footer slot="footer">footer</footer>
      <h1>my page title</h1>
      <p>my page content</p>
    </x-page>

    Composed/rendered tree (what the user sees):

    <x-page>
      <header slot="header">header</header>
      <h1>my page title</h1>
      <p>my page content</p>
      <footer slot="footer">footer</footer>
      <div>some shadow content</div>
    </x-page>

    The browser looks at the direct children of the shadow host (myXPage.children) and checks whether any of them have a slot attribute that matches the name of a <slot> element in the host’s shadowRoot.

    When a match is found, the node is visually ‘distributed’ in place of the corresponding <slot> element. Any children left undistributed at the end of this matching process are distributed to a default (unnamed) <slot> element (if one exists).

    Arguments for

    1. Distribution is more explicit, easier to understand, less ‘magic’.
    2. Distribution is simpler for the engine to compute.

    Arguments against

    1. Doesn’t explain how built-in elements, like <select>, work.
    2. Decorating content with slot attributes is more work for the user.
    3. Less expressive.

    ‘closed’ vs. ‘open’

    When a shadowRoot is ‘closed’ it cannot be accessed via myHost.shadowRoot. This gives a component author some assurance that users won’t poke into implementation details, similar to how you can use closures to keep things private.

    Apple felt strongly that this was an important feature that they would block on. They argued that implementation details should never be exposed to the outside world and that ‘closed’ mode would be a required feature when ‘isolated’ custom elements became a thing.

    Google on the other hand felt that ‘closed’ shadow roots would prevent some accessibility and component tooling use-cases. They argued that it’s impossible to accidentally stumble into a shadowRoot and that if people want to they likely have a good reason. JS/DOM is open, let’s keep it that way.

    At the April meeting it became clear that to move forward, ‘mode’ needed to be a feature, but vendors were struggling to reach agreement on whether this should default to ‘open’ or ‘closed’. As a result, all agreed that for V1 ‘mode’ would be a required parameter, and thus wouldn’t need a specified default.

    element.createShadowRoot({ mode: 'open' });
    element.createShadowRoot({ mode: 'closed' });

    Shadow piercing combinators

    A ‘piercing combinator’ is a special CSS ‘combinator’ that can target elements inside a shadow root from the outside world. An example is /deep/, later renamed to >>>:

    .foo >>> div { color: red }

    When Web Components were first specified it was thought that piercing combinators were required, but after looking at how they were being used, it became clear they mostly brought problems, making it too easy to break the style boundaries that make Web Components so appealing.

    Performance

    Style calculation can be incredibly fast inside a tightly scoped Shadow DOM if the engine doesn’t have to take into consideration any outside selectors or state. The very presence of piercing combinators forbids these kinds of optimisations.

    Alternatives

    Dropping shadow piercing combinators doesn’t mean that users will never be able to customize the appearance of a component from the outside.

    CSS custom-properties (variables)

    In Firefox OS we’re using CSS Custom Properties to expose specific style properties that can be defined (or overridden) from the outside.

    External (user):

    x-foo { --x-foo-border-radius: 10px; }
    

    Internal (author):

    .internal-part { border-radius: var(--x-foo-border-radius, 0); }

    Custom pseudo-elements

    We have also seen interest from several vendors in reintroducing the ability to define custom pseudo-selectors that would expose given internal parts to be styled (similar to how we style parts of <input type="range"> today).

    x-foo::my-internal-part { ... }

    This will likely be considered for a Shadow DOM V2 specification.

    Mixins – @extend

    There is a proposed specification to bring Sass’s @extend behaviour to CSS. This would be a useful tool for component authors, allowing users to provide a ‘bag’ of properties to apply to a specific internal part.

    External (user):

    .x-foo-part {
      background-color: red;
      border-radius: 4px;
    }

    Internal (author):

    .internal-part {
      @extend .x-foo-part;
    }

    Multiple shadow roots

    Why would I want more than one shadow root on the same element? I hear you ask. The answer is: inheritance.

    Let’s imagine I’m writing an <x-dialog> component. Within this component I write all the markup, styling, and interactions to give me an opening and closing dialog window.

    <x-dialog>
      <h1>My title</h1>
      <p>Some details</p>
      <button>Cancel</button>
      <button>OK</button>
    </x-dialog>

    The shadow root pulls any user-provided content into div.inner via the <content> insertion point.

    <div class="outer">
      <div class="inner">
        <content></content>
      </div>
    </div>

    I also want to create <x-dialog-alert> that looks and behaves just like <x-dialog> but with a more restricted API, a bit like alert('foo').

    <x-dialog-alert>foo</x-dialog-alert>

    var proto = Object.create(XDialog.prototype);
    
    proto.createdCallback = function() {
      XDialog.prototype.createdCallback.call(this);
      this.createShadowRoot();
      this.shadowRoot.innerHTML = templateString;
    };
    
    document.registerElement('x-dialog-alert', { prototype: proto });
    

    The new component will have its own shadow root, but it’s designed to work on top of the parent class’s shadow root. The <shadow> represents the ‘older’ shadow root and allows us to project content inside it.

    <shadow>
      <h1>Alert</h1>
      <content></content>
      <button>OK</button>
    </shadow>

    Once you get your head round multiple shadow roots, they become a powerful concept. The downside is they bring a lot of complexity and introduce a lot of edge cases.

    Inheritance without multiple shadows

    Inheritance is still possible without multiple shadow roots, but it involves manually mutating the super class’s shadow root.

    
    var proto = Object.create(XDialog.prototype);
    
    proto.createdCallback = function() {
      XDialog.prototype.createdCallback.call(this);
      var inner = this.shadowRoot.querySelector('.inner');
    
      var h1 = document.createElement('h1');
      h1.textContent = 'Alert';
      inner.insertBefore(h1, inner.children[0]);
    
      var button = document.createElement('button');
      button.textContent = 'OK';
      inner.appendChild(button);
    
      ...
    };
    
    document.registerElement('x-dialog-alert', { prototype: proto });
    

    The downsides of this approach are:

    1. Not as elegant.
    2. Your sub-component is dependent on the implementation details of the super-component.
    3. This wouldn’t be possible if the super component’s shadow root was ‘closed’, as this.shadowRoot would be undefined.

    HTML Imports

    HTML Imports provide a way to import all assets defined in one .html document, into the scope of another.

    <link rel="import" href="/path/to/imports/stuff.html">

    As previously stated, Mozilla is not currently intending to implement HTML Imports. This is in part because we’d like to see how ES6 modules pan out before shipping another way of importing external assets, and partly because we don’t feel they enable much that isn’t already possible.

    We’ve been working with Web Components in Firefox OS for over a year and have found that using existing module syntax (AMD or CommonJS) to resolve a dependency tree and register elements, loaded with a normal <script> tag, is enough to get stuff done.

    HTML Imports do lend themselves well to a simpler/more declarative workflow, such as the older <element> and Polymer’s current registration syntax.

    With this simplicity has come criticism from the community that Imports don’t offer enough control to be taken seriously as a dependency management solution.

    Before the decision was made a few months ago, Mozilla had a working implementation behind a flag, but struggled with an incomplete specification.

    What will happen to them?

    Apple’s Isolated Custom Elements proposal makes use of an HTML Imports-style approach to provide custom elements with their own document scope; perhaps there’s a future there.

    At Mozilla we want to explore how importing custom element definitions can align with upcoming ES6 module APIs. We’d be prepared to implement if/when they appear to enable developers to do stuff they can’t already do.

    To conclude

    Web Components are a prime example of how difficult it is to get large features into the browser today. Every API added lives indefinitely and remains as an obstacle to the next.

    It’s comparable to picking apart a huge knotted ball of string, adding a bit more, then tangling it back up again. This knot, our platform, grows ever larger and more complex.

    Web Components have been in planning for over three years, but we’re optimistic the end is near. All major vendors are on board, enthusiastic, and investing significant time to help resolve the remaining issues.

    Let’s get ready to componentize the web!

  7. ES6 In Depth: Arrow functions

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Arrows have been part of JavaScript from the very beginning. The first JavaScript tutorials advised wrapping inline scripts in HTML comments. This would prevent browsers that didn’t support JS from erroneously displaying your JS code as text. You would write something like this:

    <script language="javascript">
    <!--
      document.bgColor = "brown";  // red
    // -->
    </script>
    

    Old browsers would see two unsupported tags and a comment; only new browsers would see JS code.

    To support this odd hack, the JavaScript engine in your browser treats the characters <!-- as the start of a one-line comment. No joke. This has really been part of the language all along, and it works to this day, not just at the top of an inline <script> but everywhere in JS code. It even works in Node.

    As it happens, this style of comment is standardized for the first time in ES6. But this isn’t the arrow we’re here to talk about.

    The arrow sequence --> also denotes a one-line comment. Weirdly, while in HTML characters before the --> are part of the comment, in JS the rest of the line after the --> is a comment.

    It gets stranger. This arrow indicates a comment only when it appears at the start of a line. That’s because in other contexts, --> is an operator in JS, the “goes to” operator!

    function countdown(n) {
      while (n --> 0)  // "n goes to zero"
        alert(n);
      blastoff();
    }
    

    This code really works. The loop runs until n gets to 0. This too is not a new feature in ES6, but a combination of familiar features, with a little misdirection thrown in. Can you figure out what’s going on here? As usual, the answer to the puzzle can be found on Stack Overflow.

    Of course there is also the less-than-or-equal-to operator, <=. Perhaps you can find more arrows in your JS code, Hidden Pictures style, but let’s stop here and observe that an arrow is missing.

    <!-- single-line comment
    --> “goes to” operator
    <= less than or equal to
    => ???

    What happened to =>? Today, we find out.

    First, let’s talk a bit about functions.

    Function expressions are everywhere

    A fun feature of JavaScript is that any time you need a function, you can just type that function right in the middle of running code.

    For example, suppose you are trying to tell the browser what to do when the user clicks on a particular button. You start typing:

    $("#confetti-btn").click(
    

    jQuery’s .click() method takes one argument: a function. No problem. You can just type in a function right here:

    $("#confetti-btn").click(function (event) {
      playTrumpet();
      fireConfettiCannon();
    });
    

    Writing code like this comes quite naturally to us now. So it’s strange to recall that before JavaScript popularized this kind of programming, many languages did not have this feature. Of course Lisp had function expressions, also called lambda functions, in 1958. But C++, Python, C#, and Java all existed for years without them.

    Not anymore. All four have lambdas now. Newer languages universally have lambdas built in. We have JavaScript to thank for this—and early JavaScript programmers who fearlessly built libraries that depended heavily on lambdas, leading to widespread adoption of the feature.

    It is just slightly sad, then, that of all the languages I’ve mentioned, JavaScript’s syntax for lambdas has turned out to be the wordiest.

    // A very simple function in six languages.
    function (a) { return a > 0; } // JS
    [](int a) { return a > 0; }  // C++
    (lambda (a) (> a 0))  ;; Lisp
    lambda a: a > 0  # Python
    a => a > 0  // C#
    a -> a > 0  // Java
    

    A new arrow in your quiver

    ES6 introduces a new syntax for writing functions.

    // ES5
    var selected = allJobs.filter(function (job) {
      return job.isSelected();
    });
    
    // ES6
    var selected = allJobs.filter(job => job.isSelected());
    

    When you just need a simple function with one argument, the new arrow function syntax is simply Identifier => Expression. You get to skip typing function and return, as well as some parentheses, braces, and a semicolon.

    (I am personally very grateful for this feature. Not having to type function is important to me, because I inevitably type functoin instead and have to go back and correct it.)

    To write a function with multiple arguments (or no arguments, or rest parameters or defaults, or a destructuring argument) you’ll need to add parentheses around the argument list.

    // ES5
    var total = values.reduce(function (a, b) {
      return a + b;
    }, 0);
    
    // ES6
    var total = values.reduce((a, b) => a + b, 0);
    

    I think it looks pretty nice.

    Arrow functions work just as beautifully with functional tools provided by libraries, like Underscore.js and Immutable. In fact, the examples in Immutable’s documentation are all written in ES6, so many of them already use arrow functions.

    What about not-so-functional settings? Arrow functions can contain a block of statements instead of just an expression. Recall our earlier example:

    // ES5
    $("#confetti-btn").click(function (event) {
      playTrumpet();
      fireConfettiCannon();
    });
    

    Here’s how it will look in ES6:

    // ES6
    $("#confetti-btn").click(event => {
      playTrumpet();
      fireConfettiCannon();
    });
    

    A minor improvement. The effect on code using Promises can be more dramatic, as the }).then(function (result) { lines can pile up.

    Note that an arrow function with a block body does not automatically return a value. Use a return statement for that.
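
    For example (double is just an illustrative name):

    var double = x => { x * 2; };          // BUG! returns undefined
    var double = x => { return x * 2; };   // ok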

    There is one caveat when using arrow functions to create plain objects. Always wrap the object in parentheses:

    // create a new empty object for each puppy to play with
    var chewToys = puppies.map(puppy => {});   // BUG!
    var chewToys = puppies.map(puppy => ({})); // ok
    

    Unfortunately, an empty object {} and an empty block {} look exactly the same. The rule in ES6 is that { immediately following an arrow is always treated as the start of a block, never the start of an object. The code puppy => {} is therefore silently interpreted as an arrow function that does nothing and returns undefined.

    Even more confusing, an object literal like {key: value} looks exactly like a block containing a labeled statement—at least, that’s how it looks to your JavaScript engine. Fortunately { is the only ambiguous character, so wrapping object literals in parentheses is the only trick you need to remember.

    What’s this?

    There is one subtle difference in behavior between ordinary function functions and arrow functions. Arrow functions do not have their own this value. The value of this inside an arrow function is always inherited from the enclosing scope.

    Before we try and figure out what that means in practice, let’s back up a bit.

    How does this work in JavaScript? Where does its value come from? There’s no short answer. If it seems simple in your head, it’s because you’ve been dealing with it for a long time!

    One reason this question comes up so often is that function functions receive a this value automatically, whether they want one or not. Have you ever written this hack?

    {
      ...
      addAll: function addAll(pieces) {
        var self = this;
        _.each(pieces, function (piece) {
          self.add(piece);
        });
      },
      ...
    }
    

    Here, what you’d like to write in the inner function is just this.add(piece). Unfortunately, the inner function doesn’t inherit the outer function’s this value. Inside the inner function, this will be window or undefined. The temporary variable self serves to smuggle the outer value of this into the inner function. (Another way is to use .bind(this) on the inner function. Neither way is particularly pretty.)
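
    For reference, here’s the .bind(this) version of the same hack, reusing the addAll example:

    {
      ...
      addAll: function addAll(pieces) {
        _.each(pieces, function (piece) {
          this.add(piece);  // works: this was bound below
        }.bind(this));
      },
      ...
    }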

    In ES6, this hacks mostly go away if you follow these rules:

    • Use non-arrow functions for methods that will be called using the object.method() syntax. Those are the functions that will receive a meaningful this value from their caller.
    • Use arrow functions for everything else.

    // ES6
    {
      ...
      addAll: function addAll(pieces) {
        _.each(pieces, piece => this.add(piece));
      },
      ...
    }
    

    In the ES6 version, note that the addAll method receives this from its caller. The inner function is an arrow function, so it inherits this from the enclosing scope.

    As a bonus, ES6 also provides a shorter way to write methods in object literals! So the code above can be simplified further:

    // ES6 with method syntax
    {
      ...
      addAll(pieces) {
        _.each(pieces, piece => this.add(piece));
      },
      ...
    }
    

    Between methods and arrows, I might never type functoin again. It’s a nice thought.

    There’s one more minor difference between arrow and non-arrow functions: arrow functions don’t get their own arguments object, either. Of course, in ES6, you’d probably rather use a rest parameter or default value anyway.
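
    For example (logAll is a made-up helper):

    // Arrows have no arguments object; a rest parameter does the job.
    var logAll = (...args) => args.forEach(arg => console.log(arg));
    logAll("one", "two", "three");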

    Using arrows to pierce the dark heart of computer science

    We’ve talked about the many practical uses of arrow functions. There’s one more possible use case I’d like to talk about: ES6 arrow functions as a learning tool, to uncover something deep about the nature of computation. Whether that is practical or not, you’ll have to decide for yourself.

    In 1936, Alonzo Church and Alan Turing independently developed powerful mathematical models of computation. Turing called his model a-machines, but everyone instantly started calling them Turing machines. Church wrote instead about functions. His model was called the λ-calculus. (λ is the lowercase Greek letter lambda.) This work was the reason Lisp used the word LAMBDA to denote functions, which is why we call function expressions “lambdas” today.

    But what is the λ-calculus? What is “model of computation” supposed to mean?

    It’s hard to explain in just a few words, but here is my attempt: the λ-calculus is one of the first programming languages. It was not designed to be a programming language—after all, stored-program computers wouldn’t come along for another decade or two—but rather a ruthlessly simple, stripped-down, purely mathematical idea of a language that could express any kind of computation you wished to do. Church wanted this model in order to prove things about computation in general.

    And he found that he only needed one thing in his system: functions.

    Think how extraordinary this claim is. Without objects, without arrays, without numbers, without if statements, while loops, semicolons, assignment, logical operators, or an event loop, it is possible to rebuild every kind of computation JavaScript can do, from scratch, using only functions.

    Here is an example of the sort of “program” a mathematician could write, using Church’s λ notation:

    fix = λf.(λx.f(λv.x(x)(v)))(λx.f(λv.x(x)(v)))
    

    The equivalent JavaScript function looks like this:

    var fix = f => (x => f(v => x(x)(v)))
                   (x => f(v => x(x)(v)));
    

    That is, JavaScript contains an implementation of the λ-calculus that actually runs. The λ-calculus is in JavaScript.

    The stories of what Alonzo Church and later researchers did with the λ-calculus, and how it has quietly insinuated itself into almost every major programming language, are beyond the scope of this blog post. But if you’re interested in the foundations of computer science, or you’d just like to see how a language with nothing but functions can do things like loops and recursion, you could do worse than to spend some rainy afternoon looking into Church numerals and fixed-point combinators, and playing with them in your Firefox console or Scratchpad. With ES6 arrows on top of its other strengths, JavaScript can reasonably claim to be the best language for exploring the λ-calculus.
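
    To whet your appetite, here’s a tiny sketch of Church numerals written with arrows, suitable for pasting into a console (the names zero, succ, and toNumber are my own):

    // A Church numeral n means "apply f, n times".
    var zero = f => x => x;
    var succ = n => f => x => f(n(f)(x));
    
    // Convert back to an ordinary number to check our work.
    var toNumber = n => n(x => x + 1)(0);
    toNumber(succ(succ(succ(zero))));  // 3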

    When can I use arrows?

    ES6 arrow functions were implemented in Firefox by me, back in 2013. Jan de Mooij made them fast. Thanks to Tooru Fujisawa and ziyunfei for patches.

    Arrow functions are also implemented in the Microsoft Edge preview release. They’re also available in Babel, Traceur, and TypeScript, in case you’re interested in using them on the Web right now.

    Our next topic is one of the stranger features in ES6. We’ll get to see typeof x return a totally new value. We’ll ask: When is a name not a string? We’ll puzzle over the meaning of equality. It’ll be weird. So please join us next week as we look at ES6 symbols in depth.

  8. New Performance Tools in Firefox Developer Edition 40

    Today Mozilla is pleased to announce the availability of Firefox Developer Edition 40 (DE 40) featuring all-new performance tools! In this post we will cover some of DE 40’s new developer tools, fixes, and improvements made to existing tools. In addition, a couple of videos showcase some of these features.

    Note: Many of the new features were introduced in May, in an earlier Mozilla Hacks post.

    Introducing the new performance tools

    Firefox Developer Edition features a new performance tool that gives developers a better understanding of what is happening from a performance standpoint within their applications. Web developers can use these tools to profile performance in any kind of website, app, or game; for a fun insight into how these tools can be used to optimize HTML5 games, check out our post about the “Power Surge” game right after you’re done here.

    All performance tools can now be found grouped together under the Performance tab, for easier usage. Performance is all about timing, so you can view browser events in the context of a timeline, which in turn can be extended to include a number of detailed views based on the metrics you choose to monitor.

    In the following video, Dan Callahan demonstrates how to use the new performance tools.


    The Performance tab contains the new timeline, which includes a Waterfall view, a Call Tree view, and a Flame Chart view.

    All of the views above provide details of application performance that can be correlated with a recorded timeline overview. The timeline displays a compressed view of the Waterfall, minimum, maximum, and average frame rates, and a graphical representation of the frame rate. Left-clicking on the view and dragging to the desired range allows you to zoom into this timeline. This also simultaneously updates all three new views to represent a particular selected range.

    The recording view gives developers a quick way to zoom into areas where frame rate problems are occurring.

    The Waterfall view provides a graphical timeline of events occurring within the application. These events include markers for occurrences such as reflows, restyles, JavaScript calls, garbage collection, and paint operations. Using a simple filter button you can select the events you want to display in the Waterfall.


    You can use console commands like console.timeStamp() to indicate, with a marker on the Waterfall, when a specific event occurs. Also, you can graphically show timespans using the console.time() and console.timeEnd() functions.
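
    For example (sortAllTheThings() stands in for your own code):

    console.time("sorting");         // opens a timespan on the Waterfall
    sortAllTheThings();
    console.timeEnd("sorting");      // closes the timespan
    console.timeStamp("sort done");  // drops a single marker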


    The Call Tree view shows the results of the JavaScript profiler for the specified range. Using this view you can see the approximate time spent in a function. The table displays total time spent within a function call or the self-time that a particular function call is using. The total time encapsulates all time spent in the function and includes time spent in nested function calls. The self-time only includes time spent in the particular function, excluding nested calls. This view can be very helpful when trying to locate functions that are consuming a large portion of processing time. This view has been available in previous iterations of Firefox, and should be familiar to developers who have used the tool in the past.

    The Flame Chart view is similar to the Call Tree in that it graphically illustrates the call stack for a selected range. For example, in the screenshot below the drawCirc() function is taking over 25 milliseconds (ms) to complete, which is longer than the time allotted to generate a frame at 60 frames per second.

    Performance profiles can be created, saved, imported, or deleted. In addition, multiple profiles can be opened at once to contrast and compare performance statistics between runs. Profiles can be created programmatically or using the console, by entering console.profile("NameOfProfile") to start a profile and console.profileEnd("NameOfProfile") to stop a profile. This allows you to fine-tune when profiling starts and stops within your code.
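
    For example (initializeApp() stands in for the code you want to measure):

    console.profile("startup");   // begin recording the "startup" profile
    initializeApp();
    console.profileEnd("startup");
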
    You can find complete docs for the performance tools on MDN. These include a tour of the UI, reference pages for each of the main tools, and some examples in which we use the tools to diagnose performance problems in CSS animations and JavaScript-heavy pages.

    Additional features and improvements

    In addition to the new Performance tools we’ve also implemented many new convenience features — mostly inspired by direct feedback from developers via our UserVoice channel — and over ninety bug fixes, representing a ton of hard work over the last eight weeks from Firefox Developer Tools staff as well as many contributors. Please continue to submit your feedback.

    This video from Matthew “Potch” Claypotch shows off some of the most requested feature implementations for Developer Edition 40.

    Network Monitor improvements

    As seen in the video above, the Network Monitor includes many improvements, such as data collection when the Network tab is not active, and the ability to quickly see when an asset is loaded from the cache as opposed to the network.
    It is now possible to copy post data, URL parameters, and Request and Response headers using the context menu when selecting a row entry.

    CSS docs integration

    Firefox Developer Tools now support integration with MDN documentation for CSS properties, providing more information for developers while they are debugging web app styling and layout. To access this feature, you can right-click (Ctrl + click on Mac) on CSS properties within the Inspector, and select “Show MDN Docs” from the context menu.

    Improved Inspector layout

    In the Inspector, whitespace in text node layout is cleaned up, providing a better view of your markup.

    Additional fixes

    Many additional fixes are also included, like improvements to the Animation Inspector, ‘scroll into view’ context menu support, and better Inspector search. To see all the bugs addressed in this release, have a look at this big list in Bugzilla.

    We’d like to send a gigantic special thank you to all the contributors and individuals who reported bugs, tested patches, and spent many hours working to make Firefox Developer Tools impressive.

  9. Power Surge – optimize the JavaScript in this HTML5 game using Firefox Developer Edition

    Power Surge!

    The Firefox Developer Tools team wanted to find a fun way to show off the great performance tools we’ve just added to the Firefox Developer Edition browser. We partnered with Przemysław Sikorski (aka rezoner) author of Playground.js and the arcade puzzle game QbQbQb, to create “Power Surge,” a fun game which shows off how the new Performance tools can help developers find slow JavaScript code. The special twist is that the more you optimize the code in the game, the more ships and powers you win in gameplay!

    First off, check out this quick screencast from Mozilla engineer Harald Kirschner on how Power Surge works:

    To get going with Power Surge:

    We’d really like to hear back from you about Power Surge! Tweet to us at @FirefoxDevtools and @mozhacks to share your solutions.

  10. ES6 In Depth: Destructuring

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Editor’s note: An earlier version of today’s post, by Firefox Developer Tools engineer Nick Fitzgerald, originally appeared on Nick’s blog as Destructuring Assignment in ES6.

    What is destructuring assignment?

    Destructuring assignment allows you to assign the properties of an array or object to variables using syntax that looks similar to array or object literals. This syntax can be extremely terse, while still exhibiting more clarity than the traditional property access.

    Without destructuring assignment, you might access the first three items in an array like this:

    var first = someArray[0];
    var second = someArray[1];
    var third = someArray[2];
    

    With destructuring assignment, the equivalent code becomes more concise and readable:

    var [first, second, third] = someArray;
    

    SpiderMonkey (Firefox’s JavaScript engine) already has support for most of destructuring, but not quite all of it. Track SpiderMonkey’s destructuring (and general ES6) support in bug 694100.

    Destructuring arrays and iterables

    We already saw one example of destructuring assignment on an array above. The general form of the syntax is:

    [ variable1, variable2, ..., variableN ] = array;
    

    This will just assign variable1 through variableN to the corresponding item in the array. If you want to declare your variables at the same time, you can add a var, let, or const in front of the assignment:

    var [ variable1, variable2, ..., variableN ] = array;
    let [ variable1, variable2, ..., variableN ] = array;
    const [ variable1, variable2, ..., variableN ] = array;
    

    In fact, variable is a misnomer since you can nest patterns as deep as you would like:

    var [foo, [[bar], baz]] = [1, [[2], 3]];
    console.log(foo);
    // 1
    console.log(bar);
    // 2
    console.log(baz);
    // 3
    

    Furthermore, you can skip over items in the array being destructured:

    var [,,third] = ["foo", "bar", "baz"];
    console.log(third);
    // "baz"
    

    And you can capture all trailing items in an array with a “rest” pattern:

    var [head, ...tail] = [1, 2, 3, 4];
    console.log(tail);
    // [2, 3, 4]
    

    When you access items in the array that are out of bounds or don’t exist, you get the same result you would by indexing: undefined.

    console.log([][0]);
    // undefined
    
    var [missing] = [];
    console.log(missing);
    // undefined
    

    Note that destructuring assignment with an array assignment pattern also works for any iterable:

    function* fibs() {
      var a = 0;
      var b = 1;
      while (true) {
        yield a;
        [a, b] = [b, a + b];
      }
    }
    
    var [first, second, third, fourth, fifth, sixth] = fibs();
    console.log(sixth);
    // 5
    

    Destructuring objects

    Destructuring on objects lets you bind variables to different properties of an object. You specify the property being bound, followed by the variable you are binding its value to.

    var robotA = { name: "Bender" };
    var robotB = { name: "Flexo" };
    
    var { name: nameA } = robotA;
    var { name: nameB } = robotB;
    
    console.log(nameA);
    // "Bender"
    console.log(nameB);
    // "Flexo"
    

    There is a helpful syntactical shortcut for when the property and variable names are the same:

    var { foo, bar } = { foo: "lorem", bar: "ipsum" };
    console.log(foo);
    // "lorem"
    console.log(bar);
    // "ipsum"
    

    And just like destructuring on arrays, you can nest and combine further destructuring:

    var complicatedObj = {
      arrayProp: [
        "Zapp",
        { second: "Brannigan" }
      ]
    };
    
    var { arrayProp: [first, { second }] } = complicatedObj;
    
    console.log(first);
    // "Zapp"
    console.log(second);
    // "Brannigan"
    

    When you destructure on properties that are not defined, you get undefined:

    var { missing } = {};
    console.log(missing);
    // undefined
    

    One potential gotcha you should be aware of is when you are using destructuring on an object to assign variables, but not to declare them (when there is no let, const, or var):

    { blowUp } = { blowUp: 10 };
    // Syntax error
    

    This happens because the JavaScript grammar tells the engine to parse any statement starting with { as a block statement (for example, { console } is a valid block statement). The solution is to wrap the whole expression in parentheses:

    ({ safe } = {});
    // No errors
    

    Destructuring values that are not an object, array, or iterable

    When you try to use destructuring on null or undefined, you get a type error:

    var {blowUp} = null;
    // TypeError: null has no properties
    

    However, you can destructure on other primitive types such as booleans, numbers, and strings, and get undefined:

    var {wtf} = NaN;
    console.log(wtf);
    // undefined
    

    This may be unexpected, but on closer examination the reason turns out to be simple. When using an object assignment pattern, the value being destructured is required to be coercible to an Object. Most types can be converted to an object, but null and undefined cannot. When using an array assignment pattern, the value must have an iterator.
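
    For example, a string coerces to a String object, so its properties are reachable with an object pattern, and its iterator feeds an array pattern:

    var { length } = "hello";
    console.log(length);
    // 5
    
    var [first, second] = "hello";
    console.log(first, second);
    // "h" "e"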

    Default values

    You can also provide default values for when the property you are destructuring is not defined:

    var [missing = true] = [];
    console.log(missing);
    // true
    
    var { message: msg = "Something went wrong" } = {};
    console.log(msg);
    // "Something went wrong"
    
    var { x = 3 } = {};
    console.log(x);
    // 3
    

    (Editor’s note: This feature is currently implemented in Firefox only for the first two cases, not the third. See bug 932080.)

    Practical applications of destructuring

    Function parameter definitions

    As developers, we can often expose more ergonomic APIs by accepting a single object with multiple properties as a parameter instead of forcing our API consumers to remember the order of many individual parameters. We can use destructuring to avoid repeating this single parameter object whenever we want to reference one of its properties:

    function removeBreakpoint({ url, line, column }) {
      // ...
    }
    

    This is a simplified snippet of real world code from the Firefox DevTools JavaScript debugger (which is also implemented in JavaScript—yo dawg). We have found this pattern particularly pleasing.

    Configuration object parameters

    Expanding on the previous example, we can also give default values to the properties of the objects we are destructuring. This is particularly helpful when we have an object that is meant to provide configuration and many of the object’s properties already have sensible defaults. For example, jQuery’s ajax function takes a configuration object as its second parameter, and could be rewritten like this:

    jQuery.ajax = function (url, {
      async = true,
      beforeSend = noop,
      cache = true,
      complete = noop,
      crossDomain = false,
      global = true,
      // ... more config
    }) {
      // ... do stuff
    };
    

    This avoids repeating var foo = config.foo || theDefaultFoo; for each property of the configuration object.

    (Editor’s note: Unfortunately, default values within object shorthand syntax still aren’t implemented in Firefox. I know, we’ve had several paragraphs to work on it since that earlier note. See bug 932080 for the latest updates.)

    With the ES6 iteration protocol

    ECMAScript 6 also defines an iteration protocol, which we talked about earlier in this series. When you iterate over Maps (an ES6 addition to the standard library), you get a series of [key, value] pairs. We can destructure this pair to get easy access to both the key and the value:

    var map = new Map();
    map.set(window, "the global");
    map.set(document, "the document");
    
    for (var [key, value] of map) {
      console.log(key + " is " + value);
    }
    // "[object Window] is the global"
    // "[object HTMLDocument] is the document"
    

    Iterate over only the keys:

    for (var [key] of map) {
      // ...
    }
    

    Or iterate over only the values:

    for (var [,value] of map) {
      // ...
    }
    

    Multiple return values

    Although multiple return values aren’t baked into the language proper, they don’t need to be because you can return an array and destructure the result:

    function returnMultipleValues() {
      return [1, 2];
    }
    var [foo, bar] = returnMultipleValues();
    

    Alternatively, you can use an object as the container and name the returned values:

    function returnMultipleValues() {
      return {
        foo: 1,
        bar: 2
      };
    }
    var { foo, bar } = returnMultipleValues();
    

    Both of these patterns end up much better than holding onto the temporary container:

    function returnMultipleValues() {
      return {
        foo: 1,
        bar: 2
      };
    }
    var temp = returnMultipleValues();
    var foo = temp.foo;
    var bar = temp.bar;
    

    Or using continuation passing style:

    function returnMultipleValues(k) {
      k(1, 2);
    }
    returnMultipleValues((foo, bar) => ...);
    

    Importing names from a CommonJS module

    Not using ES6 modules yet? Still using CommonJS modules? No problem! When importing some CommonJS module X, it is fairly common that module X exports more functions than you actually intend to use. With destructuring, you can be explicit about which parts of a given module you’d like to use and avoid cluttering your namespace:

    const { SourceMapConsumer, SourceNode } = require("source-map");
    

    (And if you do use ES6 modules, you know that a similar syntax is available in import declarations.)
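
    For comparison, the ES6 module version of the same import looks like this:

    import { SourceMapConsumer, SourceNode } from "source-map";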

    Conclusion

    So, as you can see, destructuring is useful in many individually small cases. At Mozilla we’ve had a lot of experience with it. Lars Hansen introduced JS destructuring in Opera ten years ago, and Brendan Eich added support to Firefox a bit later. It shipped in Firefox 2. So we know that destructuring sneaks into your everyday use of the language, quietly making your code a bit shorter and cleaner all over the place.

    Five weeks ago, we said that ES6 would change the way you write JavaScript. It is this sort of feature we had particularly in mind: simple improvements that can be learned one at a time. Taken together, they will end up affecting every project you work on. Revolution by way of evolution.

    Updating destructuring to comply with ES6 has been a team effort. Special thanks to Tooru Fujisawa (arai) and Arpad Borsos (Swatinem) for their excellent contributions.

    Support for destructuring is under development for Chrome, and other browsers will undoubtedly add support in time. For now, you’ll need to use Babel or Traceur if you want to use destructuring on the Web.


    Thanks again to Nick Fitzgerald for this week’s post.

    Next week, we’ll cover a feature that is nothing more or less than a slightly shorter way to write something JS already has—something that has been one of the fundamental building blocks of the language all along. Will you care? Is slightly shorter syntax something you can get excited about? I confidently predict the answer is yes, but don’t take my word for it. Join us next week and find out, as we look at ES6 arrow functions in depth.

    Jason Orendorff

    ES6 In Depth editor