Featured Articles

  1. Scroll snapping explained

Have you ever tried to snap your page’s contents into place after scrolling? Many JavaScript libraries provide this functionality.

    As this is a common use case related to page layout and behavior, the W3C has published a pure CSS approach to scroll snapping.

CSS scroll snapping (available since July’s Firefox 39 release) allows you to control where an overflowing element stops when it’s scrolled. This lets you section your page into logical divisions and thus create smoother, easier-to-interact-with user interfaces. Touch devices in particular benefit from this feature, since it is easier for people to pan through pages than to tap through hierarchical structures.

    Image gallery

    Image galleries are surely the most common use case for scroll snapping: Users can flip through the images, viewing one image at a time by swiping or scrolling the page. So let’s see how this can be achieved with the new properties:

img {
  width: 200px;
}

.photoGallery {
  width: 200px;
  overflow: auto;
  white-space: nowrap;
  scroll-snap-points-x: repeat(100%);
  scroll-snap-type: mandatory;
}

    The related HTML code looks like this:

<div class="photoGallery">
  <img src="img1.png"><img src="img2.png"><img src="img3.png">
</div>

    Here’s a live demo:

    See the Pen wKvYdK by Potch (@potch) on CodePen.

    The code above creates a simple image gallery with three images, which can be scrolled through horizontally.

In this case the size of the images and their containing <div> are set to 200 pixels. overflow: auto; displays a scrollbar on clients that support it, and white-space: nowrap; keeps all images horizontally aligned. scroll-snap-points-x: repeat(100%); sets a repeated horizontal snap point at 100% of the container <div>’s width, in this case at 200-pixel intervals. scroll-snap-type: mandatory; forces snapping: the viewport always comes to rest on a snap point, so the display never stays in between two images.

    Item lists

    You’ve probably seen plenty of online product pages listing different features with an image of each feature and a description next to it, or an interface displaying a series of user testimonials.

    In these cases, scroll snapping lets you align the sections so that maximal display space is used.

.features {
  width: 400px;
  height: 250px;
  padding: 0;
  overflow: auto;
  scroll-snap-type: proximity;
  scroll-snap-destination: 0 16px;
}

.features > section {
  clear: both;
  margin: 20px 0;
  scroll-snap-coordinate: 0 0;
}

img {
  width: 50px;
  height: 50px;
  margin: 5px 10px;
  float: left;
}

section:last-child {
  margin-bottom: 60px;
}

    And here’s the related HTML code:

<div class="features">
  <section id="feature1">
    <img src="feature1.png"/>
    <p>Lorem ipsum...</p>
  </section>
  <section id="feature2">
    <img src="feature2.png"/>
    <p>Lorem ipsum...</p>
  </section>
</div>

    Here’s a live demo:

    See the Pen NGWOjN by Potch (@potch) on CodePen.

    When you scroll within the example above, each feature section is positioned so that the top is aligned with the top of the viewport to display as much text as possible. This is achieved by applying scroll-snap-coordinate: 0 0; to the sections. The two zeros refer to the x and y coordinates of the element where it will snap to the container element. scroll-snap-destination: 0 16px; defines the offset position within the container element to which the inner elements should snap. In this case, that’s 16 pixels below the top of the container so that the text at top of the section has some margin at the top.

    In addition to the properties currently defined within the CSS Scroll Snap Points specification, Gecko implements the additional properties scroll-snap-type-x and scroll-snap-type-y, for setting the snap type individually per axis. These long-hand properties may be added to the specification in the future.

Currently, snap points can only be set through coordinates, relative either to the start edge of the container element or to the elements within it. Sometimes this requires some calculation to place them at the right position. Future extensions to this feature may allow setting snap points on the box model instead, which would make them easier to place. There’s already a discussion about this on the www-style mailing list.

    Has this new functionality caught your interest? Then it’s time to give it a try! And if you don’t remember how to use the different properties, you can always refer to the documentation on MDN.

  2. Flash-Free Clipboard for the Web

As part of our effort to grow the Web platform and make it accessible to new devices, we are trying to reduce the Web’s dependence on Flash. To that end, we are standardizing and exposing to the entire Web platform useful features that are currently available only to Flash.

    One of the reasons why many sites still use Flash is because of its copy and cut clipboard APIs. Flash exposes an API for programmatically copying text to the user’s clipboard on a button press. This has been used to implement handy features, such as GitHub’s “clone URL” button. It’s also useful for things such as editor UIs, which want to expose a button for copying to the clipboard, rather than requiring users to use keyboard shortcuts or the context menu.

Unfortunately, Web APIs haven’t provided the functionality to copy text to the clipboard through JavaScript, which is why visiting GitHub with Flash disabled shows an ugly grey box where the button is supposed to be. Fortunately, we have a solution. The editor APIs provide document.execCommand as an entry point for executing editor commands. The "copy" and "cut" commands have previously been disabled for web pages, but with Firefox 41, which is currently in Beta and slated to move to release in mid-September, they are becoming available to JavaScript within user-initiated callbacks.

    Using execCommand("cut"/"copy")

The execCommand("cut"/"copy") API is only available during a user-triggered callback, such as a click. If you try to call it at any other time, execCommand will return false, meaning that the command failed to execute. Running execCommand("copy") copies the current selection to the clipboard, so let’s go about implementing a basic copy-to-clipboard button.

// button which we are attaching the event to
var button = ...;
// input containing the text we want to copy
var input = ...;

button.addEventListener("click", function(event) {
  event.preventDefault();
  // Select the input node's contents
  // Copy it to the clipboard
  document.execCommand("copy");
});

    That code will trigger a copy of the text in the input to the clipboard upon the click of the button in Firefox 41 and above. However, you probably want to also handle failure situations, potentially to fallback to another Flash-based approach such as ZeroClipboard, or even just to tell the user that their browser doesn’t support the functionality.

    The execCommand method will return false if the action failed, for example, due to being called outside of a user-initiated callback, but on older versions of Firefox, we would also throw a security exception if you attempted to use the "cut" or "copy" APIs. Thus, if you want to be sure that you capture all failures, make sure to surround the call in a try-catch block, and also interpret an exception as a failure.

// button which we are attaching the event to
var button = ...;
// input containing the text we want to copy
var input = ...;

button.addEventListener("click", function(event) {
  event.preventDefault();
; // Select the input node's contents
  var succeeded;
  try {
    // Copy it to the clipboard
    succeeded = document.execCommand("copy");
  } catch (e) {
    succeeded = false;
  }
  if (succeeded) {
    // The copy was successful!
  } else {
    // The copy failed :(
  }
});

    The "cut" API is also exposed to web pages through the same mechanism, so just s/copy/cut/, and you’re all set to go!

    Feature testing

    The editor APIs provide a method document.queryCommandSupported("copy") intended to allow API consumers to determine whether a command is supported by the browser. Unfortunately, in versions of Firefox prior to 41, we returned true from document.queryCommandSupported("copy") even though the web page was unable to actually perform the copy operation. However, attempting to execute document.execCommand("copy") would throw a SecurityException. So, attempting to copy on load, and checking for this exception is probably the easiest way to feature-detect support for document.execCommand("copy") in Firefox.

var supported = document.queryCommandSupported("copy");
if (supported) {
  // Check that the browser isn't Firefox pre-41
  try {
  } catch (e) {
    supported = false;
  }
}
if (!supported) {
  // Fall back to an alternate approach like ZeroClipboard
}

    Support in other browsers

    Google Chrome and Internet Explorer both also support this API. Chrome uses the same restriction as Firefox (that it must be run in a user-initiated callback). Internet Explorer allows it to be called at any time, except it first prompts the user with a dialog, asking for permission to access the clipboard.

    For more information about the API and browser support, see MDN documentation for document.execCommand().

  3. Developer Edition 42: Wifi Debugging, Win10, Multiprocess Firefox, ReactJS tools, and more

    Firefox 42 has arrived! In this release, we put a lot of effort into the quality and polish of the Developer Edition browser. Although many of the bugs resolved this release don’t feature in the Release Notes, these small fixes make the tools faster and more stable. But there’s still a lot to report, including a major change to how Firefox works.

    Debugging over wifi

    Now, with remote website debugging, you can debug Firefox for Android devices over wifi – no USB cable or ADB needed.

    Multiprocess is enabled by default

Multiprocess Firefox (aka e10s) has been enabled by default in Developer Edition. When it’s enabled, Firefox renders and executes web-related content in a single background content process. If you experience any issues with add-ons after updating to Developer Edition 42, try disabling incompatible add-ons or reverting to single-process mode using about:preferences.

    Windows 10 theme support

    The Developer Edition theme has a new look in Windows 10 to match the OS styling. Take a look:

    Screenshot of the dark Developer Edition theme in Windows 10

    Dark Developer Edition theme – Windows 10

    Screenshot of the light Developer Edition theme in Windows 10

    Light Developer Edition theme – Windows 10

    React Developer Tools support for Firefox

If you’re developing with ReactJS, you may have noticed that the React project recently released a beta of their developer tools extension, including initial support for Firefox. While there are no official builds of the Firefox version yet, the source is available on GitHub.

    Other notable changes

    • Asynchronous call stacks now allow you to follow the code flow through setTimeout, DOM event handlers, and Promise handlers. (Bug 981514)
    • There is a new configurable Firefox OS simulator page in WebIDE. From here, you can change a simulator to run with a custom profile and screen size, using a list of presets from reference devices. (Bug 1156834)
    • CSS filter presets are now available in the inspector. (Bug 1153184)
    • The MDN tooltip now uses syntax highlighting for code samples. (Bug 1154469)
    • When using the “copy” keyboard shortcut in the inspector, the outerHTML of the selected node is now copied onto the clipboard. (Bug 968241)
    • New UX improvements have landed in the style editor’s search feature. (Bug 1159001, Bug 1153474)
    • CSS variables are now treated as normal declarations in the inspector. (Bug 1142206)
    • CSS autocomplete popup now supports pressing ‘down’ to list all results in an empty value field (Bug 1142206)

    Thanks to everyone who contributed time and energy to help the DevTools team in this release of Firefox Developer Edition 42! Each release takes a lot of effort from people writing patches, testing, documenting, reporting bugs, sending feedback, discussing features, etc. You can help set our priorities by sharing constructive feedback and letting us know what you’d like from Firefox Developer Tools.

    You can download Firefox Developer Edition now, for free.

  4. ES6 In Depth: The Future

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Last week’s article on ES6 modules wrapped up a 4-month survey of the major new features in ES6.

    This post covers over a dozen more new features that we never got around to talking about at length. Consider it a fun tour of all the closets and oddly-shaped upstairs rooms in this mansion of a language. Maybe a vast underground cavern or two. If you haven’t read the other parts of the series, take a look; this installment may not be the best place to start!

    (a picture of the Batcave, inexplicably)

    “On your left, you can see typed arrays…”

    One more quick warning: Many of the features below are not widely implemented yet.

    OK. Let’s get started.

    Features you may already be using

    ES6 standardizes some features that were previously in other standards, or widely implemented but nonstandard.

    • Typed arrays, ArrayBuffer, and DataView. These were all standardized as part of WebGL, but they’ve been used in many other APIs since then, including Canvas, the Web Audio API, and WebRTC. They’re handy whenever you need to process large volumes of raw binary or numeric data.

      For example, if the Canvas rendering context is missing a feature you want, and if you’re feeling sufficiently hardcore about it, you can just implement it yourself:

      var context = canvas.getContext("2d");
      var image = context.getImageData(0, 0, canvas.width, canvas.height);
var pixels =;  // a Uint8ClampedArray object
      // ... Your code here!
      // ... Hack on the raw bits in `pixels`
      // ... and then write them back to the canvas:
      context.putImageData(image, 0, 0);

      During standardization, typed arrays picked up methods like .slice(), .map(), and .filter().

    • Promises. Writing just one paragraph about promises is like eating just one potato chip. Never mind how hard it is; it barely even makes sense as a thing to do. What to say? Promises are the building blocks of asynchronous JS programming. They represent values that will become available later. So for example, when you call fetch(), instead of blocking, it returns a Promise object immediately. The fetch goes on in the background, and it’ll call you back when the response arrives. Promises are better than callbacks alone, because they chain really nicely, they’re first-class values with interesting operations on them, and you can get error handling right with a lot less boilerplate. They’re polyfillable in the browser. If you don’t already know all about promises, check out Jake Archibald’s very in-depth article.
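  The chaining described above can be sketched with a timer standing in for a real network request (the values here are made up for illustration):

  ```javascript
  // A Promise represents a value that becomes available later.
  var later = new Promise(function (resolve) {
    setTimeout(function () { resolve(21); }, 10);

  later
    .then(function (v) { return v * 2; })       // chain a transformation
    .then(function (v) { console.log(v); })     // logs 42 once the value arrives
    .catch(function (e) { console.error(e); }); // errors funnel to one handler
  ```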

    • Functions in block scope. You shouldn’t be using this one, but it’s possible you have been. Maybe unintentionally.

      In ES1-5, this code was technically illegal:

if (temperature > 100) {
  function chill() {
    return fan.switchOn().then(obtainLemonade);
  }
}

That function declaration inside an if block was supposedly forbidden. Function declarations were only legal at top level, or inside the outermost block of a function.

      But it worked in all major browsers anyway. Sort of.

      Not compatibly. The details were a little different in each browser. But it sort of worked, and many web pages still use it.

      ES6 standardizes this, thank goodness. The function is hoisted to the top of the enclosing block.

      Unfortunately, Firefox and Safari don’t implement the new standard yet. So for now, use a function expression instead:

if (temperature > 100) {
  var chill = function () {
    return fan.switchOn().then(obtainLemonade);
  };
}

      The only reason block-scoped functions weren’t standardized years ago is that the backward-compatibility constraints were incredibly complicated. Nobody thought they could be solved. ES6 threads the needle by adding a very strange rule that only applies in non-strict code. I can’t explain it here. Trust me, use strict mode.

    • Function names. All the major JS engines have also long supported a nonstandard .name property on functions that have names. ES6 standardizes this, and makes it better by inferring a sensible .name for some functions that were heretofore considered nameless:

> var lessThan = function (a, b) { return a < b; };
    "lessThan"

For other functions, such as callbacks that appear as arguments to .then methods, the spec still can’t figure out a name. is then the empty string.

    Nice things

    • Object.assign(target, ...sources). A new standard library function, similar to Underscore’s _.extend().
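  For instance (the property names here are made up for illustration):

  ```javascript
  // Copies enumerable own properties from each source onto the target.
  // Later sources win when keys collide.
  var defaults = { role: "viewer", active: true };
  var settings = Object.assign({}, defaults, { role: "admin" });

  console.log(settings.role);  // "admin": the later source won
  console.log(;  // true: carried over from defaults
  console.log(defaults.role);  // "viewer": a fresh target leaves defaults intact
  ```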

• The spread operator for function calls. This has nothing to do with Nutella, even though Nutella is a tasty spread. But it is a delicious feature, and I think you'll like it.

      Back in May, we introduced rest parameters. They’re a way for functions to receive any number of arguments, a more civilized alternative to the random, clumsy arguments object.

function log(...stuff) {  // stuff is the rest parameter.
  var rendered =" "); // It's a real array.
  console.log(rendered);
}

      What we didn’t say is that there’s matching syntax for passing any number of arguments to a function, a more civilized alternative to fn.apply():

// log all the values from an array
log(...myArray);

      Of course it works with any iterable object, so you can log all the stuff in a Set by writing log(...mySet).

      Unlike rest parameters, it makes sense to use the spread operator multiple times in a single argument list:

      // kicks are before trids
      log("Kicks:", ...kicks, "Trids:", ...trids);

      The spread operator is handy for flattening an array of arrays:

      > var smallArrays = [[], ["one"], ["two", "twos"]];
      > var oneBigArray = [].concat(...smallArrays);
      > oneBigArray
          ["one", "two", "twos"]

...but maybe this is one of those pressing needs that only I have. If so, I blame Haskell.

    • The spread operator for building arrays. Also back in May, we talked about “rest” patterns in destructuring. They’re a way to get any number of elements out of an array:

> var [head, ...tail] = [1, 2, 3, 4];
> head
    1
> tail
    [2, 3, 4]

      Guess what! There’s matching syntax for getting any number of elements into an array:

      > var reunited = [head, ...tail];
      > reunited
          [1, 2, 3, 4]

      This follows all the same rules as the spread operator for function calls: you can use the spread operator many times in the same array, and so on.
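  For example, two spreads can sit side by side in one array literal (the array contents are made up):

  ```javascript
  var parts = ["shoulders", "knees"];
  var lyrics = ["head",, "and", "toes"];

  // Both spreads expand in place, in order:
  console.log(lyrics);  // ["head", "shoulders", "knees", "and", "toes"]
  ```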

    • Proper tail calls. This one is too amazing for me to try to explain here.

      To understand this feature, there’s no better place to start than page 1 of Structure and Interpretation of Computer Programs. If you enjoy it, just keep reading. Tail calls are explained in section 1.2.1, “Linear Recursion and Iteration”. The ES6 standard requires that implementations be “tail-recursive”, as the term is defined there.

      None of the major JS engines have implemented this yet. It’s hard to implement. But all in good time.


    • Unicode version upgrade. ES5 required implementations to support at least all the characters in Unicode version 3.0. ES6 implementations must support at least Unicode 5.1.0. You can now use characters from Linear B in your function names!

      Linear A is still a bit risky, both because it was not added to Unicode until version 7.0 and because it might be hard to maintain code written in a language that has never been deciphered.

      (Even in JavaScript engines that support the emoji added in Unicode 6.1, you can’t use 😺 as a variable name. For some reason, the Unicode Consortium decided not to classify it as an identifier character. 😾)

    • Long Unicode escape sequences. ES6, like earlier versions, supports four-digit Unicode escape sequences. They look like this: \u212A. These are great. You can use them in strings. Or if you’re feeling playful and your project has no code review policy whatsoever, you can use them in variable names. But then, for a character like U+13021 (𓀡), the Egyptian hieroglyph of a guy standing on his head, there's a slight problem. The number 13021 has five digits. Five is more than four.

      In ES5, you had to write two escapes, a UTF-16 surrogate pair. This felt exactly like living in the Dark Ages: cold, miserable, barbaric. ES6, like the dawn of the Italian Renaissance, brings tremendous change: you can now write \u{13021}.
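  You can check that the new form really is the same string as the old surrogate-pair spelling:

  ```javascript
  // One long escape versus the ES5-era surrogate pair for U+13021:
  console.log("\u{13021}" === "\uD80C\uDC21");  // true: the very same string
  console.log("\u{13021}".length);              // 2: still two UTF-16 code units
  ```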

    • Better support for characters outside the BMP. The .toUpperCase() and .toLowerCase() methods now work on strings written in the Deseret alphabet!

      In the same vein, String.fromCodePoint(...codePoints) is a function very similar to the older String.fromCharCode(...codeUnits), but with support for code points beyond the BMP.

    • Unicode RegExps. ES6 regular expressions support a new flag, the u flag, which causes the regular expression to treat characters outside the BMP as single characters, not as two separate code units. For example, without the u, /./ only matches half of the character "😭". But /./u matches the whole thing.

      Putting the u flag on a RegExp also enables more Unicode-aware case-insensitive matching and long Unicode escape sequences. For the whole story, see Mathias Bynens’s very detailed post.
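  A quick demonstration of the difference:

  ```javascript
  console.log("😭".length);        // 2: the emoji is two UTF-16 code units
  console.log(/^.$/.test("😭"));   // false: "." sees only half the character
  console.log(/^.$/u.test("😭"));  // true: with u, "." matches the whole thing
  ```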

    • Sticky RegExps. A non-Unicode-related feature is the y flag, also known as the sticky flag. A sticky regular expression only looks for matches starting at the exact offset given by its .lastIndex property. If there isn’t a match there, rather than scanning forward in the string to find a match somewhere else, a sticky regexp immediately returns null.
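  In code, the behavior looks like this:

  ```javascript
  var digits = /\d+/y;  // sticky: only match exactly at .lastIndex

  digits.lastIndex = 3;
  console.log(digits.exec("abc123")[0]);  // "123": a match right at offset 3

  digits.lastIndex = 1;
  console.log(digits.exec("abc123"));     // null: no scanning forward
  ```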

    • An official internationalization spec. ES6 implementations that provide any internationalization features must support ECMA-402, the ECMAScript 2015 Internationalization API Specification. This separate standard specifies the Intl object. Firefox, Chrome, and IE11+ already fully support it. So does Node 0.12.


    • Binary and octal number literals. If you need a fancy way to write the number 8,675,309, and 0x845fed isn’t doing it for you, you can now write 0o41057755 (octal) or 0b100001000101111111101101 (binary).

      Number(str) also now recognizes strings in this format: Number("0b101010") returns 42.

      (Quick reminder: number.toString(base) and parseInt(string, base) are the original ways to convert numbers to and from arbitrary bases.)
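  All of these conversions round-trip, as a quick sanity check shows:

  ```javascript
  console.log(0b100001000101111111101101);  // 8675309
  console.log(0o41057755);                  // 8675309
  console.log(Number("0b101010"));          // 42
  console.log((8675309).toString(8));       // "41057755"
  console.log(parseInt("101010", 2));       // 42
  ```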

    • New Number functions and constants. These are pretty niche. If you’re interested, you can browse the standard yourself, starting at Number.EPSILON.

Maybe the most interesting new idea here is the “safe integer” range, from −(2^53 − 1) to +(2^53 − 1) inclusive. This special range of numbers has existed as long as JS. Every integer in this range can be represented exactly as a JS number, as can its nearest neighbors. In short, it’s the range where ++ and -- work as expected. Outside this range, odd integers aren’t representable as 64-bit floating-point numbers, so incrementing and decrementing the numbers that are representable (all of which are even) can’t give a correct result. In case this matters to your code, the standard now offers constants Number.MIN_SAFE_INTEGER and Number.MAX_SAFE_INTEGER, and a predicate Number.isSafeInteger(n).
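  A few lines in a console make the boundary concrete:

  ```javascript
  var max = Number.MAX_SAFE_INTEGER;           // 2^53 - 1
  console.log(max);                            // 9007199254740991
  console.log(Number.isSafeInteger(max));      // true
  console.log(Number.isSafeInteger(max + 1));  // false
  console.log(max + 1 === max + 2);            // true: precision is gone out here
  ```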

    • New Math functions. ES6 adds hyperbolic trig functions and their inverses, Math.cbrt(x) for computing cube roots, Math.hypot(x, y) for computing the hypotenuse of a right triangle, Math.log2(x) and Math.log10(x) for computing logarithms in common bases, Math.clz32(x) to help compute integer logarithms, and a few others.

      Math.sign(x) gets the sign of a number.

ES6 also adds Math.imul(x, y), which does signed multiplication modulo 2^32. This is a very strange thing to want... unless you are working around the fact that JS does not have 64-bit integers or big integers. In that case it’s very handy. This helps compilers. Emscripten uses this function to implement 64-bit integer multiplication in JS.

      Similarly Math.fround(x) is handy for compilers that need to support 32-bit floating-point numbers.
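  A few of the new functions in action:

  ```javascript
  console.log(Math.cbrt(27));     // 3
  console.log(Math.hypot(3, 4));  // 5
  console.log(Math.log2(1024));   // 10
  console.log(Math.log10(1000));  // 3
  console.log(Math.sign(-5));     // -1
  console.log(Math.clz32(1));     // 31: leading zero bits in the 32-bit value 1
  console.log(Math.imul(0xffffffff, 5));  // -5: signed multiplication mod 2^32
  ```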

    The end

    Is this everything?

Well, no. I didn’t even mention the object that’s the common prototype of all built-in iterators, the top-secret GeneratorFunction constructor,, v2), how Symbol.species helps support subclassing builtins like Array and Promise, or how ES6 specifies details of how multiple globals work that have never been standardized before.

    I’m sure I missed a few things, too.

    But if you’ve been following along, you have a pretty good picture of where we’re going. You know you can use ES6 features today, and if you do, you’ll be opting in to a better language.

    A few days ago, Josh Mock remarked to me that he had just used eight different ES6 features in about 50 lines of code, without even really thinking about it. Modules, classes, argument defaults, Set, Map, template strings, arrow functions, and let. (He missed the for-of loop.)

    This has been my experience, too. The new features hang together very well. They end up affecting almost every line of JS code you write.

    Meanwhile, every JS engine is hurrying to implement and optimize the features we’ve been discussing for the past few months.

    Once we’re done, the language will be complete. We’ll never have to change anything again. I’ll have to find something else to work on.

    Just kidding. Proposals for ES7 are already picking up steam. Just to pick a few:

• Exponentiation operator. 2 ** 8 will return 256. Implemented in Firefox Nightly.

    • Array.prototype.includes(value). Returns true if this array contains the given value. Implemented in Firefox Nightly; polyfillable.

    • SIMD. Exposes 128-bit SIMD instructions provided by modern CPUs. These instructions do an arithmetic operation on 2, or 4, or 8 adjacent array elements at a time. They can dramatically speed up a wide variety of algorithms for streaming audio and video, cryptography, games, image processing, and more. Very low-level, very powerful. Implemented in Firefox Nightly; polyfillable.

    • Async functions. We hinted at this feature in the post on generators. Async functions are like generators, but specialized for asynchronous programming. When you call a generator, it returns an iterator. When you call an async function, it returns a promise. Generators use the yield keyword to pause and produce a value; async functions instead use the await keyword to pause and wait for a promise.

      It’s hard to describe them in a few sentences, but async functions will be the landmark feature in ES7.

    • Typed Objects. This is a follow-up to typed arrays. Typed arrays have elements that are typed. A typed object is simply an object whose properties are typed.

// Create a new struct type. Every Point has two fields
// named x and y.
var Point = new TypedObject.StructType({
  x: TypedObject.int32,
  y: TypedObject.int32
});

// Now create an instance of that type.
var p = new Point({x: 800, y: 600});
console.log(p.x); // 800

      You would only do this for performance reasons. Like typed arrays, typed objects offer a few of the benefits of typing (compact memory usage and speed), but on a per-object, opt-in basis, in contrast to languages where everything is statically typed.

They’re also interesting for JS as a compilation target.

      Implemented in Firefox Nightly.

    • Class and property decorators. Decorators are tags you add to a property, class, or method. An example shows what this is about:

import debug from "jsdebug";

class Person {
  hasRoundHead(assert) {
    return this.head instanceof Spheroid;
  }
}

      @debug.logWhenCalled is the decorator here. You can imagine what it does to the method.

      The proposal explains how this would work in detail, with many examples.

    There’s one more exciting development I have to mention. This one is not a language feature.

    TC39, the ECMAScript standard committee, is moving toward more frequent releases and a more public process. Six years passed between ES5 and ES6. The committee aims to ship ES7 just 12 months after ES6. Subsequent editions of the standard will be released on a 12-month cadence. Some of the features listed above will be ready in time. They will “catch the train” and become part of ES7. Those that aren’t finished in that timeframe can catch the next train.

    It’s been great fun sharing the staggering amount of good stuff in ES6. It’s also a pleasure to be able to say that a feature dump of this size will probably never happen again.

    Thanks for joining us for ES6 In Depth! I hope you enjoyed it. Keep in touch.

  5. Flying a drone in your browser with WebBluetooth

    There are tons of devices around us, and the number is only growing. And more and more of these devices come with connectivity. From suitcases to plants to eggs. This brings new challenges: how can we discover devices around us, and how can we interact with them?

    Currently device interactions are handled by separate apps running on mobile phones. But this does not solve the discoverability issue. I need to know which devices are around me before I know which app to install. When I’m standing in front of a meeting room I don’t care about which app to install, or even what the name or ID of the meeting room is. I just want to make a booking or see availability, and as fast as possible.


    Scott Jenson from Google has been thinking about discoverability for a while, and came up with the Physical Web project, whose premise is:

    Walk up and use anything

The idea is that you use Bluetooth Smart, the low-energy variant of Bluetooth, to broadcast URLs to the world. Your phone picks up the advertisement package, decodes it, and shows some information to the user. One click and the user is redirected to a web page with relevant content. This can be used for a variety of things:

    • A meeting room can broadcast a URL to its calendar for scheduling.
    • A movie poster can broadcast a URL to show viewing times and trailers.
    • A prescription medicine can broadcast a URL with information about the medication or how to refill it.
    • Look around you. Examples of other use cases are everywhere, waiting to be implemented.

    However, the material world is not a one-way street, and this presents a problem. Broadcasting a URL is great for informing me about things like movie times, but it does not allow me to interact more deeply with the device. If I want to fly a drone I don’t just want to discover that there’s a drone near me, I also want to interact with the drone straight away. We need to have a way for web pages to communicate back to devices.

Enter the work of the Web Bluetooth W3C group, which includes representatives of Mozilla’s Bluetooth team and is working on bringing Bluetooth APIs to the browser. If the Physical Web allows us to walk up to any device and get the URL of a web app, then WebBluetooth allows the web app to connect to the device and talk back to it.

    At this point, there’s still a lot of work to be done. The bluetooth API is only exposed to certified content on Firefox OS, and thus is not currently accessible to ordinary web content. Until security issues have been cleared this will continue to be the case. A second issue is that Physical Web beacons broadcast a URL. How can a specific web resource know which specific device has broadcast the URL?

    As you can see, lots of work remains to be done, but this blog is called Mozilla Hacks for a reason. Let’s start hacking!

    Adding Physical Web support to Firefox OS

    Since most of the work around WebBluetooth has been done for Firefox OS, I’ve made it my weapon of choice. I want the process of discovering devices to be as painless and obvious as possible. I figured the lockscreen would be the best possible place. Whenever you have bluetooth enabled on your Firefox OS phone, a new notification would then pop up asking you to search for devices (tracking bug).

    Tap, tap, tap

    navigator.mozBluetooth.defaultAdapter.startLeScan([]).then(handle => {
      handle.ondevicefound = e => {
        console.log('Found', e.device, e.scanRecord);
      };
      // Scanning is battery-intensive, so stop after five seconds
      setTimeout(() => {
        navigator.mozBluetooth.defaultAdapter.stopLeScan(handle);
      }, 5000);
    }, err => console.error(err));

    As you can see on the third line, we have a scanRecord. This is the advertisement package that the device broadcasts. It’s nothing more than a set of bytes, and you are free to declare your own protocol. For our purpose—broadcasting URLs over bluetooth—Google has already developed two ways of encoding: UriBeacon and Eddystone, both of which can be found in the wild today.

    Parsing the advertisement package is pretty straightforward. Here’s some code I wrote to parse UriBeacons. Parsing the UriBeacon will give you a URL, which is often shortened because of the limited number of bytes in the advertisement package, and that makes for an uninformative UI:

    So what the hack (pun intended) is this device?

    To get some information about the web page behind the beacon we can do an AJAX request and parse the content of the page to enhance the information displayed on the lockscreen:

    function resolveURI(uri, ele) {
      var x = new XMLHttpRequest({ mozSystem: true });
      x.onload = e => {
        var h = document.createElement('html');
        h.innerHTML = x.responseText;
        // After following 301/302s, this contains the last resolved URL
        console.log('url is', x.responseURL);
        var titleEl = h.querySelector('title');
        var metaEl = h.querySelector('meta[name="description"]');
        var bodyEl = h.querySelector('body');
        if (titleEl && titleEl.textContent) {
          console.log('title is', titleEl.textContent);
        }
        if (metaEl && metaEl.content) {
          console.log('description is', metaEl.content);
        } else if (bodyEl && bodyEl.textContent) {
          console.log('description is', bodyEl.textContent);
        }
      };
      x.onerror = err => console.error('Loading', uri, 'failed', err);'GET', uri);
      x.send();
    }

    This yields a nicer notification that actually describes the beacon.

    Much nicer

    A drone that doesn’t broadcast a URL

    Unfortunately not all BLE devices broadcast URLs at this point. All of this new technology is experimental and very cool, but not yet fully implemented. We’ve got high hopes that this will change in the near future. Because I still want to be able to fly my drone now, I added some code that transforms the data a drone broadcasts into a URL.

    The web application

    Now that we’ve solved the issue of discoverability, we need a way to control the drone from the browser. Since bluetooth access is not available for web content, we need to make some changes to Gecko, where the Firefox OS security model is implemented. If you are interested in the changes, here’s the commit. We also needed a sneaky hack to make sure the tab’s process is run with the right Linux permissions.

    With these changes in place, we open up navigator.mozBluetooth to all content, and run every tab in Firefox in a process that is part of the ‘bluetooth’ Linux group, ensuring access to the hardware. If you’re playing around with this build later, please note that with my “sneaky” hack implemented, you are now running a build where no security is guaranteed. Using a build hack like this, with security disabled, is fine for IoT experimentation, but is definitely not recommended as a production solution. When the Web Bluetooth spec is finalized, and official support lands in Gecko, proper security will be implemented.

    With the API in place, we can start writing the application. When you tap on the Physical Web notification on the lockscreen, we pass the device address in as a parameter. This is subject to change. For the ongoing discussion take a look at the Eddystone -> Web Bluetooth handoff.

    var address = 'aa:bb:cc:dd:ee'; // parsed from URL
    var counter = 0;
    navigator.mozBluetooth.defaultAdapter.startLeScan([]).then(handle => {
      handle.ondevicefound = e => {
        if (e.device.address !== address) return;
        // write some code to fly the drone
      };
    }, err => console.error(err));

    Now that we have a reference to the device address, we can set up a connection. The protocol we use to talk back and forth to the device is called GATT, the Generic Attribute Profile. The idea behind GATT is that a device can have multiple standard services. For example, a heart rate sensor can implement the battery service and the heart rate service. Because these services are standardized, a consuming application only needs to write the implementation logic once, and can talk to any heart rate monitor.

    Characteristics are aspects of a given service. For example, a heart rate service will implement heart rate measurement and heart rate max. Characteristics can be readable and writeable depending on how they are defined. The same goes for the drone: it has a service for flying the drone and characteristics to let you control the drone from your phone.
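    To make the service/characteristic relationship concrete, here is the same flatten-and-filter lookup we will use against the real API, run over plain mock objects (the UUIDs and property lists here are invented for illustration):

```javascript
// A GATT device modeled as plain data: services contain characteristics.
var services = [
  { uuid: 'battery-service', characteristics: [
      { uuid: 'battery-level', properties: ['read', 'notify'] }
  ]},
  { uuid: 'heart-rate-service', characteristics: [
      { uuid: 'heart-rate-measurement', properties: ['notify'] },
      { uuid: 'heart-rate-max', properties: ['read'] }
  ]}
];

// Flatten every service's characteristics and find one by UUID
function findCharacteristic(services, uuid) {
  return services
    .reduce((all, s) => all.concat(s.characteristics), [])
    .filter(c => c.uuid === uuid)[0];
}

findCharacteristic(services, 'heart-rate-max').properties; // ['read']
```

    Because services are standardized, a consumer only needs to know the UUIDs it cares about; it never has to know which concrete device is on the other end.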

    Luckily Martin Dlouhý (as far as I can tell, he was the first) has already decoded the communication protocol for the Rolling Spider drone, so we can use his work and the new Bluetooth API to start flying…

    // Have a way of knowing when the connection drops
    e.device.gatt.onconnectionstatechanged = cse => {
      console.log('connectionStateChanged', cse);
    };
    // Receive events (battery change f.e.) from device
    e.device.gatt.oncharacteristicchanged = cce => {
      console.log('characteristicChanged', cce);
    };
    // Set up the connection
    e.device.gatt.connect().then(() => {
      return e.device.gatt.discoverServices();
    }).then(() => {
      // devices have services, and services have characteristics
      var services =;
      console.log('services', services);
      // find the characteristic that handles flying the drone
      var c = services.reduce((curr, f) => curr.concat(f.characteristics), [])
        .filter(c => c.uuid === '9a66fa0b-0800-9191-11e4-012d1540cb8e')[0];
      // take off instruction!
      var buffer = new Uint8Array([0x04, counter++, 0x02, 0x00, 0x01, 0x00]);
      c.writeValue(buffer).then(() => {
        console.log('take off successful!');
      });
    });

    The Mozilla team in Taipei used this to create a demo application for Firefox OS, demonstrating the capabilities of the new API during the Mozilla Work Week in Whistler last June. With the API now available in the browser, we can take that work, host it as a web page, beef up the graphics a bit, and have a web site flying a drone!


    Such amaze. Much drone.


    It’s an exciting time for the Web! With more and more devices coming online we need a way to discover and interact with them without much hassle. The combination of Physical Web and WebBluetooth allows us to create frictionless experiences for users willing to interact with real-world appliances and new devices. Although we’re a long way off, we’re heading in the right direction. Google and Mozilla are actively developing the technology; I’ve got high hopes that everything in this blog post will be common knowledge in a year!

    If that’s not fast enough for you, you can play around with an experimental build of Firefox OS which enables everything seen in this post. This build runs on the Flame developer device. First, upgrade to nightly_v3 base image, then flash this build.


    Thanks to Tzu-Lin Huang and Sean Lee for building the initial drone code; the WebBluetooth team in Mozilla Taipei (especially Jocelyn Liu) for quick feedback and patches when I complained about the API; Chris Williams for putting the drone in my gift bag; Scott Jenson for answering my numerous questions about the Physical Web; and Telenor Digital for letting me play with drones for two weeks.

  6. ES6 In Depth: Modules

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    When I started on Mozilla’s JavaScript team back in 2007, the joke was that the length of a typical JavaScript program was one line.

    This was two years after Google Maps launched. Not long before that, the predominant use of JavaScript had been form validation, and sure enough, your average <input onchange=> handler would be… one line of code.

    Things have changed. JavaScript projects have grown to jaw-dropping sizes, and the community has developed tools for working at scale. One of the most basic things you need is a module system, a way to spread your work across multiple files and directories—but still make sure all your bits of code can access one another as needed—but also be able to load all that code efficiently. So naturally, JavaScript has a module system. Several, actually. There are also several package managers, tools for installing all that software and coping with high-level dependencies. You might think ES6, with its new module syntax, is a little late to the party.

    Well, today we’ll see whether ES6 adds anything to these existing systems, and whether or not future standards and tools will be able to build on it. But first, let’s just dive in and see what ES6 modules look like.

    Module basics

    An ES6 module is a file containing JS code. There’s no special module keyword; a module mostly reads just like a script. There are two differences.

    • ES6 modules are automatically strict-mode code, even if you don’t write "use strict"; in them.

    • You can use import and export in modules.

    Let’s talk about export first. Everything declared inside a module is local to the module, by default. If you want something declared in a module to be public, so that other modules can use it, you must export that feature. There are a few ways to do this. The simplest way is to add the export keyword.

    // kittydar.js - Find the locations of all the cats in an image.
    // (Heather Arthur wrote this library for real)
    // (but she didn't use modules, because it was 2013)
    export function detectCats(canvas, options) {
      var kittydar = new Kittydar(options);
      return kittydar.detectCats(canvas);
    }

    export class Kittydar {
      ... several methods doing image processing ...
    }

    // This helper function isn't exported.
    function resizeCanvas() {
      ...
    }

    You can export any top-level function, class, var, let, or const.

    And that’s really all you need to know to write a module! You don’t have to put everything in an IIFE or a callback. Just go ahead and declare everything you need. Since the code is a module, not a script, all the declarations will be scoped to that module, not globally visible across all scripts and modules. Export the declarations that make up the module’s public API, and you’re done.
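    For contrast, here is roughly the ceremony that modules make unnecessary: the old IIFE-based module pattern, with stand-in function bodies:

```javascript
// Pre-ES6: an IIFE creates the private scope that a module now gets for free.
var kittydar = (function () {
  // not "exported": invisible outside the IIFE
  function resizeCanvas() { /* ... */ }

  function detectCats(canvas, options) {
    return []; // stand-in body for this sketch
  }

  // the returned object is the hand-rolled "public API"
  return { detectCats: detectCats };
})();

typeof kittydar.detectCats;   // "function"
typeof kittydar.resizeCanvas; // "undefined"
```

    With export, the boundary between public and private is declared right where each function is defined, instead of being assembled by hand at the bottom of the file.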

    Apart from exports, the code in a module is pretty much just normal code. It can use globals like Object and Array. If your module runs in a web browser, it can use document and XMLHttpRequest.

    In a separate file, we can import and use the detectCats() function:

    // demo.js - Kittydar demo program
    import {detectCats} from "kittydar.js";
    function go() {
        var canvas = document.getElementById("catpix");
        var cats = detectCats(canvas);
        drawRectangles(canvas, cats);
    }

    To import multiple names from a module, you would write:

    import {detectCats, Kittydar} from "kittydar.js";

    When you run a module containing an import declaration, the modules it imports are loaded first, then each module body is executed in a depth-first traversal of the dependency graph, avoiding cycles by skipping anything already executed.
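    That execution order is easy to model. Here is a toy traversal over a made-up dependency graph (the real spec is more involved, but the shape is the same):

```javascript
// Toy model of ES6 module execution order: run dependencies first
// (depth-first), skipping any module that has already started executing.
function executionOrder(graph, entry) {
  var started = new Set();
  var order = [];
  (function visit(name) {
    if (started.has(name)) return; // already executed (or executing): skip
    started.add(name);             // marking first is what breaks cycles
    for (var dep of graph[name]) visit(dep);
    order.push(name);              // "execute" the body after its deps
  })(entry);
  return order;
}

// main imports a and b; a imports b; b imports main (a cycle!)
var graph = { main: ['a', 'b'], a: ['b'], b: ['main'] };
executionOrder(graph, 'main'); // ['b', 'a', 'main']
```

    Note how the cycle through b back to main does not loop forever: main is already marked, so b simply runs before it.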

    And those are the basics of modules. It’s really quite simple. ;-)

    Export lists

    Rather than tagging each exported feature, you can write out a single list of all the names you want to export, wrapped in curly braces:

    export {detectCats, Kittydar};
    // no `export` keyword required here
    function detectCats(canvas, options) { ... }
    class Kittydar { ... }

    An export list doesn’t have to be the first thing in the file; it can appear anywhere in a module file’s top-level scope. You can have multiple export lists, or mix export lists with other export declarations, as long as no name is exported more than once.

    Renaming imports and exports

    Once in a while, an imported name happens to collide with some other name that you also need to use. So ES6 lets you rename things when you import them:

    // suburbia.js
    // Both these modules export something named `flip`.
    // To import them both, we must rename at least one.
    import {flip as flipOmelet} from "eggs.js";
    import {flip as flipHouse} from "real-estate.js";

    Similarly, you can rename things when you export them. This is handy if you want to export the same value under two different names, which occasionally happens:

    // unlicensed_nuclear_accelerator.js - media streaming without drm
    // (not a real library, but maybe it should be)
    function v1() { ... }
    function v2() { ... }
    export {
      v1 as streamV1,
      v2 as streamV2,
      v2 as streamLatestVersion
    };

    Default exports

    The new standard is designed to interoperate with existing CommonJS and AMD modules. So suppose you have a Node project and you’ve done npm install lodash. Your ES6 code can import individual functions from Lodash:

    import {each, map} from "lodash";
    each([3, 2, 1], x => console.log(x));

    But perhaps you’ve gotten used to seeing _.each rather than each and you still want to write things that way. Or maybe you want to use _ as a function, since that’s a useful thing to do in Lodash.

    For that, you can use a slightly different syntax: import the module without curly braces.

    import _ from "lodash";

    This shorthand is equivalent to import {default as _} from "lodash";. All CommonJS and AMD modules are presented to ES6 as having a default export, which is the same thing that you would get if you asked require() for that module—that is, the exports object.

    ES6 modules were designed to let you export multiple things, but for existing CommonJS modules, the default export is all you get. For example, as of this writing, the famous colors package doesn’t have any special ES6 support as far as I can tell. It’s a collection of CommonJS modules, like most packages on npm. But you can import it right into your ES6 code.

    // ES6 equivalent of `var colors = require("colors/safe");`
    import colors from "colors/safe";

    If you’d like your own ES6 module to have a default export, that’s easy to do. There’s nothing magic about a default export; it’s just like any other export, except it’s named "default". You can use the renaming syntax we already talked about:

    let myObject = {
      field1: value1,
      field2: value2
    };
    export {myObject as default};

    Or better yet, use this shorthand:

    export default {
      field1: value1,
      field2: value2
    };

    The keywords export default can be followed by any value: a function, a class, an object literal, you name it.

    Module objects

    Sorry this is so long. But JavaScript is not alone: for some reason, module systems in all languages tend to have a ton of individually small, boring convenience features. Fortunately, there’s just one thing left. Well, two things.

    import * as cows from "cows";

    When you import *, what’s imported is a module namespace object. Its properties are the module’s exports. So if the “cows” module exports a function named moo(), then after importing “cows” this way, you can write: cows.moo().

    Aggregating modules

    Sometimes the main module of a package is little more than importing all the package’s other modules and exporting them in a unified way. To simplify this kind of code, there’s an all-in-one import-and-export shorthand:

    // world-foods.js - good stuff from all over
    // import "sri-lanka" and re-export some of its exports
    export {Tea, Cinnamon} from "sri-lanka";
    // import "equatorial-guinea" and re-export some of its exports
    export {Coffee, Cocoa} from "equatorial-guinea";
    // import "singapore" and export ALL of its exports
    export * from "singapore";

    Each one of these export-from statements is similar to an import-from statement followed by an export. Unlike a real import, this doesn’t add the re-exported bindings to your scope. So don’t use this shorthand if you plan to write some code in world-foods.js that makes use of Tea. You’ll find that it’s not there.

    If any name exported by “singapore” happened to collide with the other exports, that would be an error, so use export * with care.

    Whew! We’re done with syntax! On to the interesting parts.

    What does import actually do?

    Would you believe… nothing?

    Oh, you’re not that gullible. Well, would you believe the standard mostly doesn’t say what import does? And that this is a good thing?

    ES6 leaves the details of module loading entirely up to the implementation. The rest of module execution is specified in detail.

    Roughly speaking, when you tell the JS engine to run a module, it has to behave as though these four steps are happening:

    1. Parsing: The implementation reads the source code of the module and checks for syntax errors.

    2. Loading: The implementation loads all imported modules (recursively). This is the part that isn’t standardized yet.

    3. Linking: For each newly loaded module, the implementation creates a module scope and fills it with all the bindings declared in that module, including things imported from other modules.

      This is the part where if you try to import {cake} from "paleo", but the “paleo” module doesn’t actually export anything named cake, you’ll get an error. And that’s too bad, because you were so close to actually running some JS code. And having cake!

    4. Run time: Finally, the implementation runs the statements in the body of each newly-loaded module. By this time, import processing is already finished, so when execution reaches a line of code where there’s an import declaration… nothing happens!

    See? I told you the answer was “nothing”. I don’t lie about programming languages.

    But now we get to the fun part of this system. There’s a cool trick. Because the system doesn’t specify how loading works, and because you can figure out all the dependencies ahead of time by looking at the import declarations in the source code, an implementation of ES6 is free to do all the work at compile time and bundle all your modules into a single file to ship them over the network! And tools like webpack actually do this.

    This is a big deal, because loading scripts over the network takes time, and every time you fetch one, you may find that it contains import declarations that require you to load dozens more. A naive loader would require a lot of network round trips. But with webpack, not only can you use ES6 with modules today, you get all the software engineering benefits with no run-time performance hit.

    A detailed specification of module loading in ES6 was originally planned—and built. One reason it isn’t in the final standard is that there wasn’t consensus on how to achieve this bundling feature. I hope someone figures it out, because as we’ll see, module loading really should be standardized. And bundling is too good to give up.

    Static vs. dynamic, or: rules and how to break them

    For a dynamic language, JavaScript has gotten itself a surprisingly static module system.

    • All flavors of import and export are allowed only at the top level of a module. There are no conditional imports or exports, and you can’t use import in function scope.

    • All exported identifiers must be explicitly exported by name in the source code. You can’t programmatically loop through an array and export a bunch of names in a data-driven way.

    • Module objects are frozen. There is no way to hack a new feature into a module object, polyfill style.

    • All of a module’s dependencies must be loaded, parsed, and linked eagerly, before any module code runs. There’s no syntax for an import that can be loaded lazily, on demand.

    • There is no error recovery for import errors. An app may have hundreds of modules in it, and if anything fails to load or link, nothing runs. You can’t import in a try/catch block. (The upside here is that because the system is so static, webpack can detect those errors for you at compile time.)

    • There is no hook allowing a module to run some code before its dependencies load. This means that modules have no control over how their dependencies are loaded.

    The system is quite nice as long as your needs are static. But you can imagine needing a little hack sometimes, right?

    That’s why whatever module-loading system you use will have a programmatic API to go alongside ES6’s static import/export syntax. For example, webpack includes an API that you can use for “code splitting”, loading some bundles of modules lazily on demand. The same API can help you break most of the other rules listed above.

    The ES6 module syntax is very static, and that’s good—it’s paying off in the form of powerful compile-time tools. But the static syntax was designed to work alongside a rich dynamic, programmatic loader API.

    When can I use ES6 modules?

    To use modules today, you’ll need a compiler such as Traceur or Babel. Earlier in this series, Gastón I. Silva showed how to use Babel and Broccoli to compile ES6 code for the web; building on that article, Gastón has a working example with support for ES6 modules. This post by Axel Rauschmayer contains an example using Babel and webpack.

    The ES6 module system was designed mainly by Dave Herman and Sam Tobin-Hochstadt, who defended the static parts of the system against all comers (including me) through years of controversy. Jon Coppeard is implementing modules in Firefox. Additional work on a JavaScript Loader Standard is underway. Work to add something like <script type=module> to HTML is expected to follow.

    And that’s ES6.

    This has been so much fun that I don’t want it to end. Maybe we should do just one more episode. We could talk about odds and ends in the ES6 spec that weren’t big enough to merit their own article. And maybe a little bit about what the future holds. Please join me next week for the stunning conclusion of ES6 In Depth.

  7. Trainspotting: Firefox 40

    Trainspotting is a series of articles highlighting features in the latest version of Firefox. A new version of Firefox is shipped every six weeks – we at Mozilla call this pattern “release trains.”

    Firefox keeps on shippin' shippin' shippin' /
    Into the future…
    —Steve Miller Band, probably

    Like a big ol’ jet airliner, a new version of Firefox has been cleared for takeoff! Let’s take a look at some of the snazzy new things in store for both users and developers.

    For a full list of changes and additions, take a look at the Firefox 40 release notes.

    Developer Tools

    Find what you’re looking for in the Inspector, but don’t know where it is on the page? You can now scroll an element into view via the Markup View in the Inspector:


    Sift through complex stylesheets more easily by filtering CSS rules:

    You can now toggle how colors are represented by Shift+clicking on them in the Rules view:


    The Web Console will now warn of code that is unreachable because it comes after a return statement:


    The Developer Tools have also gained a powerful new set of Performance analysis tools, which are demonstrated along with all the other Firefox 40 Developer Tools changes in this in-depth blog post.

    Signed Add-ons


    Malicious extensions are a growing problem in all browsers. Because Firefox add-ons have tremendous power, there needs to be a better way to protect users from malicious code running wild. Starting in Firefox 42, all Firefox add-ons must be signed before end users can install them. In Firefox 40, users will be warned about unsigned extensions, but can opt to install them anyway. You can read more about why extension signing is needed, and also check out the overall plan for the roll-out of signed extensions.

    Event offsetX and offsetY

    Sometimes a good idea is a good idea, even if it takes 14 years! Firefox now supports the offsetX and offsetY properties for MouseEvents. This makes it much easier for code to track mouse events on an element within a page, without needing to know where in the page the element is. As always, perform capability checks to ensure that your code works across browsers:

    el.addEventListener('mousemove', function (e) {
      var x, y;
      if ('offsetX' in e) {
        x = e.offsetX;
        y = e.offsetY;
      } else {
        // adjustment needed for every offsetParent up the chain
        x = e.clientX /* ... */;
        y = e.clientY /* ... */;
      }
      addGlitterMouseTrails(x, y);
    });
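    The fallback branch has to walk the offsetParent chain itself. With plain objects standing in for DOM elements, that accumulation looks like this (a sketch; production code must also account for borders and scroll positions):

```javascript
// Accumulate offsetLeft/offsetTop up the offsetParent chain to find an
// element's position, then derive offset coordinates from client coordinates.
function pageOffset(el) {
  var x = 0, y = 0;
  for (; el; el = el.offsetParent) {
    x += el.offsetLeft;
    y += el.offsetTop;
  }
  return { x: x, y: y };
}

// Mock elements standing in for a small DOM tree
var body = { offsetLeft: 0, offsetTop: 0, offsetParent: null };
var section = { offsetLeft: 10, offsetTop: 50, offsetParent: body };
var el = { offsetLeft: 5, offsetTop: 5, offsetParent: section };

var origin = pageOffset(el); // { x: 15, y: 55 }
var offsetX = 120 - origin.x; // a clientX of 120 maps to offsetX 105
```

    Native offsetX/offsetY does all of this for you, which is exactly why the property is so welcome 14 years on.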

    But Wait, There’s More!

    Every new version of Firefox has dozens of bug fixes and changes to make browsing and web development better; I’ve only touched upon a few. Finally, it’s well worth noting that 55 developers contributed their first code change to Firefox in this release, and 49 of them were brand new volunteers. Shipping would not be the same without these awesome contributions! Thank you!

    For all the rest of the details, check out the Developer Release Notes or even the full list of fixed bugs. Happy Browsing!

  8. ES6 In Depth: Subclassing

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Two weeks ago, we described the new classes system added in ES6 for handling trivial cases of object constructor creation. We showed how you can use it to write code that looks like this:

    class Circle {
        constructor(radius) {
            this.radius = radius;
        }

        static draw(circle, canvas) {
            // Canvas drawing code
        }

        static get circlesMade() {
            return !this._count ? 0 : this._count;
        }
        static set circlesMade(val) {
            this._count = val;
        }

        area() {
            return Math.pow(this.radius, 2) * Math.PI;
        }

        get radius() {
            return this._radius;
        }
        set radius(radius) {
            if (!Number.isInteger(radius))
                throw new Error("Circle radius must be an integer.");
            this._radius = radius;
        }
    }

    Unfortunately, as some people pointed out, there wasn’t time to talk then about the rest of the power of classes in ES6. Like traditional class systems (C++ or Java, for example), ES6 allows for inheritance, where one class uses another as a base, and then extends it by adding more features of its own. Let’s take a closer look at the possibilities of this new feature.

    Before we get started talking about subclassing, it will be useful to spend a moment reviewing property inheritance and the dynamic prototype chain.

    JavaScript Inheritance

    When we create an object, we get the chance to put properties on it, but it also inherits the properties of its prototype objects. JavaScript programmers will be familiar with the existing Object.create API which allows us to do this easily:

    var proto = {
        value: 4,
        method() { return 14; }
    };
    var obj = Object.create(proto);
    obj.value; // 4
    obj.method(); // 14

    Further, when we add properties to obj with the same name as ones on proto, the properties on obj shadow those on proto.

    obj.value = 5;
    obj.value; // 5
    proto.value; // 4

    Basic Subclassing

    With this in mind, we can now see how we should hook up the prototype chains of the objects created by a class. Recall that when we create a class, we make a new function corresponding to the constructor method in the class definition which holds all the static methods. We also create an object to be the prototype property of that created function, which will hold all the instance methods. To create a new class which inherits all the static properties, we will have to make the new function object inherit from the function object of the superclass. Similarly, we will have to make the prototype object of the new function inherit from the prototype object of the superclass, for the instance methods.

    That description is very dense. Let’s try an example, showing how we could hook this up without new syntax, and then adding a trivial extension to make things more aesthetically pleasing.

    Continuing with our previous example, suppose we have a class Shape that we want to subclass:

    class Shape {
        get color() {
            return this._color;
        }
        set color(c) {
            this._color = parseColorAsRGB(c);
            this.markChanged();  // repaint the canvas later
        }
    }

    When we try to write code that does this, we have the same problem we had in the previous post with static properties: there’s no syntactic way to change the prototype of a function as you define it. While you can get around this with Object.setPrototypeOf, the approach is generally less performant and less optimizable for engines than having a way to create a function with the intended prototype.

    class Circle {
        // As above
    }
    // Hook up the instance properties
    Object.setPrototypeOf(Circle.prototype, Shape.prototype);
    // Hook up the static properties
    Object.setPrototypeOf(Circle, Shape);

    This is pretty ugly. We added the classes syntax so that we could encapsulate all of the logic about how the final object would look in one place, rather than having other “hooking things up” logic afterwards. Java, Ruby, and other object-oriented languages have a way of declaring that a class declaration is a subclass of another, and we should too. We use the keyword extends, so we can write:

    class Circle extends Shape {
        // As above
    }

    You can put any expression you want after extends, as long as it’s a valid constructor with a prototype property. For example:

    • Another class
    • Class-like functions from existing inheritance frameworks
    • A normal function
    • A variable that contains a function or class
    • A property access on an object
    • A function call

    You can even use null, if you don’t want instances to inherit from Object.prototype.
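    With extends null, the class’s prototype object itself has no prototype, which you can verify directly:

```javascript
// A class whose instances would not inherit from Object.prototype:
// no surprise `toString`, `hasOwnProperty`, and friends.
class Dict extends null {
}

Object.getPrototypeOf(Dict.prototype); // null
'toString' in Dict.prototype;          // false
```

    (Constructing instances of such a class takes some extra care in the constructor, but the prototype wiring above is the point here.)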

    Super Properties

    So we can make subclasses, and we can inherit properties, and sometimes our methods will even shadow (think override) the methods we inherit. But what if you want to circumvent this shadowing mechanic?

    Suppose we want to write a subclass of our Circle class that handles scaling the circle by some factor. To do this, we could write the following somewhat contrived class:

    class ScalableCircle extends Circle {
        get radius() {
            return this.scalingFactor * super.radius;
        }
        set radius(radius) {
            throw new Error("ScalableCircle radius is constant. " +
                            "Set scaling factor instead.");
        }

        // Code to handle scalingFactor
    }

    Notice that the radius getter uses super.radius. This new super keyword allows us to bypass our own properties, and look for the property starting with our prototype, thus bypassing any shadowing we may have done.

    Super property accesses (super[expr] works too, by the way) can be used in any function defined with method definition syntax. While these functions can be pulled off of the original object, the accesses are tied to the object on which the method was first defined. This means that pulling the method off into a local variable will not change the behavior of the super access.

    var obj = {
        toString() {
            return "MyObject: " + super.toString();
        }
    };

    obj.toString(); // MyObject: [object Object]

    var a = obj.toString;
    a(); // MyObject: [object Object]

    Subclassing Builtins

    Another thing you might want to do is write extensions to the JavaScript language builtins. The builtin data structures add a huge amount of power to the language, and being able to create new types that leverage that power is amazingly useful, and was a foundational part of the design of subclassing. Suppose you want to write a versioned array. (I know. Trust me, I know.) You should be able to make changes and then commit them, or roll back to previously committed changes. One way to write a quick version of this is by subclassing Array.

    class VersionedArray extends Array {
        constructor() {
            super();
            this.history = [[]];
        }
        commit() {
            // Save changes to history.
            this.history.push(this.slice());
        }
        revert() {
            this.splice(0, this.length, ...this.history[this.history.length - 1]);
        }
    }

    Instances of VersionedArray retain a few important properties. They’re bona fide instances of Array, complete with map, filter, and sort. Array.isArray() will treat them like arrays, and they will even get the auto-updating array length property. Even further, functions that would return a new array (like Array.prototype.slice()) will return a VersionedArray!
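    You can see this behavior in a quick sketch (Tagged is a made-up subclass); in engines with full support, builtin array methods produce instances of the subclass:

```javascript
// A minimal Array subclass; no extra behavior needed to see the effect.
class Tagged extends Array {}

var t = Tagged.of(1, 2, 3);        // Array.of respects the subclass
var doubled = t.map(x => x * 2);   // map returns a Tagged, not a plain Array

Array.isArray(t);          // true
doubled instanceof Tagged; // true
```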

    Derived Class Constructors

    You may have noticed the super() in the constructor method of that last example. What gives?

    In traditional class models, constructors are used to initialize any internal state for instances of the class. Each successive subclass is responsible for initializing the state associated with that specific subclass. We want to chain these calls, so that subclasses share the same initialization code with the class they are extending.

    To call a super constructor, we use the super keyword again, this time as if it were a function. This syntax is only valid inside constructor methods of classes that use extends. With super, we can rewrite our Shape class.

    class Shape {
        constructor(color) {
            this._color = color;
        }
    }

    class Circle extends Shape {
        constructor(color, radius) {
            super(color);
            this.radius = radius;
        }

        // As from above
    }

    In JavaScript, we tend to write constructors that operate on the this object, installing properties and initializing internal state. Normally, the this object is created when we invoke the constructor with new, as if with Object.create() on the constructor’s prototype property. However, some builtins have different internal object layouts. Arrays, for example, are laid out differently than regular objects in memory. Because we want to be able to subclass builtins, we let the basemost constructor allocate the this object. If it’s a builtin, we will get the object layout we want, and if it’s a normal constructor, we will get the default this object we expect.

    Probably the strangest consequence is the way this is bound in subclass constructors. Until we run the base constructor, and allow it to allocate the this object, we don’t have a this value. Consequently, all accesses to this in subclass constructors that occur before the call to the super constructor will result in a ReferenceError.
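    Here's a minimal sketch of that rule (Base and Derived are illustrative names):

```javascript
class Base {
    constructor() { this.base = true; }
}

class Derived extends Base {
    constructor() {
        var sawError = false;
        try {
            this.early = 1;  // `this` is not allocated yet: ReferenceError
        } catch (e) {
            sawError = e instanceof ReferenceError;
        }
        super();                  // Base allocates and initializes `this`
        this.sawError = sawError; // accessing `this` is fine from here on
    }
}

var d = new Derived();
d.sawError; // true
```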

    As we saw in the last post, where the constructor method could be omitted, derived class constructors can be omitted as well, and it is as if you had written:

    constructor(...args) {
        super(...args);
    }

    Sometimes, constructors do not interact with the this object. Instead, they create an object some other way, initialize it, and return it directly. If this is the case, it is not necessary to use super. Any constructor may directly return an object, independent of whether super constructors were ever invoked.
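    For example, this contrived sketch (NoAlloc is a made-up name) returns its own object and never calls super():

```javascript
class NoAlloc extends Array {
    constructor() {
        // No super() call; we build and return the instance ourselves.
        return { custom: true };
    }
}

var n = new NoAlloc();
n.custom;             // true
n instanceof NoAlloc; // false; we returned a plain object instead
```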

    Another strange side effect of having the basemost class allocate the this object is that sometimes the basemost class doesn’t know what kind of object to allocate. Suppose you were writing an object framework library, and you wanted a base class Collection, some subclasses of which were arrays, and some of which were maps. Then, by the time you ran the Collection constructor, you wouldn’t be able to tell which kind of object to make!

    Since we’re able to subclass builtins, when we run the builtin constructor, internally we already have to know about the prototype of the original class. Without it, we wouldn’t be able to create an object with the proper instance methods. To combat this strange Collection case, we’ve added syntax to expose that information to JavaScript code. We’ve added a new Meta Property, new.target, which corresponds to the constructor that was directly invoked with new. Calling a function with new sets new.target to be the called function, and calling super within that function forwards the new.target value.

    This is hard to understand, so I’ll just show you what I mean:

    class foo {
        constructor() {
            console.log(new.target.name);
        }
    }

    class bar extends foo {
        // This is included explicitly for clarity. It is not necessary
        // to get these results.
        constructor() {
            super();
        }
    }

    // foo directly invoked, so new.target is foo
    new foo(); // foo

    // 1) bar directly invoked, so new.target is bar
    // 2) bar invokes foo via super(), so new.target is still bar
    new bar(); // bar

    We’ve solved the problem with Collection described above, because the Collection constructor can just check new.target and use it to derive the class lineage, and determine which builtin to use. new.target is valid inside any function, and if the function is not invoked with new, it will be set to undefined.
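    As a quick sketch (probe is an invented name):

```javascript
function probe() {
    probe.last = new.target;  // record how we were invoked
}

probe();     // probe.last is undefined: no `new` involved
new probe(); // probe.last is probe itself
```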

    The Best of Both Worlds

    Hope you’ve survived this brain dump of new features. Thanks for hanging on. Let’s now take a moment to talk about whether they solve problems well. Many people have been quite outspoken about whether inheritance is even a good thing to codify in a language feature. You may believe that inheritance is never as good as composition for making objects, or that the cleanliness of new syntax isn’t worth the resulting lack of design flexibility, as compared with the old prototypal model. It’s undeniable that mixins have become a dominant idiom for creating objects that share code in an extensible way, and for good reason: They provide an easy way to mix unrelated code into the same object without having to understand how those two unrelated pieces should fit into the same inheritance structure.

    There are many vehemently held beliefs on this topic, but I think there are a few things worth noting. First, the addition of classes as a language feature does not make their use mandatory. Second, and equally important, the addition of classes as a language feature doesn’t mean they are always the best way to solve inheritance problems! In fact, some problems are better suited to modeling with prototypal inheritance. At the end of the day, classes are just another tool that you can use; not the only tool nor necessarily the best.

    If you want to continue to use mixins, you may wish that you could reach for classes that inherit from several things, so that you could just inherit from each mixin, and have everything be great. Unfortunately, it would be quite jarring to change the inheritance model now, so JavaScript does not implement multiple inheritance for classes. That being said, there is a hybrid solution to allow mixins inside a class-based framework. Consider the following functions, based on the well-known extend mixin idiom.

    function mix(...mixins) {
        class Mix {}
        // Programmatically add all the methods and accessors
        // of the mixins to class Mix.
        for (let mixin of mixins) {
            copyProperties(Mix, mixin);
            copyProperties(Mix.prototype, mixin.prototype);
        }
        return Mix;
    }

    function copyProperties(target, source) {
        for (let key of Reflect.ownKeys(source)) {
            if (key !== "constructor" && key !== "prototype" && key !== "name") {
                let desc = Object.getOwnPropertyDescriptor(source, key);
                Object.defineProperty(target, key, desc);
            }
        }
    }
    We can now use this function mix to create a composed superclass, without ever having to create an explicit inheritance relationship between the various mixins. Imagine writing a collaborative editing tool in which editing actions are logged, and their content needs to be serialized. You can use the mix function to write a class DistributedEdit:

    class DistributedEdit extends mix(Loggable, Serializable) {
        // Event methods
    }

    It’s the best of both worlds. It’s also easy to see how to extend this model to handle mixin classes that themselves have superclasses: we can simply pass the superclass to mix and have the return class extend it.
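    Here's a runnable sketch of the whole idiom, with made-up Loggable and Serializable mixins standing in for the real thing:

```javascript
function copyProperties(target, source) {
    for (let key of Reflect.ownKeys(source)) {
        if (key !== "constructor" && key !== "prototype" && key !== "name") {
            let desc = Object.getOwnPropertyDescriptor(source, key);
            Object.defineProperty(target, key, desc);
        }
    }
}

function mix(...mixins) {
    class Mix {}
    for (let mixin of mixins) {
        copyProperties(Mix, mixin);                     // static properties
        copyProperties(Mix.prototype, mixin.prototype); // instance properties
    }
    return Mix;
}

// Two toy mixins; their bodies are invented for illustration.
class Loggable {
    log(msg) { return "LOG: " + msg; }
}
class Serializable {
    serialize() { return JSON.stringify({ kind: this.constructor.name }); }
}

class DistributedEdit extends mix(Loggable, Serializable) {}

var edit = new DistributedEdit();
edit.log("saved"); // "LOG: saved"
```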

    Current Availability

    OK, we’ve talked a lot about subclassing builtins and all these new things, but can you use any of it now?

    Well, sort of. Of the major browser vendors, Chrome has shipped most of what we’ve talked about today. When in strict mode, you should be able to do just about everything we discussed, except subclass Array. Other builtin types will work, but Array poses some extra challenges, so it’s not surprising that it’s not finished yet. I am writing the implementation for Firefox, and aim to hit the same target (everything but Array) very soon. Check out bug 1141863 for more information, but it should land in the Nightly version of Firefox in a few weeks.

    Further off, Edge has support for super, but not for subclassing builtins, and Safari does not support any of this functionality.

    Transpilers are at a disadvantage here. While they are able to create classes, and to do super, there’s basically no way to fake subclassing builtins, because you need engine support to get instances of the base class back from builtin methods (think Array.prototype.splice).

    Phew! That was a long one. Next week, Jason Orendorff will be back to discuss the ES6 module system.

  9. Pointer Events now in Firefox Nightly

    This past February Pointer Events became a W3C Recommendation. In the intervening time Microsoft Open Tech and Mozilla have been working together to implement the specification. As consumers continue to expand the range of devices that are used to explore the web with different input mechanisms such as touch, pen or mouse, it is important to provide a unified API that developers can use within their applications. In this effort we have just reached a major milestone: Pointer events are now available in Firefox Nightly. We are very excited about this effort which represents a great deal of cooperation across several browser vendors in an effort to produce a high quality industry standard API with growing support.

    Be sure to download Firefox Nightly, give it a try, and send us your feedback on the implementation, either via the dev-platform mailing list or the group. If you have feedback on the specification, please send it to the W3C Pointer Events Working Group.

    The intent of this specification is to expand the open web to support a variety of input mechanisms beyond the mouse, while maintaining compatibility with most web-based content, which is built around mouse events. The API is designed to create one solution that will handle a variety of input devices, with a focus on pointing devices (mouse, pens, and touch). The pointer is defined in the spec as a hardware-agnostic device that can target a specific set of screen coordinates. Pointer events are intentionally similar to the current set of events associated with mouse events.

    In the current Nightly build, pointer events for mouse input are now supported. Additionally, if you’re using Windows, touch events can be enabled by setting two preferences. The first, Async Pan & Zoom (APZ), is enabled by setting the layers.async-pan-zoom.enabled Firefox configuration preference to true. The second, dom.w3c_touch_events.enabled, should be set to 1.

    This post covers some of the basic features of the new API.

    Using the Pointer API

    Before getting started with the Pointer API, it’s important to test whether your current browser supports the API. This can be done with code similar to this example:

    if (window.PointerEvent) {
        // Pointer events are supported.
    } else {
        // Fall back to mouse events.
    }

    The Pointer API provides support for pointerdown, pointerup, pointercancel, pointermove, pointerover, pointerout, gotpointercapture, and lostpointercapture events. Most of these should be familiar to you if you have coded event handling for mouse input before. For example, if you need a web app to move an image around a canvas when touched or clicked on, you can use the following code:

    function DragImage() {
        var imageGrabbed = false;
        var ctx;
        var cnv;
        var myImage;
        var x = 0;
        var y = 0;
        var rect;

        this.imgMoveEvent = function(evt) {
            if (imageGrabbed) {
                ctx.clearRect(0, 0, cnv.width, cnv.height);
                x = evt.clientX - rect.left;
                y = evt.clientY - rect.top;
                ctx.drawImage(myImage, x, y, 30, 30);
            }
        };

        this.imgDownEvent = function(evt) {
            //Could use canvas hit regions
            var xcl = evt.clientX - rect.left;
            var ycl = evt.clientY - rect.top;
            if (xcl > x && xcl < x + 30 && ycl > y && ycl < y + 30) {
                imageGrabbed = true;
            }
        };

        this.imgUpEvent = function(evt) {
            imageGrabbed = false;
        };

        this.initDragExample = function() {
            if (window.PointerEvent) {
                cnv = document.getElementById("myCanvas");
                ctx = cnv.getContext('2d');
                rect = cnv.getBoundingClientRect();
                x = 0;
                y = 0;
                myImage = new Image();
                myImage.onload = function() {
                    ctx.drawImage(myImage, 0, 0, 30, 30);
                };
                myImage.src = 'images/ff.jpg';
                cnv.addEventListener("pointermove", this.imgMoveEvent, false);
                cnv.addEventListener("pointerdown", this.imgDownEvent, false);
                cnv.addEventListener("pointerup", this.imgUpEvent, false);
            }
        };
    }

    PointerCapture events are used when there’s the possibility that a pointer device could leave the region of an existing element while tracking the event. For example, suppose you’re using a slider and your finger slips off the actual element –you’ll want to continue to track the pointer movements. You can set PointerCapture by using code similar to this:

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        if (this.setPointerCapture) {
            //specify the id of the pointer to capture
            this.setPointerCapture(e.pointerId);
        }
    }, false);

    This code guarantees that you still get pointermove events, even if you leave the region of myelement. If you do not set the PointerCapture, the pointer move events will not be called for the containing element once your pointer leaves its area. You can also release the capture by calling releasePointerCapture. The browser does this automatically when a pointerup or pointercancel event occurs.

    The Pointer Event interface

    The PointerEvent interface extends the MouseEvent interface and provides a few additional properties. These properties include pointerId, width, height, pressure, tiltX, tiltY, pointerType and isPrimary.

    The pointerId property provides a unique id for the pointer that initiates the event. The height and width properties provide respective values in CSS pixels for the contact geometry of the pointer. When the pointer happens to be a mouse, these values are set to 0. The pressure property contains a floating point value from 0 to 1 to indicate the amount of pressure applied by the pointer, where 0 is the lowest and 1 is the highest. For pointers that do not support pressure, the value is set to 0.5.

    The tiltY property contains the angle value between the X-Z planes of the pointer and the screen and ranges between -90 and 90 degrees. This property is most useful when using a stylus pen for pointer operations. A value of 0 degrees would indicate the pointer touched the surface at an exact perpendicular angle with respect to the Y-axis. Likewise the tiltX property contains the angle between the Y-Z planes.

    The pointerType property contains the device type represented by the pointer. Currently this value will be set to mouse, touch, pen, unknown, or an empty string.

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        switch (e.pointerType) {
            case "mouse":
                console.log("Mouse Pointer");
                break;
            case "pen":
                console.log("Pen Pointer");
                break;
            case "touch":
                console.log("Touch Pointer");
                break;
            default:
                console.log("Unknown Pointer");
        }
    }, false);

    The isPrimary property is either true or false and indicates whether the pointer is the primary pointer. A primary pointer property is required when supporting multiple touch points to provide multi-touch input and gesture support. Currently this property will be set to true for each specific pointer type (mouse, touch, pen) when the pointer first makes contact with an element that is tracking pointer events. If you are using one touch point and a mouse pointer simultaneously both will be set to true. The isPrimary property will be set to false for an event if a different pointer is already active with the same pointerType.

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        if (e.pointerType == "touch") {
            if (e.isPrimary) {
                //first touch
            } else {
                //handle multi-touch
            }
        }
    }, false);

    Handling multi-touch

    As stated earlier, touch pointers are currently implemented only for Firefox Nightly running on Windows with layers.async-pan-zoom.enabled and dom.w3c_touch_events.enabled preferences enabled. You can check to see whether multi-touch is supported with the following code.

    if (navigator.maxTouchPoints && navigator.maxTouchPoints > 1) {
        //supports multi-touch
    }

    Some browsers provide default functionality for certain touch interactions such as scrolling with a swipe gesture, or using a pinch gesture for zoom control. When these default actions are used, the events for the pointer will not be fired. To better support different applications, Firefox Nightly supports the CSS property touch-action. This property can be set to auto, none, pan-x, pan-y, and manipulation. Setting this property to auto will not change any default behaviors of the browser when using touch events. To disable all of the default behaviors and allow your content to handle all touch input using pointer events instead, you can set this value to none. Setting this value to either pan-x or pan-y invokes all pointer events when not panning/scrolling in a given direction. For instance, pan-x will invoke pointer event handlers when not panning/scrolling in the horizontal direction. When the property is set to manipulation, pointer events are fired if panning/scrolling or manipulating the zoom are not occurring.

    <div style="touch-action: pan-x">
        This element receives pointer events when not panning in the horizontal direction.
    </div>
    // Very simplistic pinch detector with little error detection,
    // using only x coordinates of a pointer event

    // Currently active pointers
    var myPointers = [];
    var lastDif = -1;

    function myPointerDown(evt) {
        myPointers.push(evt);
        console.log("current pointers down = " + myPointers.length);
    }

    //remove touch point from array when touch is released
    function myPointerUp(evt) {
        // Remove pointer from array
        for (var i = 0; i < myPointers.length; i++) {
            if (myPointers[i].pointerId == evt.pointerId) {
                myPointers.splice(i, 1);
                break;
            }
        }
        console.log("current pointers down = " + myPointers.length);
        if (myPointers.length < 2) {
            lastDif = -1;
        }
    }

    //check for a pinch using only the first two touchpoints
    function myPointerMove(evt) {
        // Update pointer position.
        for (var i = 0; i < myPointers.length; i++) {
            if (evt.pointerId == myPointers[i].pointerId) {
                myPointers[i] = evt;
                break;
            }
        }
        if (myPointers.length >= 2) {
            // Detect pinch gesture.
            var curDif = Math.abs(myPointers[0].clientX - myPointers[1].clientX);
            if (lastDif > 0) {
                if (curDif > lastDif) { console.log("Zoom in"); }
                if (curDif < lastDif) { console.log("Zoom out"); }
            }
            lastDif = curDif;
        }
    }

    You can test the example code here. For some great examples of the Pointer Events API in action, see Patrick H. Lauke’s collection of Touch and Pointer Events experiments on GitHub. Patrick is a member of the W3C Pointer Events Working Group, the W3C Touch Events Community Group, and Senior Accessibility Consultant for The Paciello Group.


    In this post we covered some of the basics that are currently implemented in Firefox Nightly. To track the progress of this API, check out the Gecko Touch Wiki page. You can also follow along on the main feature bug and be sure to report any issues you find while testing the new Pointer API.

  10. ES6 In Depth: let and const

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    The feature I’d like to talk about today is at once humble and startlingly ambitious.

    When Brendan Eich designed the first version of JavaScript back in 1995, he got plenty of things wrong, including things that have been part of the language ever since, like the Date object and objects automatically converting to NaN when you accidentally multiply them. However, the things he got right are stunningly important things, in hindsight: objects; prototypes; first-class functions with lexical scoping; mutability by default. The language has good bones. It was better than anyone realized at first.

    Still, Brendan made one particular design decision that bears on today’s article—a decision that I think can be fairly characterized as a mistake. It’s a little thing. A subtle thing. You might use the language for years and not even notice it. But it matters, because this mistake is in the side of the language that we now think of as “the good parts”.

    It has to do with variables.

    Problem #1: Blocks are not scopes

    The rule sounds so innocent: The scope of a var declared in a JS function is the whole body of that function. But there are two ways this can have groan-inducing consequences.

    One is that the scope of variables declared in blocks is not just the block. It’s the entire function.

    You may never have noticed this before. I’m afraid it’s one of those things you won’t be able to un-see. Let’s walk through a scenario where it leads to a tricky bug.

    Say you have some existing code that uses a variable named t:

    function runTowerExperiment(tower, startTime) {
      var t = startTime;
      tower.on("tick", function () {
        ... code that uses t ...
      });
      ... more code ...
    }

    Everything works great, so far. Now you want to add bowling ball speed measurements, so you add a little if-statement to the inner callback function.

    function runTowerExperiment(tower, startTime) {
      var t = startTime;
      tower.on("tick", function () {
        ... code that uses t ...
        if (bowlingBall.altitude() <= 0) {
          var t = readTachymeter();
        }
      });
      ... more code ...
    }

    Oh, dear. You’ve unwittingly added a second variable named t. Now, in the “code that uses t”, which was working fine before, t refers to the new inner variable t rather than the existing outer variable.

    The scope of a var in JavaScript is like the bucket-of-paint tool in Photoshop. It extends in both directions from the declaration, forwards and backwards, and it just keeps going until it reaches a function boundary. Since this variable t’s scope extends so far backwards, it has to be created as soon as we enter the function. This is called hoisting. I like to imagine the JS engine lifting each var and function to the top of the enclosing function with a tiny code crane.

    Now, hoisting has its good points. Without it, lots of perfectly cromulent techniques that work fine in the global scope wouldn’t work inside an IIFE. But in this case, hoisting is causing a nasty bug: all your calculations using t will start producing NaN. It’ll be hard to track down, too, especially if your code is larger than this toy example.
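    Here's that failure mode reduced to a sketch (demo is an invented name):

```javascript
function demo() {
  var result = t * 2;  // t is hoisted, but still undefined here
  var t = 5;
  return result;
}

demo();  // NaN, because undefined * 2 is NaN
```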

    Adding a new block of code caused a mysterious error in code before that block. Is it just me, or is that really weird? We don’t expect effects to precede causes.

    But this is a piece of cake compared to the second var problem.

    Problem #2: Variable oversharing in loops

    You can guess what happens when you run this code. It’s totally straightforward:

    var messages = ["Hi!", "I'm a web page!", "alert() is fun!"];
    for (var i = 0; i < messages.length; i++) {
      alert(messages[i]);
    }

    If you’ve been following this series, you know I like to use alert() for example code. Maybe you also know that alert() is a terrible API. It’s synchronous. So while an alert is visible, input events are not delivered. Your JS code—and in fact your whole UI—is basically paused until the user clicks OK.

    All of which makes alert() the wrong choice for almost anything you want to do in a web page. I use it because I think all those same things make alert() a great teaching tool.

    Still, I could be persuaded to give up all that clunkiness and bad behavior… if it means I can make a talking cat.

    var messages = ["Meow!", "I'm a talking cat!", "Callbacks are fun!"];
    for (var i = 0; i < messages.length; i++) {
      setTimeout(function () {
        cat.say(messages[i]);
      }, i * 1500);
    }

    See this code working incorrectly in action!

    But something’s wrong. Instead of saying all three messages in order, the cat says “undefined” three times.

    Can you spot the bug?

    (Photo of a caterpillar well camouflaged on the bark of a tree. Gujarat, India.)

    Photo credit: nevil saveri

    The problem here is that there is only one variable i. It’s shared by the loop itself and all three timeout callbacks. When the loop finishes running, the value of i is 3 (because messages.length is 3), and none of the callbacks have been called yet.

    So when the first timeout fires, and calls cat.say(messages[i]), it’s using messages[3]. Which of course is undefined.

    There are many ways to fix this (here’s one), but this is a second problem caused by the var scoping rules. It would be awfully nice never to have this kind of problem in the first place.
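    For reference, one classic pre-ES6 fix wraps the loop body in an IIFE so each iteration captures its own copy of the variable:

```javascript
var messages = ["Meow!", "I'm a talking cat!", "Callbacks are fun!"];
var callbacks = [];
for (var i = 0; i < messages.length; i++) {
  (function (j) {  // j is a fresh variable in each iteration
    callbacks.push(function () { return messages[j]; });
  })(i);
}

callbacks[0]();  // "Meow!", not undefined
```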

    let is the new var

    For the most part, design mistakes in JavaScript (other programming languages too, but especially JavaScript) can’t be fixed. Backwards compatibility means never changing the behavior of existing JS code on the Web. Even the standard committee has no power to, say, fix the weird quirks in JavaScript’s automatic semicolon insertion. Browser makers simply will not implement breaking changes, because that kind of change punishes their users.

    So about ten years ago, when Brendan Eich decided to fix this problem, there was really only one way to do it.

    He added a new keyword, let, that could be used to declare variables, just like var, but with better scoping rules.

    It looks like this:

    let t = readTachymeter();

    Or this:

    for (let i = 0; i < messages.length; i++) {

    let and var are different, so if you just do a global search-and-replace throughout your code, that could break parts of your code that (probably unintentionally) depend on the quirks of var. But for the most part, in new ES6 code, you should just stop using var and use let everywhere instead. Hence the slogan: “let is the new var”.

    What exactly are the differences between let and var? Glad you asked!

    • let variables are block-scoped. The scope of a variable declared with let is just the enclosing block, not the whole enclosing function.

      There’s still hoisting with let, but it’s not as indiscriminate. The runTowerExperiment example can be fixed simply by changing var to let. If you use let everywhere, you will never have that kind of bug.

    • Global let variables are not properties on the global object. That is, you won’t access them by writing window.variableName. Instead, they live in the scope of an invisible block that notionally encloses all JS code that runs in a web page.

    • Loops of the form for (let x...) create a fresh binding for x in each iteration.

      This is a very subtle difference. It means that if a for (let...) loop executes multiple times, and that loop contains a closure, as in our talking cat example, each closure will capture a different copy of the loop variable, rather than all closures capturing the same loop variable.

      So the talking cat example, too, can be fixed just by changing var to let.

      This applies to all three kinds of for loop: for-of, for-in, and the old-school C kind with semicolons.

    • It’s an error to try to use a let variable before its declaration is reached. The variable is uninitialized until control flow reaches the line of code where it’s declared. For example:

      function update() {
        console.log("current time:", t);  // ReferenceError
        ...
        let t = readTachymeter();
      }

      This rule is there to help you catch bugs. Instead of NaN results, you’ll get an exception on the line of code where the problem is.

      This period when the variable is in scope, but uninitialized, is called the temporal dead zone. I keep waiting for this inspired bit of jargon to make the leap to science fiction. Nothing yet.

      (Crunchy performance details: In most cases, you can tell whether the declaration has run or not just by looking at the code, so the JavaScript engine does not actually need to perform an extra check every time the variable is accessed to make sure it’s been initialized. However, inside a closure, it sometimes isn’t clear. In those cases the JavaScript engine will do a run-time check. That means let can be a touch slower than var.)

      (Crunchy alternate-universe scoping details: In some programming languages, the scope of a variable starts at the point of the declaration, instead of reaching backwards to cover the whole enclosing block. The standard committee considered using that kind of scoping rule for let. That way, the use of t that causes a ReferenceError here simply wouldn’t be in the scope of the later let t, so it wouldn’t refer to that variable at all. It could refer to a t in an enclosing scope. But this approach did not work well with closures or with function hoisting, so it was eventually abandoned.)

    • Redeclaring a variable with let is a SyntaxError.

      This rule, too, is there to help you detect trivial mistakes. Still, this is the difference that is most likely to cause you some issues if you attempt a global let-to-var conversion, because it applies even to global let variables.

      If you have several scripts that all declare the same global variable, you’d better keep using var for that. If you switch to let, whichever script loads second will fail with an error.

      Or use ES6 modules. But that’s a story for another day.

    (Crunchy syntax details: let is a reserved word in strict mode code. In non-strict-mode code, for the sake of backward compatibility, you can still declare variables, functions, and arguments named let—you can write var let = 'q';! Not that you would do that. And let let; is not allowed at all.)

    Apart from those differences, let and var are pretty much the same. They both support declaring multiple variables separated by commas, for example, and they both support destructuring.

    Note that class declarations behave like let, not var. If you load a script containing a class multiple times, the second time you’ll get an error for redeclaring the class.


    Right, one more thing!

    ES6 also introduces a third keyword that you can use alongside let: const.

    Variables declared with const are just like let, except that you can’t assign to them anywhere other than the point where they’re declared. Doing so throws a TypeError.

    const MAX_CAT_SIZE_KG = 3000; // 🙀
    MAX_CAT_SIZE_KG = 5000; // TypeError
    MAX_CAT_SIZE_KG++; // nice try, but still a TypeError

    Sensibly enough, you can’t declare a const without giving it a value.

    const theFairest;  // SyntaxError, you troublemaker

    Secret agent namespace

    “Namespaces are one honking great idea—let’s do more of those!” —Tim Peters, “The Zen of Python”

    Behind the scenes, nested scopes are one of the core concepts that programming languages are built around. It’s been this way since what, ALGOL? Something like 57 years. And it’s truer today than ever.

    Before ES3, JavaScript only had global scopes and function scopes. (Let’s ignore with statements.) ES3 introduced try-catch statements, which meant adding a new kind of scope, used only for the exception variable in catch blocks. ES5 added a scope used by strict eval(). ES6 adds block scopes, for-loop scopes, the new global let scope, module scopes, and additional scopes that are used when evaluating default values for arguments.
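    The catch-block scope that ES3 added is easy to observe directly (a small sketch of my own):

```javascript
// The exception variable introduced by catch exists only inside its block.
try {
  throw new Error("boom");
} catch (err) {
  console.log(err.message);  // → "boom"
}
console.log(typeof err);  // → "undefined": err is out of scope out here
```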

    All the extra scopes added from ES3 onward are necessary to make JavaScript’s procedural and object-oriented features work as smoothly, precisely, and intuitively as closures—and cooperate seamlessly with closures. Maybe you never noticed any of these scoping rules before today. If so, the language is doing its job.

    Can I use let and const now?

    Yes. To use them on the web, you’ll have to use an ES6 compiler such as Babel, Traceur, or TypeScript. (Babel and Traceur do not support the temporal dead zone yet.)

    io.js supports let and const, but only in strict-mode code. Node.js support is the same, but the --harmony option is also required.

    Brendan Eich implemented the first version of let in Firefox nine years ago. The feature was thoroughly redesigned during the standardization process. Shu-yu Guo is upgrading our implementation to match the standard, with code reviews by Jeff Walden and others.

    Well, we’re in the home stretch. The end of our epic tour of ES6 features is in sight. In two weeks, we’ll finish up with what’s probably the most eagerly awaited ES6 feature of them all. But first, next week we’ll have a post that extends our earlier coverage of a new feature that’s just super. So please join us as Eric Faust returns with a look at ES6 subclassing in depth.