Featured Articles

  1. Developer Edition 42: Wifi Debugging, Win10, Multiprocess Firefox, ReactJS tools, and more

    Firefox 42 has arrived! In this release, we put a lot of effort into the quality and polish of the Developer Edition browser. Although many of the bugs resolved this release don’t feature in the Release Notes, these small fixes make the tools faster and more stable. But there’s still a lot to report, including a major change to how Firefox works.

    Debugging over wifi

    Now, with remote website debugging, you can debug Firefox for Android devices over wifi – no USB cable or ADB needed.

    Multiprocess is enabled by default

    Multiprocess Firefox (aka E10s) has been enabled by default in Developer Edition. When it’s enabled, Firefox renders and executes web-related content in a single background content process. If you experience any issues with add-ons after updating to Developer Edition 42, try disabling incompatible add-ons or reverting to single-process mode using about:preferences.

    Windows 10 theme support

    The Developer Edition theme has a new look in Windows 10 to match the OS styling. Take a look:

    Screenshot of the dark Developer Edition theme in Windows 10

    Dark Developer Edition theme – Windows 10

    Screenshot of the light Developer Edition theme in Windows 10

    Light Developer Edition theme – Windows 10

    React Developer Tools support for Firefox

    If you’re developing with ReactJS, you may have noticed that the React project recently released a beta of their developer tools extension, including initial support for Firefox. While there are no official builds of the Firefox version yet, the source is available on GitHub.

    Other notable changes

    • Asynchronous call stacks now allow you to follow the code flow through setTimeout, DOM event handlers, and Promise handlers. (Bug 981514)
    • There is a new configurable Firefox OS simulator page in WebIDE. From here, you can change a simulator to run with a custom profile and screen size, using a list of presets from reference devices. (Bug 1156834)
    • CSS filter presets are now available in the inspector. (Bug 1153184)
    • The MDN tooltip now uses syntax highlighting for code samples. (Bug 1154469)
    • When using the “copy” keyboard shortcut in the inspector, the outerHTML of the selected node is now copied onto the clipboard. (Bug 968241)
    • New UX improvements have landed in the style editor’s search feature. (Bug 1159001, Bug 1153474)
    • CSS variables are now treated as normal declarations in the inspector. (Bug 1142206)
    • The CSS autocomplete popup now supports pressing ‘down’ to list all results in an empty value field. (Bug 1142206)

    Thanks to everyone who contributed time and energy to help the DevTools team in this release of Firefox Developer Edition 42! Each release takes a lot of effort from people writing patches, testing, documenting, reporting bugs, sending feedback, discussing features, etc. You can help set our priorities by sharing constructive feedback and letting us know what you’d like from Firefox Developer Tools.

    You can download Firefox Developer Edition now, for free.

  2. ES6 In Depth: The Future

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Last week’s article on ES6 modules wrapped up a 4-month survey of the major new features in ES6.

    This post covers over a dozen more new features that we never got around to talking about at length. Consider it a fun tour of all the closets and oddly-shaped upstairs rooms in this mansion of a language. Maybe a vast underground cavern or two. If you haven’t read the other parts of the series, take a look; this installment may not be the best place to start!

    (a picture of the Batcave, inexplicably)

    “On your left, you can see typed arrays…”

    One more quick warning: Many of the features below are not widely implemented yet.

    OK. Let’s get started.

    Features you may already be using

    ES6 standardizes some features that were previously in other standards, or widely implemented but nonstandard.

    • Typed arrays, ArrayBuffer, and DataView. These were all standardized as part of WebGL, but they’ve been used in many other APIs since then, including Canvas, the Web Audio API, and WebRTC. They’re handy whenever you need to process large volumes of raw binary or numeric data.

      For example, if the Canvas rendering context is missing a feature you want, and if you’re feeling sufficiently hardcore about it, you can just implement it yourself:

      var context = canvas.getContext("2d");
      var image = context.getImageData(0, 0, canvas.width, canvas.height);
      var pixels = image.data;  // a Uint8ClampedArray object
      // ... Your code here!
      // ... Hack on the raw bits in `pixels`
      // ... and then write them back to the canvas:
      context.putImageData(image, 0, 0);
      

      During standardization, typed arrays picked up methods like .slice(), .map(), and .filter().

    • Promises. Writing just one paragraph about promises is like eating just one potato chip. Never mind how hard it is; it barely even makes sense as a thing to do. What to say? Promises are the building blocks of asynchronous JS programming. They represent values that will become available later. So for example, when you call fetch(), instead of blocking, it returns a Promise object immediately. The fetch goes on in the background, and it’ll call you back when the response arrives. Promises are better than callbacks alone, because they chain really nicely, they’re first-class values with interesting operations on them, and you can get error handling right with a lot less boilerplate. They’re polyfillable in the browser. If you don’t already know all about promises, check out Jake Archibald’s very in-depth article.
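
      Here’s a minimal sketch of chaining, assuming fetch() is available (natively or via a polyfill); renderCats is a hypothetical helper:

      fetch("/cats.json")                                      // returns a Promise immediately
        .then(response => response.json())                     // .json() also returns a Promise
        .then(cats => renderCats(cats))                        // runs once the data has arrived
        .catch(err => console.error("failed to load:", err));  // one place for error handling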

    • Functions in block scope. You shouldn’t be using this one, but it’s possible you have been. Maybe unintentionally.

      In ES1-5, this code was technically illegal:

      if (temperature > 100) {
        function chill() {
          return fan.switchOn().then(obtainLemonade);
        }
        chill();
      }
      

      That function declaration inside an if block was supposedly forbidden. Function declarations were legal only at top level, or in the outermost block of a function.

      But it worked in all major browsers anyway. Sort of.

      Not compatibly. The details were a little different in each browser. But it sort of worked, and many web pages still use it.

      ES6 standardizes this, thank goodness. The function is hoisted to the top of the enclosing block.

      Unfortunately, Firefox and Safari don’t implement the new standard yet. So for now, use a function expression instead:

      if (temperature > 100) {
        var chill = function () {    
          return fan.switchOn().then(obtainLemonade);
        };
        chill();
      }
      

      The only reason block-scoped functions weren’t standardized years ago is that the backward-compatibility constraints were incredibly complicated. Nobody thought they could be solved. ES6 threads the needle by adding a very strange rule that only applies in non-strict code. I can’t explain it here. Trust me, use strict mode.

    • Function names. All the major JS engines have also long supported a nonstandard .name property on functions that have names. ES6 standardizes this, and makes it better by inferring a sensible .name for some functions that were heretofore considered nameless:

      > var lessThan = function (a, b) { return a < b; };
      > lessThan.name
          "lessThan"
      

      For other functions, such as callbacks that appear as arguments to .then methods, the spec still can’t figure out a name. fn.name is then the empty string.

    Nice things

    • Object.assign(target, ...sources). A new standard library function, similar to Underscore’s _.extend().
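
      For example, you can merge a caller’s options onto a set of defaults without mutating either object:

      var defaults = { color: "rebeccapurple", lineWidth: 2 };
      var options = Object.assign({}, defaults, { lineWidth: 4 });
      // options is { color: "rebeccapurple", lineWidth: 4 }; defaults is unchanged.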

    • The spread operator for function calls. This has nothing to do with Nutella, even though Nutella is a tasty spread. But it is a delicious feature, and I think you’ll like it.

      Back in May, we introduced rest parameters. They’re a way for functions to receive any number of arguments, a more civilized alternative to the random, clumsy arguments object.

      function log(...stuff) {  // stuff is the rest parameter.
        var rendered = stuff.map(renderStuff); // It's a real array.
        $("#log").add($(rendered));
      }
      

      What we didn’t say is that there’s matching syntax for passing any number of arguments to a function, a more civilized alternative to fn.apply():

      // log all the values from an array
      log(...myArray);
      

      Of course it works with any iterable object, so you can log all the stuff in a Set by writing log(...mySet).

      Unlike rest parameters, it makes sense to use the spread operator multiple times in a single argument list:

      // kicks are before trids
      log("Kicks:", ...kicks, "Trids:", ...trids);
      

      The spread operator is handy for flattening an array of arrays:

      > var smallArrays = [[], ["one"], ["two", "twos"]];
      > var oneBigArray = [].concat(...smallArrays);
      > oneBigArray
          ["one", "two", "twos"]
      

      ...but maybe this is one of those pressing needs that only I have. If so, I blame Haskell.

    • The spread operator for building arrays. Also back in May, we talked about “rest” patterns in destructuring. They’re a way to get any number of elements out of an array:

      > var [head, ...tail] = [1, 2, 3, 4];
      > head
          1
      > tail
          [2, 3, 4]
      

      Guess what! There’s matching syntax for getting any number of elements into an array:

      > var reunited = [head, ...tail];
      > reunited
          [1, 2, 3, 4]
      

      This follows all the same rules as the spread operator for function calls: you can use the spread operator many times in the same array, and so on.

    • Proper tail calls. This one is too amazing for me to try to explain here.

      To understand this feature, there’s no better place to start than page 1 of Structure and Interpretation of Computer Programs. If you enjoy it, just keep reading. Tail calls are explained in section 1.2.1, “Linear Recursion and Iteration”. The ES6 standard requires that implementations be “tail-recursive”, as the term is defined there.

      None of the major JS engines have implemented this yet. It’s hard to implement. But all in good time.

    Text

    • Unicode version upgrade. ES5 required implementations to support at least all the characters in Unicode version 3.0. ES6 implementations must support at least Unicode 5.1.0. You can now use characters from Linear B in your function names!

      Linear A is still a bit risky, both because it was not added to Unicode until version 7.0 and because it might be hard to maintain code written in a language that has never been deciphered.

      (Even in JavaScript engines that support the emoji added in Unicode 6.1, you can’t use 😺 as a variable name. For some reason, the Unicode Consortium decided not to classify it as an identifier character. 😾)

    • Long Unicode escape sequences. ES6, like earlier versions, supports four-digit Unicode escape sequences. They look like this: \u212A. These are great. You can use them in strings. Or if you’re feeling playful and your project has no code review policy whatsoever, you can use them in variable names. But then, for a character like U+13021 (𓀡), the Egyptian hieroglyph of a guy standing on his head, there's a slight problem. The number 13021 has five digits. Five is more than four.

      In ES5, you had to write two escapes, a UTF-16 surrogate pair. This felt exactly like living in the Dark Ages: cold, miserable, barbaric. ES6, like the dawn of the Italian Renaissance, brings tremendous change: you can now write \u{13021}.
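
      Here is the same hieroglyph written both ways:

      var oldWay = "\uD80C\uDC21";  // ES5: a UTF-16 surrogate pair for U+13021
      var newWay = "\u{13021}";     // ES6: one escape for the whole code point
      oldWay === newWay;            // true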

    • Better support for characters outside the BMP. The .toUpperCase() and .toLowerCase() methods now work on strings written in the Deseret alphabet!

      In the same vein, String.fromCodePoint(...codePoints) is a function very similar to the older String.fromCharCode(...codeUnits), but with support for code points beyond the BMP.

    • Unicode RegExps. ES6 regular expressions support a new flag, the u flag, which causes the regular expression to treat characters outside the BMP as single characters, not as two separate code units. For example, without the u, /./ only matches half of the character "😭". But /./u matches the whole thing.
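
      Concretely:

      // "😭" is one code point (U+1F62D) but two UTF-16 code units.
      "😭".match(/./)[0];   // "\uD83D" – just the first code unit
      "😭".match(/./u)[0];  // "😭" – the whole character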

      Putting the u flag on a RegExp also enables more Unicode-aware case-insensitive matching and long Unicode escape sequences. For the whole story, see Mathias Bynens’s very detailed post.

    • Sticky RegExps. A non-Unicode-related feature is the y flag, also known as the sticky flag. A sticky regular expression only looks for matches starting at the exact offset given by its .lastIndex property. If there isn’t a match there, rather than scanning forward in the string to find a match somewhere else, a sticky regexp immediately returns null.
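
      For example:

      var sticky = /\d+/y;
      sticky.lastIndex = 3;
      sticky.exec("abc123");  // ["123"] – the match starts exactly at index 3
      sticky.lastIndex = 0;
      sticky.exec("abc123");  // null – no digits at index 0, and no scanning forward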

    • An official internationalization spec. ES6 implementations that provide any internationalization features must support ECMA-402, the ECMAScript 2015 Internationalization API Specification. This separate standard specifies the Intl object. Firefox, Chrome, and IE11+ already fully support it. So does Node 0.12.

    Numbers

    • Binary and octal number literals. If you need a fancy way to write the number 8,675,309, and 0x845fed isn’t doing it for you, you can now write 0o41057755 (octal) or 0b100001000101111111101101 (binary).

      Number(str) also now recognizes strings in this format: Number("0b101010") returns 42.

      (Quick reminder: number.toString(base) and parseInt(string, base) are the original ways to convert numbers to and from arbitrary bases.)

    • New Number functions and constants. These are pretty niche. If you’re interested, you can browse the standard yourself, starting at Number.EPSILON.

      Maybe the most interesting new idea here is the “safe integer” range, from −(2^53 − 1) to +(2^53 − 1) inclusive. This special range of numbers has existed as long as JS. Every integer in this range can be represented exactly as a JS number, as can its nearest neighbors. In short, it’s the range where ++ and -- work as expected. Outside this range, odd integers aren’t representable as 64-bit floating-point numbers, so incrementing and decrementing the numbers that are representable (all of which are even) can’t give a correct result. In case this matters to your code, the standard now offers constants Number.MIN_SAFE_INTEGER and Number.MAX_SAFE_INTEGER, and a predicate Number.isSafeInteger(n).
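
      For example:

      Number.MAX_SAFE_INTEGER;                    // 9007199254740991, i.e. 2^53 - 1
      Number.isSafeInteger(Math.pow(2, 53) - 1);  // true
      Number.isSafeInteger(Math.pow(2, 53));      // false
      Math.pow(2, 53) + 1 === Math.pow(2, 53);    // true – the odd neighbor can't be represented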

    • New Math functions. ES6 adds hyperbolic trig functions and their inverses, Math.cbrt(x) for computing cube roots, Math.hypot(x, y) for computing the hypotenuse of a right triangle, Math.log2(x) and Math.log10(x) for computing logarithms in common bases, Math.clz32(x) to help compute integer logarithms, and a few others.

      Math.sign(x) gets the sign of a number.

      ES6 also adds Math.imul(x, y), which does signed multiplication modulo 2^32. This is a very strange thing to want... unless you are working around the fact that JS does not have 64-bit integers or big integers. In that case it’s very handy. This helps compilers. Emscripten uses this function to implement 64-bit integer multiplication in JS.

      Similarly Math.fround(x) is handy for compilers that need to support 32-bit floating-point numbers.
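
      A quick sampler of the new Math additions:

      Math.cbrt(27);      // 3
      Math.hypot(3, 4);   // 5
      Math.log2(1024);    // 10
      Math.sign(-42);     // -1
      Math.clz32(1);      // 31 – leading zero bits in the 32-bit representation
      Math.fround(0.1);   // 0.10000000149011612 – 0.1 rounded to 32-bit precision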

    The end

    Is this everything?

    Well, no. I didn’t even mention the object that’s the common prototype of all built-in iterators, the top-secret GeneratorFunction constructor, Object.is(v1, v2), how Symbol.species helps support subclassing builtins like Array and Promise, or how ES6 specifies details of how multiple globals work that have never been standardized before.

    I’m sure I missed a few things, too.

    But if you’ve been following along, you have a pretty good picture of where we’re going. You know you can use ES6 features today, and if you do, you’ll be opting in to a better language.

    A few days ago, Josh Mock remarked to me that he had just used eight different ES6 features in about 50 lines of code, without even really thinking about it. Modules, classes, argument defaults, Set, Map, template strings, arrow functions, and let. (He missed the for-of loop.)

    This has been my experience, too. The new features hang together very well. They end up affecting almost every line of JS code you write.

    Meanwhile, every JS engine is hurrying to implement and optimize the features we’ve been discussing for the past few months.

    Once we’re done, the language will be complete. We’ll never have to change anything again. I’ll have to find something else to work on.

    Just kidding. Proposals for ES7 are already picking up steam. Just to pick a few:

    • Exponentiation operator. 2 ** 8 will return 256. Implemented in Firefox Nightly.

    • Array.prototype.includes(value). Returns true if this array contains the given value. Implemented in Firefox Nightly; polyfillable.

    • SIMD. Exposes 128-bit SIMD instructions provided by modern CPUs. These instructions do an arithmetic operation on 2, or 4, or 8 adjacent array elements at a time. They can dramatically speed up a wide variety of algorithms for streaming audio and video, cryptography, games, image processing, and more. Very low-level, very powerful. Implemented in Firefox Nightly; polyfillable.

    • Async functions. We hinted at this feature in the post on generators. Async functions are like generators, but specialized for asynchronous programming. When you call a generator, it returns an iterator. When you call an async function, it returns a promise. Generators use the yield keyword to pause and produce a value; async functions instead use the await keyword to pause and wait for a promise.

      It’s hard to describe them in a few sentences, but async functions will be the landmark feature in ES7.
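
      Here’s a rough sketch of the proposed syntax (the function name and URL are just for illustration):

      // `await` pauses the async function until the promise settles,
      // then resumes it with the resolved value.
      async function getJSON(url) {
        var response = await fetch(url);
        return await response.json();
      }
      getJSON("/api/cats").then(cats => console.log(cats));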

    • Typed Objects. This is a follow-up to typed arrays. Typed arrays have elements that are typed. A typed object is simply an object whose properties are typed.

      // Create a new struct type. Every Point has two fields
      // named x and y.
      var Point = new TypedObject.StructType({
        x: TypedObject.int32,
        y: TypedObject.int32
      });
      
      // Now create an instance of that type.
      var p = new Point({x: 800, y: 600});
      console.log(p.x); // 800
      

      You would only do this for performance reasons. Like typed arrays, typed objects offer a few of the benefits of typing (compact memory usage and speed), but on a per-object, opt-in basis, in contrast to languages where everything is statically typed.

      They’re also interesting for JS as a compilation target.

      Implemented in Firefox Nightly.

    • Class and property decorators. Decorators are tags you add to a property, class, or method. An example shows what this is about:

      import debug from "jsdebug";
      
      class Person {
        @debug.logWhenCalled
        hasRoundHead(assert) {
          return this.head instanceof Spheroid;
        }
        ...
      }
      

      @debug.logWhenCalled is the decorator here. You can imagine what it does to the method.

      The proposal explains how this would work in detail, with many examples.

    There’s one more exciting development I have to mention. This one is not a language feature.

    TC39, the ECMAScript standard committee, is moving toward more frequent releases and a more public process. Six years passed between ES5 and ES6. The committee aims to ship ES7 just 12 months after ES6. Subsequent editions of the standard will be released on a 12-month cadence. Some of the features listed above will be ready in time. They will “catch the train” and become part of ES7. Those that aren’t finished in that timeframe can catch the next train.

    It’s been great fun sharing the staggering amount of good stuff in ES6. It’s also a pleasure to be able to say that a feature dump of this size will probably never happen again.

    Thanks for joining us for ES6 In Depth! I hope you enjoyed it. Keep in touch.

  3. Flying a drone in your browser with WebBluetooth

    There are tons of devices around us, and the number is only growing. And more and more of these devices come with connectivity. From suitcases to plants to eggs. This brings new challenges: how can we discover devices around us, and how can we interact with them?

    Currently device interactions are handled by separate apps running on mobile phones. But this does not solve the discoverability issue. I need to know which devices are around me before I know which app to install. When I’m standing in front of a meeting room I don’t care about which app to install, or even what the name or ID of the meeting room is. I just want to make a booking or see availability, and as fast as possible.

    Bluetooth

    Scott Jenson from Google has been thinking about discoverability for a while, and came up with the Physical Web project, whose premise is:

    Walk up and use anything

    The idea is that you use Bluetooth Smart, the low energy variant of Bluetooth, to broadcast URLs to the world. Your phone picks up the advertisement package, decodes it, and shows some information to the user. One click and the user is redirected to a web page with relevant content. This can be used for a variety of things:

    • A meeting room can broadcast a URL to its calendar for scheduling.
    • A movie poster can broadcast a URL to show viewing times and trailers.
    • A prescription medicine can broadcast a URL with information about the medication or how to refill it.
    • Look around you. Examples of other use cases are everywhere, waiting to be implemented.

    However, the material world is not a one-way street, and this presents a problem. Broadcasting a URL is great for informing me about things like movie times, but it does not allow me to interact more deeply with the device. If I want to fly a drone I don’t just want to discover that there’s a drone near me, I also want to interact with the drone straight away. We need to have a way for web pages to communicate back to devices.

    Enter the work of the Web Bluetooth W3C group, that includes representatives of Mozilla’s Bluetooth team, who are working on bringing bluetooth APIs to the browser. If the Physical Web allows us to walk up to any device and get the URL of a web app, then WebBluetooth allows the web app to connect to the device and talk back to it.

    At this point, there’s still a lot of work to be done. The bluetooth API is only exposed to certified content on Firefox OS, and thus is not currently accessible to ordinary web content. Until security issues have been cleared this will continue to be the case. A second issue is that Physical Web beacons broadcast a URL. How can a specific web resource know which specific device has broadcast the URL?

    As you can see, lots of work remains to be done, but this blog is called Mozilla Hacks for a reason. Let’s start hacking!

    Adding Physical Web support to Firefox OS

    Since most of the work around WebBluetooth has been done for Firefox OS, I’ve made it my weapon of choice. I want the process of discovering devices to be as painless and obvious as possible, and I figured the lockscreen would be the best possible place. Whenever bluetooth is enabled on your Firefox OS phone, a new notification pops up asking you to search for devices (tracking bug).

    Tap, tap, tap

    navigator.mozBluetooth.defaultAdapter.startLeScan([]).then(handle => {
      handle.ondevicefound = e => {
        console.log('Found', e.device, e.scanRecord);
      };
    
      setTimeout(() => {
        navigator.mozBluetooth.defaultAdapter.stopLeScan(handle)
      }, 5000);
    }, err => console.error(err));
    

    As you can see on the third line, we have a scanRecord. This is the advertisement package that the device broadcasts. It’s nothing more than a set of bytes, and you are free to declare your own protocol. For our purpose—broadcasting URLs over bluetooth—Google has already developed two ways of encoding: UriBeacon and EddyStone, both of which can be found in the wild today.

    Parsing the advertisement package is pretty straightforward; I wrote some code to parse UriBeacons, and a rough sketch of the idea follows below. Parsing the UriBeacon gives you a URL, which is often shortened because of the limited bytes available in the advertisement package – this makes for an uninformative UI:

    So what the hack (pun intended) is this device?
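
    As promised, here’s a rough sketch (not the author’s parser, and the expansion table is abridged) of how the UriBeacon URL field decodes: one scheme-prefix byte, followed by bytes that are either expansion codes or plain characters:

    var SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];
    var EXPANSIONS = ['.com/', '.org/', '.edu/', '.net/', '.info/', '.biz/', '.gov/'];
    
    function decodeUriField(bytes) {  // bytes: Uint8Array of the encoded URL field
      var url = SCHEMES[bytes[0]] || '';
      for (var i = 1; i < bytes.length; i++) {
        url += bytes[i] < EXPANSIONS.length ?
          EXPANSIONS[bytes[i]] : String.fromCharCode(bytes[i]);
      }
      return url;
    }
    
    decodeUriField(new Uint8Array([0x02, 0x6d, 0x6f, 0x7a, 0x69, 0x6c, 0x6c, 0x61, 0x00]));
    // "http://mozilla.com/"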

    To get some information about the web page behind the beacon we can do an AJAX request and parse the content of the page to enhance the information displayed on the lockscreen:

    function resolveURI(uri, ele) {
      var x = new XMLHttpRequest({ mozSystem: true });
      x.onload = e => {
        var h = document.createElement('html');
        h.innerHTML = x.responseText;
    
        // After following 301/302s, this contains the last resolved URL
        console.log('url is', x.responseURL);
    
        var titleEl = h.querySelector('title');
        var metaEl = h.querySelector('meta[name="description"]');
        var bodyEl = h.querySelector('body');
    
        if (titleEl && titleEl.textContent) {
          console.log('title is', titleEl.textContent);
        }
    
        if (metaEl && metaEl.content) {
          console.log('description is', metaEl.content);
        }
        else if (bodyEl && bodyEl.textContent) {
          console.log('description is', bodyEl.textContent);
        }
      };
      x.onerror = err => console.error('Loading', uri, 'failed', err);
      x.open('GET', uri);
      x.send();
    }
    

    This yields a nicer notification that actually describes the beacon.

    Much nicer

    A drone that doesn’t broadcast a URL

    Unfortunately not all BLE devices broadcast URLs at this point. All of this new technology is experimental and very cool, but not yet fully implemented. We’ve got high hopes that this will change in the near future. Because I still want to be able to fly my drone now, I added some code that transforms the data a drone broadcasts into a URL.

    The web application

    Now that we’ve solved the issue of discoverability, we need a way to control the drone from the browser. Since bluetooth access is not available for web content, we need to make some changes to Gecko, where the Firefox OS security model is implemented. If you are interested in the changes, here’s the commit. We also needed a sneaky hack to make sure the tab’s process is run with the right Linux permissions.

    With these changes in place, we open up navigator.mozBluetooth to all content, and run every tab in Firefox in a process that is part of the ‘bluetooth’ Linux group, ensuring access to the hardware. If you’re playing around with this build later, please note that with my “sneaky” hack implemented, you are now running a build where no security is guaranteed. Using a build hack like this, with security disabled, is fine for IoT experimentation, but is definitely not recommended as a production solution. When the Web Bluetooth spec is finalized, and official support lands in Gecko, proper security will be implemented.

    With the API in place, we can start writing the application. When you tap on the Physical Web notification on the lockscreen, we pass the device address in as a parameter. This is subject to change. For the ongoing discussion take a look at the Eddystone -> Web Bluetooth handoff.

    var address = 'aa:bb:cc:dd:ee'; // parsed from URL
    var counter = 0;
    navigator.mozBluetooth.defaultAdapter.startLeScan([]).then(handle => {
      handle.ondevicefound = e => {
        if (e.device.address !== address) return;
    
        navigator.mozBluetooth.defaultAdapter.stopLeScan(handle);
    
        // write some code to fly the drone
      };
    }, err => console.error(err));
    

    Now that we have a reference to the device address, we can set up a connection. The protocol we use to talk back and forth to the device is called GATT, the Generic Attribute Profile. The idea behind GATT is that a device can have multiple standard services. For example, a heart rate sensor can implement the battery service and the heart rate service. Because these services are standardized, a consuming application only needs to write the implementation logic once, and can talk to any heart rate monitor.

    Characteristics are aspects of a given service. For example, a heart rate service will implement heart rate measurement and heart rate max. Characteristics can be readable and/or writable, depending on how they are defined. The same goes for the drone: it has a service for flying the drone and characteristics that let you control the drone from your phone.

    Luckily Martin Dlouhý (as far as I can tell, he was the first) has already decoded the communication protocol for the Rolling Spider drone, so we can use his work and the new Bluetooth API to start flying…

    // Have a way of knowing when the connection drops
    e.device.gatt.onconnectionstatechanged = cse => {
      console.log('connectionStateChanged', cse);
    };
    // Receive events (battery change f.e.) from device
    e.device.gatt.oncharacteristicchanged = cce => {
      console.log('characteristicChanged', cce);
    };
    
    // Set up the connection
    e.device.gatt.connect().then(() => {
      return e.device.gatt.discoverServices();
    }).then(() => {
      // devices have services, and services have characteristics
      var services = e.device.gatt.services;
      console.log('services', services);
    
      // find the characteristic that handles flying the drone
      var c = services.reduce((curr, f) => curr.concat(f.characteristics), [])
        .filter(c => c.uuid === '9a66fa0b-0800-9191-11e4-012d1540cb8e')[0];
    
      // take off instruction!
      var buffer = new Uint8Array([0x04, counter++, 0x02, 0x00, 0x01, 0x00]);
      c.writeValue(buffer).then(() => {
        console.log('take off successful!');
      });
    });
    

    The Mozilla team in Taipei used this to create a demo application for Firefox OS, demonstrating the capabilities of the new API during the Mozilla Work Week in Whistler last June. With the API now available in the browser, we can take that work, host it as a web page, beef up the graphics a bit, and have a web site flying a drone!

    Such amaze

    Such amaze. Much drone.

    Conclusion

    It’s an exciting time for the Web! With more and more devices coming online we need a way to discover and interact with them without much hassle. The combination of Physical Web and WebBluetooth allows us to create frictionless experiences for users willing to interact with real-world appliances and new devices. Although we’re a long way off, we’re heading in the right direction. Google and Mozilla are actively developing the technology; I’ve got high hopes that everything in this blog post will be common knowledge in a year!

    If that’s not fast enough for you, you can play around with an experimental build of Firefox OS which enables everything seen in this post. This build runs on the Flame developer device. First, upgrade to nightly_v3 base image, then flash this build.

    Attributions

    Thanks to Tzu-Lin Huang and Sean Lee for building the initial drone code; the WebBluetooth team in Mozilla Taipei (especially Jocelyn Liu) for quick feedback and patches when I complained about the API; Chris Williams for putting the drone in my JSConf.us gift bag; Scott Jenson for answering my numerous questions about the Physical Web; and Telenor Digital for letting me play with drones for two weeks.

  4. ES6 In Depth: Modules

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    When I started on Mozilla’s JavaScript team back in 2007, the joke was that the length of a typical JavaScript program was one line.

    This was two years after Google Maps launched. Not long before that, the predominant use of JavaScript had been form validation, and sure enough, your average <input onchange=> handler would be… one line of code.

    Things have changed. JavaScript projects have grown to jaw-dropping sizes, and the community has developed tools for working at scale. One of the most basic things you need is a module system, a way to spread your work across multiple files and directories—but still make sure all your bits of code can access one another as needed—but also be able to load all that code efficiently. So naturally, JavaScript has a module system. Several, actually. There are also several package managers, tools for installing all that software and coping with high-level dependencies. You might think ES6, with its new module syntax, is a little late to the party.

    Well, today we’ll see whether ES6 adds anything to these existing systems, and whether or not future standards and tools will be able to build on it. But first, let’s just dive in and see what ES6 modules look like.

    Module basics

    An ES6 module is a file containing JS code. There’s no special module keyword; a module mostly reads just like a script. There are two differences.

    • ES6 modules are automatically strict-mode code, even if you don’t write "use strict"; in them.

    • You can use import and export in modules.

    Let’s talk about export first. Everything declared inside a module is local to the module, by default. If you want something declared in a module to be public, so that other modules can use it, you must export that feature. There are a few ways to do this. The simplest way is to add the export keyword.

    // kittydar.js - Find the locations of all the cats in an image.
    // (Heather Arthur wrote this library for real)
    // (but she didn't use modules, because it was 2013)
    
    export function detectCats(canvas, options) {
      var kittydar = new Kittydar(options);
      return kittydar.detectCats(canvas);
    }
    
    export class Kittydar {
      ... several methods doing image processing ...
    }
    
    // This helper function isn't exported.
    function resizeCanvas() {
      ...
    }
    ...
    

    You can export any top-level function, class, var, let, or const.

    And that’s really all you need to know to write a module! You don’t have to put everything in an IIFE or a callback. Just go ahead and declare everything you need. Since the code is a module, not a script, all the declarations will be scoped to that module, not globally visible across all scripts and modules. Export the declarations that make up the module’s public API, and you’re done.

    Apart from exports, the code in a module is pretty much just normal code. It can use globals like Object and Array. If your module runs in a web browser, it can use document and XMLHttpRequest.

    In a separate file, we can import and use the detectCats() function:

    // demo.js - Kittydar demo program
    
    import {detectCats} from "kittydar.js";
    
    function go() {
        var canvas = document.getElementById("catpix");
        var cats = detectCats(canvas);
        drawRectangles(canvas, cats);
    }
    

    To import multiple names from a module, you would write:

    import {detectCats, Kittydar} from "kittydar.js";
    

    When you run a module containing an import declaration, the modules it imports are loaded first, then each module body is executed in a depth-first traversal of the dependency graph, avoiding cycles by skipping anything already executed.
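
    Roughly, given these three (hypothetical) files, the bodies run in dependency order and the cycle is only executed once:

    // a.js
    import "b.js";
    console.log("a evaluated");
    
    // b.js
    import "a.js";   // cycle: a.js is already being evaluated, so this is skipped
    console.log("b evaluated");
    
    // main.js
    import "a.js";   // logs "b evaluated", then "a evaluated"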

    And those are the basics of modules. It’s really quite simple. ;-)

    Export lists

    Rather than tagging each exported feature, you can write out a single list of all the names you want to export, wrapped in curly braces:

    export {detectCats, Kittydar};
    
    // no `export` keyword required here
    function detectCats(canvas, options) { ... }
    class Kittydar { ... }
    

    An export list doesn’t have to be the first thing in the file; it can appear anywhere in a module file’s top-level scope. You can have multiple export lists, or mix export lists with other export declarations, as long as no name is exported more than once.

    Renaming imports and exports

    Once in a while, an imported name happens to collide with some other name that you also need to use. So ES6 lets you rename things when you import them:

    // suburbia.js
    
    // Both these modules export something named `flip`.
    // To import them both, we must rename at least one.
    import {flip as flipOmelet} from "eggs.js";
    import {flip as flipHouse} from "real-estate.js";
    ...
    

    Similarly, you can rename things when you export them. This is handy if you want to export the same value under two different names, which occasionally happens:

    // unlicensed_nuclear_accelerator.js - media streaming without drm
    // (not a real library, but maybe it should be)
    
    function v1() { ... }
    function v2() { ... }
    
    export {
      v1 as streamV1,
      v2 as streamV2,
      v2 as streamLatestVersion
    };
    

    Default exports

    The new standard is designed to interoperate with existing CommonJS and AMD modules. So suppose you have a Node project and you’ve done npm install lodash. Your ES6 code can import individual functions from Lodash:

    import {each, map} from "lodash";
    
    each([3, 2, 1], x => console.log(x));
    

    But perhaps you’ve gotten used to seeing _.each rather than each and you still want to write things that way. Or maybe you want to use _ as a function, since that’s a useful thing to do in Lodash.

    For that, you can use a slightly different syntax: import the module without curly braces.

    import _ from "lodash";
    

    This shorthand is equivalent to import {default as _} from "lodash";. All CommonJS and AMD modules are presented to ES6 as having a default export, which is the same thing that you would get if you asked require() for that module—that is, the exports object.

    ES6 modules were designed to let you export multiple things, but for existing CommonJS modules, the default export is all you get. For example, as of this writing, the famous colors package doesn’t have any special ES6 support as far as I can tell. It’s a collection of CommonJS modules, like most packages on npm. But you can import it right into your ES6 code.

    // ES6 equivalent of `var colors = require("colors/safe");`
    import colors from "colors/safe";
    

    If you’d like your own ES6 module to have a default export, that’s easy to do. There’s nothing magic about a default export; it’s just like any other export, except it’s named "default". You can use the renaming syntax we already talked about:

    let myObject = {
      field1: value1,
      field2: value2
    };
    export {myObject as default};
    

    Or better yet, use this shorthand:

    export default {
      field1: value1,
      field2: value2
    };
    

    The keywords export default can be followed by any value: a function, a class, an object literal, you name it.

    Module objects

    Sorry this is so long. But JavaScript is not alone: for some reason, module systems in all languages tend to have a ton of individually small, boring convenience features. Fortunately, there’s just one thing left. Well, two things.

    import * as cows from "cows";
    

    When you import *, what’s imported is a module namespace object. Its properties are the module’s exports. So if the “cows” module exports a function named moo(), then after importing “cows” this way, you can write: cows.moo().
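
    A tiny illustration (the module contents here are made up):

    // inside the "cows" module
    export function moo() { return "moo!"; }
    
    // in the importing module
    import * as cows from "cows";
    console.log(cows.moo());         // "moo!"
    console.log(Object.keys(cows));  // ["moo"] – the exports are its properties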

    Aggregating modules

    Sometimes the main module of a package is little more than importing all the package’s other modules and exporting them in a unified way. To simplify this kind of code, there’s an all-in-one import-and-export shorthand:

    // world-foods.js - good stuff from all over
    
    // import "sri-lanka" and re-export some of its exports
    export {Tea, Cinnamon} from "sri-lanka";
    
    // import "equatorial-guinea" and re-export some of its exports
    export {Coffee, Cocoa} from "equatorial-guinea";
    
    // import "singapore" and export ALL of its exports
    export * from "singapore";
    

    Each one of these export-from statements is similar to an import-from statement followed by an export. Unlike a real import, this doesn’t add the re-exported bindings to your scope. So don’t use this shorthand if you plan to write some code in world-foods.js that makes use of Tea. You’ll find that it’s not there.

    If any name exported by “singapore” happened to collide with the other exports, that would be an error, so use export * with care.

    Whew! We’re done with syntax! On to the interesting parts.

    What does import actually do?

    Would you believe… nothing?

    Oh, you’re not that gullible. Well, would you believe the standard mostly doesn’t say what import does? And that this is a good thing?

    ES6 leaves the details of module loading entirely up to the implementation. The rest of module execution is specified in detail.

    Roughly speaking, when you tell the JS engine to run a module, it has to behave as though these four steps are happening:

    1. Parsing: The implementation reads the source code of the module and checks for syntax errors.

    2. Loading: The implementation loads all imported modules (recursively). This is the part that isn’t standardized yet.

    3. Linking: For each newly loaded module, the implementation creates a module scope and fills it with all the bindings declared in that module, including things imported from other modules.

      This is the part where if you try to import {cake} from "paleo", but the “paleo” module doesn’t actually export anything named cake, you’ll get an error. And that’s too bad, because you were so close to actually running some JS code. And having cake!

    4. Run time: Finally, the implementation runs the statements in the body of each newly-loaded module. By this time, import processing is already finished, so when execution reaches a line of code where there’s an import declaration… nothing happens!

    See? I told you the answer was “nothing”. I don’t lie about programming languages.

    But now we get to the fun part of this system. There’s a cool trick. Because the system doesn’t specify how loading works, and because you can figure out all the dependencies ahead of time by looking at the import declarations in the source code, an implementation of ES6 is free to do all the work at compile time and bundle all your modules into a single file to ship them over the network! And tools like webpack actually do this.

    This is a big deal, because loading scripts over the network takes time, and every time you fetch one, you may find that it contains import declarations that require you to load dozens more. A naive loader would require a lot of network round trips. But with webpack, not only can you use ES6 with modules today, you get all the software engineering benefits with no run-time performance hit.

    A detailed specification of module loading in ES6 was originally planned—and built. One reason it isn’t in the final standard is that there wasn’t consensus on how to achieve this bundling feature. I hope someone figures it out, because as we’ll see, module loading really should be standardized. And bundling is too good to give up.

    Static vs. dynamic, or: rules and how to break them

    For a dynamic language, JavaScript has gotten itself a surprisingly static module system.

    • All flavors of import and export are allowed only at toplevel in a module. There are no conditional imports or exports, and you can’t use import in function scope.

    • All exported identifiers must be explicitly exported by name in the source code. You can’t programmatically loop through an array and export a bunch of names in a data-driven way.

    • Module objects are frozen. There is no way to hack a new feature into a module object, polyfill style.

    • All of a module’s dependencies must be loaded, parsed, and linked eagerly, before any module code runs. There’s no syntax for an import that can be loaded lazily, on demand.

    • There is no error recovery for import errors. An app may have hundreds of modules in it, and if anything fails to load or link, nothing runs. You can’t import in a try/catch block. (The upside here is that because the system is so static, webpack can detect those errors for you at compile time.)

    • There is no hook allowing a module to run some code before its dependencies load. This means that modules have no control over how their dependencies are loaded.

    The system is quite nice as long as your needs are static. But you can imagine needing a little hack sometimes, right?

    That’s why whatever module-loading system you use will have a programmatic API to go alongside ES6’s static import/export syntax. For example, webpack includes an API that you can use for “code splitting”, loading some bundles of modules lazily on demand. The same API can help you break most of the other rules listed above.

    The ES6 module syntax is very static, and that’s good—it’s paying off in the form of powerful compile-time tools. But the static syntax was designed to work alongside a rich dynamic, programmatic loader API.

    When can I use ES6 modules?

    To use modules today, you’ll need a compiler such as Traceur or Babel. Earlier in this series, Gastón I. Silva showed how to use Babel and Broccoli to compile ES6 code for the web; building on that article, Gastón has a working example with support for ES6 modules. This post by Axel Rauschmayer contains an example using Babel and webpack.

    The ES6 module system was designed mainly by Dave Herman and Sam Tobin-Hochstadt, who defended the static parts of the system against all comers (including me) through years of controversy. Jon Coppeard is implementing modules in Firefox. Additional work on a JavaScript Loader Standard is underway. Work to add something like <script type=module> to HTML is expected to follow.

    And that’s ES6.

    This has been so much fun that I don’t want it to end. Maybe we should do just one more episode. We could talk about odds and ends in the ES6 spec that weren’t big enough to merit their own article. And maybe a little bit about what the future holds. Please join me next week for the stunning conclusion of ES6 In Depth.

  5. Trainspotting: Firefox 40

    Trainspotting is a series of articles highlighting features in the latest version of Firefox. A new version of Firefox is shipped every six weeks – we at Mozilla call this pattern “release trains.”

    Firefox keeps on shippin' shippin' shippin' /
    Into the future…
    —Steve Miller Band, probably

    Like a big ol’ jet airliner, a new version of Firefox has been cleared for takeoff! Let’s take a look at some of the snazzy new things in store for both users and developers.

    For a full list of changes and additions, take a look at the Firefox 40 release notes.

    Developer Tools

    Find what you’re looking for in the Inspector, but don’t know where it is on the page? You can now scroll an element into view via the Markup View in the Inspector:

    scroll-into-view

    Sift through complex stylesheets more easily by filtering CSS rules:

    You can now toggle how colors are represented by Shift+clicking on them in the Rules view:

    color-rotate

    The Web Console will now warn of code that is unreachable because it comes after a return statement:

    unreachable

    The Developer Tools have also gained a powerful new set of Performance analysis tools, which are demonstrated along with all the other Firefox 40 Developer Tools changes in this in-depth blog post.

    Signed Add-ons

    extension-warning

    Malicious extensions are a growing problem in all browsers. Because Firefox add-ons have tremendous power, there needs to be a better way to protect users from malicious code running wild. Starting in Firefox 42, all Firefox add-ons will have to be signed before end users can install them. In Firefox 40, users are warned about unsigned extensions, but can opt to install them anyway. You can read more about why extension signing is needed, and also check out the overall plan for the roll-out of signed extensions.

    Event offsetX and offsetY

    Sometimes a good idea is a good idea, even if it takes 14 years! Firefox now supports the offsetX and offsetY properties for MouseEvents. This makes it much easier for code to track mouse events on an element within a page, without needing to know where in the page the element is. As always, perform capability checks to ensure that your code works across browsers:

    el.addEventListener('mousemove', function (e) {
      var x, y;
      if ('offsetX' in e) {
        x = e.offsetX;
        y = e.offsetY;
      } else {
        // fallback: subtract an offset for every offsetParent up the chain
        x = e.pageX - e.target.offsetLeft /* ... */;
        y = e.pageY - e.target.offsetTop /* ... */;
      }
      addGlitterMouseTrails(x, y);
    });
    

    But Wait, There’s More!

    Every new version of Firefox has dozens of bug fixes and changes to make browsing and web development better – I’ve only touched on a few. Finally, it’s well worth noting that 55 developers contributed their first code change to Firefox in this release, and 49 of them were brand new volunteers. Shipping would not be the same without these awesome contributions! Thank you!

    For all the rest of the details, check out the Developer Release Notes or even the full list of fixed bugs. Happy Browsing!

  6. ES6 In Depth: Subclassing

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Two weeks ago, we described the new classes system added in ES6 for handling trivial cases of object constructor creation. We showed how you can use it to write code that looks like this:

    class Circle {
        constructor(radius) {
            this.radius = radius;
            Circle.circlesMade++;
        };
    
        static draw(circle, canvas) {
            // Canvas drawing code
        };
    
        static get circlesMade() {
            return !this._count ? 0 : this._count;
        };
        static set circlesMade(val) {
            this._count = val;
        };
    
        area() {
            return Math.pow(this.radius, 2) * Math.PI;
        };
    
        get radius() {
            return this._radius;
        };
        set radius(radius) {
            if (!Number.isInteger(radius))
                throw new Error("Circle radius must be an integer.");
            this._radius = radius;
        };
    }
    

    Unfortunately, as some people pointed out, there wasn’t time to talk then about the rest of the power of classes in ES6. Like traditional class systems (C++ or Java, for example), ES6 allows for inheritance, where one class uses another as a base, and then extends it by adding more features of its own. Let’s take a closer look at the possibilities of this new feature.

    Before we get started talking about subclassing, it will be useful to spend a moment reviewing property inheritance and the dynamic prototype chain.

    JavaScript Inheritance

    When we create an object, we get the chance to put properties on it, but it also inherits the properties of its prototype objects. JavaScript programmers will be familiar with the existing Object.create API which allows us to do this easily:

    var proto = {
        value: 4,
        method() { return 14; }
    }
    
    var obj = Object.create(proto);
    
    obj.value; // 4
    obj.method(); // 14
    

    Further, when we add properties to obj with the same name as ones on proto, the properties on obj shadow those on proto.

    obj.value = 5;
    obj.value; // 5
    proto.value; // 4
    

    Basic Subclassing

    With this in mind, we can now see how we should hook up the prototype chains of the objects created by a class. Recall that when we create a class, we make a new function corresponding to the constructor method in the class definition which holds all the static methods. We also create an object to be the prototype property of that created function, which will hold all the instance methods. To create a new class which inherits all the static properties, we will have to make the new function object inherit from the function object of the superclass. Similarly, we will have to make the prototype object of the new function inherit from the prototype object of the superclass, for the instance methods.

    That description is very dense. Let’s try an example, showing how we could hook this up without new syntax, and then adding a trivial extension to make things more aesthetically pleasing.

    Continuing with our previous example, suppose we have a class Shape that we want to subclass:

    class Shape {
        get color() {
            return this._color;
        }
        set color(c) {
            this._color = parseColorAsRGB(c);
            this.markChanged();  // repaint the canvas later
        }
    }
    

    When we try to write code that does this, we have the same problem we had in the previous post with static properties: there’s no syntactic way to change the prototype of a function as you define it. While you can get around this with Object.setPrototypeOf, the approach is generally less performant and less optimizable for engines than having a way to create a function with the intended prototype.

    class Circle {
        // As above
    }
    
    // Hook up the instance properties
    Object.setPrototypeOf(Circle.prototype, Shape.prototype);
    
    // Hook up the static properties
    Object.setPrototypeOf(Circle, Shape);
    

    This is pretty ugly. We added the classes syntax so that we could encapsulate all of the logic about how the final object would look in one place, rather than having other “hooking things up” logic afterwards. Java, Ruby, and other object-oriented languages have a way of declaring that a class declaration is a subclass of another, and we should too. We use the keyword extends, so we can write:

    class Circle extends Shape {
        // As above
    }
    

    You can put any expression you want after extends, as long as it’s a valid constructor with a prototype property. For example:

    • Another class
    • Class-like functions from existing inheritance frameworks
    • A normal function
    • A variable that contains a function or class
    • A property access on an object
    • A function call

    You can even use null, if you don’t want instances to inherit from Object.prototype.

    Super Properties

    So we can make subclasses, and we can inherit properties, and sometimes our methods will even shadow (think override) the methods we inherit. But what if you want to circumvent this shadowing mechanic?

    Suppose we want to write a subclass of our Circle class that handles scaling the circle by some factor. To do this, we could write the following somewhat contrived class:

    class ScalableCircle extends Circle {
        get radius() {
            return this.scalingFactor * super.radius;
        }
        set radius(value) {
            throw new Error("ScalableCircle radius is constant. " +
                            "Set scaling factor instead.");
        }
    
        // Code to handle scalingFactor
    }
    

    Notice that the radius getter uses super.radius. This new super keyword allows us to bypass our own properties, and look for the property starting with our prototype, thus bypassing any shadowing we may have done.

    Super property accesses (super[expr] works too, by the way) can be used in any function defined with method definition syntax. While these functions can be pulled off of the original object, the accesses are tied to the object on which the method was first defined. This means that pulling the method off into a local variable will not change the behavior of the super access.

    var obj = {
        toString() {
            return "MyObject: " + super.toString();
        }
    }
    
    obj.toString(); // MyObject: [object Object]
    var a = obj.toString;
    a(); // MyObject: [object Object]
    

    Subclassing Builtins

    Another thing you might want to do is write extensions to the JavaScript language builtins. The builtin data structures add a huge amount of power to the language, and being able to create new types that leverage that power is amazingly useful, and was a foundational part of the design of subclassing. Suppose you want to write a versioned array. (I know. Trust me, I know.) You should be able to make changes and then commit them, or roll back to previously committed changes. One way to write a quick version of this is by subclassing Array.

    class VersionedArray extends Array {
        constructor() {
            super();
            this.history = [[]];
        }
        commit() {
            // Save changes to history.
            this.history.push(this.slice());
        }
        revert() {
            this.splice(0, this.length, ...this.history[this.history.length - 1]);
        }
    }
    

    Instances of VersionedArray retain a few important properties. They’re bonafide instances of Array, complete with map, filter, and sort. Array.isArray() will treat them like arrays, and they will even get the auto-updating array length property. Even further, functions that would return a new array (like Array.prototype.slice()) will return a VersionedArray!
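    As a quick, hypothetical usage sketch (assuming an engine that fully supports subclassing Array; see the availability notes below):

    var a = new VersionedArray();
    a.push(1, 2, 3);
    a.commit();
    a.push(4);
    a.revert();
    
    a.length;                                // 3 - back to the committed state
    Array.isArray(a);                        // true
    a.slice(0, 2) instanceof VersionedArray; // true
    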

    Derived Class Constructors

    You may have noticed the super() in the constructor method of that last example. What gives?

    In traditional class models, constructors are used to initialize any internal state for instances of the class. Each subclass in the chain is responsible for initializing the state associated with that specific subclass. We want to chain these calls, so that subclasses share the same initialization code with the class they are extending.

    To call a super constructor, we use the super keyword again, this time as if it were a function. This syntax is only valid inside constructor methods of classes that use extends. With super, we can rewrite our Shape class.

    class Shape {
        constructor(color) {
            this._color = color;
        }
    }
    
    class Circle extends Shape {
        constructor(color, radius) {
            super(color);
    
            this.radius = radius;
        }
    
        // As from above
    }
    

    In JavaScript, we tend to write constructors that operate on the this object, installing properties and initializing internal state. Normally, the this object is created when we invoke the constructor with new, as if with Object.create() on the constructor’s prototype property. However, some builtins have different internal object layouts. Arrays, for example, are laid out differently than regular objects in memory. Because we want to be able to subclass builtins, we let the basemost constructor allocate the this object. If it’s a builtin, we will get the object layout we want, and if it’s a normal constructor, we will get the default this object we expect.

    Probably the strangest consequence is the way this is bound in subclass constructors. Until we run the base constructor, and allow it to allocate the this object, we don’t have a this value. Consequently, all accesses to this in subclass constructors that occur before the call to the super constructor will result in a ReferenceError.
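    A minimal sketch of what that looks like (the class names are invented for illustration):

    class Base {
        constructor() {
            this.ready = true;
        }
    }
    
    class Derived extends Base {
        constructor() {
            this.early = true;   // ReferenceError: |this| hasn't been allocated yet
            super();             // the base constructor allocates |this|
            this.late = true;    // fine from here on
        }
    }
    
    new Derived(); // throws a ReferenceError at the first line of the constructor
    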

    As we saw in the last post, where you could omit the constructor method, derived class constructors can be omitted, and it is as if you had written:

    constructor(...args) {
        super(...args);
    }
    

    Sometimes, constructors do not interact with the this object. Instead, they create an object some other way, initialize it, and return it directly. If this is the case, it is not necessary to use super. Any constructor may directly return an object, independent of whether super constructors were ever invoked.
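    For example, here's a sketch of a derived constructor that never touches this and never calls super(), but returns a different object outright (the class name is invented for illustration):

    class DetachedShape extends Shape {
        constructor() {
            // No super() call and no use of |this|: we build and return
            // some other object directly, which any constructor may do.
            return { detached: true };
        }
    }
    
    var d = new DetachedShape();
    d.detached;                 // true
    d instanceof DetachedShape; // false - we returned a plain object instead
    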

    new.target

    Another strange side effect of having the basemost class allocate the this object is that sometimes the basemost class doesn’t know what kind of object to allocate. Suppose you were writing an object framework library, and you wanted a base class Collection, some subclasses of which were arrays, and some of which were maps. Then, by the time you ran the Collection constructor, you wouldn’t be able to tell which kind of object to make!

    Since we’re able to subclass builtins, when we run the builtin constructor, internally we already have to know about the prototype of the original class. Without it, we wouldn’t be able to create an object with the proper instance methods. To combat this strange Collection case, we’ve added syntax to expose that information to JavaScript code. We’ve added a new Meta Property new.target, which corresponds to the constructor that was directly invoked with new. Calling a function with new sets new.target to be the called function, and calling super within that function forwards the new.target value.

    This is hard to understand, so I’ll just show you what I mean:

    class foo {
        constructor() {
            return new.target;
        }
    }
    
    class bar extends foo {
        // This is included explicitly for clarity. It is not necessary
        // to get these results.
        constructor() {
            super();
        }
    }
    
    // foo directly invoked, so new.target is foo
    new foo(); // foo
    
    // 1) bar directly invoked, so new.target is bar
    // 2) bar invokes foo via super(), so new.target is still bar
    new bar(); // bar
    

    We’ve solved the problem with Collection described above, because the Collection constructor can just check new.target and use it to derive the class lineage, and determine which builtin to use.

    new.target is valid inside any function, and if the function is not invoked with new, it will be set to undefined.
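    Here's a simplified sketch of that idea. Rather than actually switching between array-like and map-like storage, it just records which subclass was instantiated; all the names are invented for illustration:

    class Collection {
        constructor() {
            // new.target is the constructor originally invoked with |new|,
            // even when we arrive here through super() from a subclass.
            if (new.target === Collection) {
                throw new Error("Collection is abstract; instantiate a subclass.");
            }
            this.kind = new.target.name;
        }
    }
    
    class ArrayCollection extends Collection {}
    class MapCollection extends Collection {}
    
    new ArrayCollection().kind; // "ArrayCollection"
    new MapCollection().kind;   // "MapCollection"
    // new Collection();        // Error: Collection is abstract
    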

    The Best of Both Worlds

    Hope you’ve survived this brain dump of new features. Thanks for hanging on. Let’s now take a moment to talk about whether they solve problems well. Many people have been quite outspoken about whether inheritance is even a good thing to codify in a language feature. You may believe that inheritance is never as good as composition for making objects, or that the cleanliness of new syntax isn’t worth the resulting lack of design flexibility, as compared with the old prototypal model. It’s undeniable that mixins have become a dominant idiom for creating objects that share code in an extensible way, and for good reason: They provide an easy way to share unrelated code to the same object without having to understand how those two unrelated pieces should fit into the same inheritance structure.

    There are many vehemently held beliefs on this topic, but I think there are a few things worth noting. First, the addition of classes as a language feature does not make their use mandatory. Second, and equally important, the addition of classes as a language feature doesn’t mean they are always the best way to solve inheritance problems! In fact, some problems are better suited to modeling with prototypal inheritance. At the end of the day, classes are just another tool that you can use; not the only tool nor necessarily the best.

    If you want to continue to use mixins, you may wish that you could reach for classes that inherit from several things, so that you could just inherit from each mixin, and have everything be great. Unfortunately, it would be quite jarring to change the inheritance model now, so JavaScript does not implement multiple inheritance for classes. That being said, there is a hybrid solution to allow mixins inside a class-based framework. Consider the following functions, based on the well-known extend mixin idiom.

    function mix(...mixins) {
        class Mix {}
    
        // Programmatically add all the methods and accessors
        // of the mixins to class Mix.
        for (let mixin of mixins) {
            copyProperties(Mix, mixin);
            copyProperties(Mix.prototype, mixin.prototype);
        }
        
        return Mix;
    }
    
    function copyProperties(target, source) {
        for (let key of Reflect.ownKeys(source)) {
            if (key !== "constructor" && key !== "prototype" && key !== "name") {
                let desc = Object.getOwnPropertyDescriptor(source, key);
                Object.defineProperty(target, key, desc);
            }
        }
    }
    

    We can now use this function mix to create a composed superclass, without ever having to create an explicit inheritance relationship between the various mixins. Imagine writing a collaborative editing tool in which editing actions are logged, and their content needs to be serialized. You can use the mix function to write a class DistributedEdit:

    class DistributedEdit extends mix(Loggable, Serializable) {
        // Event methods
    }
    

    It’s the best of both worlds. It’s also easy to see how to extend this model to handle mixin classes that themselves have superclasses: we can simply pass the superclass to mix and have the return class extend it.
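    For instance, here's one way that extension might look. The mixWithBase name and the EditAction base class are invented for illustration; it reuses copyProperties from above:

    function mixWithBase(Base, ...mixins) {
        // Like mix(), but the composed class also extends a real superclass.
        class Mix extends Base {}
    
        for (let mixin of mixins) {
            copyProperties(Mix, mixin);
            copyProperties(Mix.prototype, mixin.prototype);
        }
    
        return Mix;
    }
    
    // class DistributedEdit extends mixWithBase(EditAction, Loggable, Serializable) { ... }
    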

    Current Availability

    OK, we’ve talked a lot about subclassing builtins and all these new things, but can you use any of it now?

    Well, sort of. Of the major browser vendors, Chrome has shipped most of what we’ve talked about today. When in strict mode, you should be able to do just about everything we discussed, except subclass Array. Other builtin types will work, but Array poses some extra challenges, so it’s not surprising that it’s not finished yet. I am writing the implementation for Firefox, and aim to hit the same target (everything but Array) very soon. Check out bug 1141863 for more information; it should land in the Nightly version of Firefox in a few weeks.

    Further off, Edge has support for super, but not for subclassing builtins, and Safari does not support any of this functionality.

    Transpilers are at a disadvantage here. While they are able to create classes, and to do super, there’s basically no way to fake subclassing builtins, because you need engine support to get instances of the base class back from builtin methods (think Array.prototype.splice).

    Phew! That was a long one. Next week, Jason Orendorff will be back to discuss the ES6 module system.

  7. Pointer Events now in Firefox Nightly

    This past February Pointer Events became a W3C Recommendation. In the intervening time Microsoft Open Tech and Mozilla have been working together to implement the specification. As consumers continue to expand the range of devices that are used to explore the web with different input mechanisms such as touch, pen or mouse, it is important to provide a unified API that developers can use within their applications. In this effort we have just reached a major milestone: Pointer events are now available in Firefox Nightly. We are very excited about this effort which represents a great deal of cooperation across several browser vendors in an effort to produce a high quality industry standard API with growing support.

    Be sure to download Firefox Nightly, give it a try, and send us your feedback on the implementation using either the dev-platform mailing list or the mozilla.dev.platform group. If you have feedback on the specification, please send it to public-pointer-events@w3.org.

    The intent of this specification is to expand the open web to support a variety of input mechanisms beyond the mouse, while maintaining compatibility with most web-based content, which is built around mouse events. The API is designed to create one solution that will handle a variety of input devices, with a focus on pointing devices (mouse, pens, and touch). The pointer is defined in the spec as a hardware-agnostic device that can target a specific set of screen coordinates. Pointer events are intentionally similar to the current set of events associated with mouse events.

    In the current Nightly build, pointer events for mouse input are supported out of the box. Additionally, if you’re using Windows, you can enable touch support by setting two preferences: turn on Async Pan & Zoom (APZ) by setting layers.async-pan-zoom.enabled to true, and enable touch events by setting dom.w3c_touch_events.enabled to 1.

    This post covers some of the basic features of the new API.

    Using the Pointer API

    Before getting started with the Pointer API, it’s important to test whether your current browser supports the API. This can be done with code similar to this example:

    if (window.PointerEvent) {
      // use pointer events
    } else {
      // use mouse events
    }

    The Pointer API provides support for pointerdown, pointerup, pointercancel, pointermove, pointerover, pointerout, gotpointercapture, and lostpointercapture events. Most of these should be familiar to you if you have coded event handling for mouse input before. For example, if you need a web app to move an image around a canvas when touched or clicked on, you can use the following code:

    function DragImage() {
        var imageGrabbed = false;
        var ctx;
        var cnv;
        var myImage;
        var x = 0;
        var y = 0;
        var rect;
        this.imgMoveEvent = function(evt) {
            if (imageGrabbed) {
                ctx.clearRect(0, 0, cnv.width, cnv.height);
                x = evt.clientX - rect.left;
                y = evt.clientY - rect.top;
                ctx.drawImage(myImage, x, y, 30, 30);
     
            }
        }
        this.imgDownEvent = function(evt) {
            //Could use canvas hit regions
            var xcl = evt.clientX - rect.left;
            var ycl = evt.clientY - rect.top;
            if (xcl > x && xcl < x + 30 && ycl > y && ycl < y + 30) {
                imageGrabbed = true;
            }
        }
        this.imgUpEvent = function(evt) {
            imageGrabbed = false;
        }
        this.initDragExample = function() {
            if (window.PointerEvent) {
                cnv = document.getElementById("myCanvas");
                ctx = cnv.getContext('2d');
                rect = cnv.getBoundingClientRect();
                x = 0;
                y = 0;
                myImage = new Image();
                myImage.onload = function() {
                    ctx.drawImage(myImage, 0, 0, 30, 30);
                };
                myImage.src = 'images/ff.jpg';
                cnv.addEventListener("pointermove", this.imgMoveEvent, false);
                cnv.addEventListener("pointerdown", this.imgDownEvent, false);
                cnv.addEventListener("pointerup", this.imgUpEvent, false);
            }
        }
    }

    PointerCapture events are used when there’s the possibility that a pointer device could leave the region of an existing element while tracking the event. For example, suppose you’re using a slider and your finger slips off the actual element – you’ll want to continue to track the pointer movements. You can set PointerCapture by using code similar to this:

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        if (this.setPointerCapture) {
            // specify the id of the pointer to capture
            this.setPointerCapture(e.pointerId);
        }
    }, false);

    This code guarantees that you still get pointermove events, even if you leave the region of myelement. If you do not set the PointerCapture, the pointer move events will not be called for the containing element once your pointer leaves its area. You can also release the capture by calling releasePointerCapture. The browser does this automatically when a pointerup or pointercancel event occurs.
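    If you ever need to stop tracking a pointer before the browser releases it for you, releasePointerCapture takes the same pointer id. A small sketch, reusing myElement from above with an arbitrary cut-off condition purely for illustration:

    myElement.addEventListener("pointermove", function(e) {
        // Stop receiving events for this pointer once it crosses x = 500,
        // rather than waiting for pointerup or pointercancel.
        if (e.clientX > 500 && this.releasePointerCapture) {
            this.releasePointerCapture(e.pointerId);
        }
    }, false);
    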

    The Pointer Event interface

    The PointerEvent interface extends the MouseEvent interface and provides a few additional properties. These properties include pointerId, width, height, pressure, tiltX, tiltY, pointerType and isPrimary.

    The pointerId property provides a unique id for the pointer that initiates the event. The height and width properties provide respective values in CSS pixels for the contact geometry of the pointer. When the pointer happens to be a mouse, these values are set to 0. The pressure property contains a floating point value from 0 to 1 to indicate the amount of pressure applied by the pointer, where 0 is the lowest and 1 is the highest. For pointers that do not support pressure, the value is set to 0.5.
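    For instance, a pointermove handler could log these values directly (reusing the myElement element from the capture example above):

    myElement.addEventListener("pointermove", function(e) {
        console.log("pointer #" + e.pointerId +
                    " contact: " + e.width + "x" + e.height +
                    " pressure: " + e.pressure);
    }, false);
    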

    The tiltX and tiltY properties are most useful when the pointer is a stylus pen. Both are angles between -90 and 90 degrees: tiltX is measured between the Y-Z plane and the plane containing the pen axis and the Y axis, and tiltY is measured between the X-Z plane and the plane containing the pen axis and the X axis. A value of 0 for both means the pen is perpendicular to the surface.

    The pointerType property contains the device type represented by the pointer. Currently this value will be mouse, pen, or touch, or an empty string if the device type cannot be determined.

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        switch(e.pointerType) {
            case "mouse":
                console.log("Mouse Pointer");
                break;
            case "pen":
                console.log("Pen Pointer");
                break;
            case "touch":
                console.log("Touch Pointer");
                break;
            default:
                console.log("Unknown Pointer");
        }
    }, false);

    The isPrimary property is either true or false and indicates whether the pointer is the primary pointer. A primary pointer property is required when supporting multiple touch points to provide multi-touch input and gesture support. Currently this property will be set to true for each specific pointer type (mouse, touch, pen) when the pointer first makes contact with an element that is tracking pointer events. If you are using one touch point and a mouse pointer simultaneously both will be set to true. The isPrimary property will be set to false for an event if a different pointer is already active with the same pointerType.

    var myElement = document.getElementById("myelement");
    myElement.addEventListener("pointerdown", function(e) {
        if (e.pointerType == "touch") {
            if (e.isPrimary) {
                // first touch
            } else {
                // handle multi-touch
            }
        }
    }, false);

    Handling multi-touch

    As stated earlier, touch pointers are currently implemented only for Firefox Nightly running on Windows with layers.async-pan-zoom.enabled and dom.w3c_touch_events.enabled preferences enabled. You can check to see whether multi-touch is supported with the following code.

    if (navigator.maxTouchPoints && navigator.maxTouchPoints > 1) {
        // supports multi-touch
    }

    Some browsers provide default functionality for certain touch interactions such as scrolling with a swipe gesture, or using a pinch gesture for zoom control. When these default actions are used, the events for the pointer will not be fired. To better support different applications, Firefox Nightly supports the CSS property touch-action. This property can be set to auto, none, pan-x, pan-y, and manipulation. Setting this property to auto will not change any default behaviors of the browser when using touch events. To disable all of the default behaviors and allow your content to handle all touch input using pointer events instead, you can set this value to none. Setting this value to either pan-x or pan-y invokes all pointer events when not panning/scrolling in a given direction. For instance, pan-x will invoke pointer event handlers when not panning/scrolling in the horizontal direction. When the property is set to manipulation, pointer events are fired if panning/scrolling or manipulating the zoom are not occurring.
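    Since the drag example above works with a canvas, one simple way to take over all touch handling on it is to set touch-action from script (a sketch; you could equally set it in CSS):

    var cnv = document.getElementById("myCanvas");
    // Turn off the browser's default panning/zooming on the canvas so that
    // every touch is delivered to our pointer event handlers instead.
    cnv.style.touchAction = "none";
    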

    For example, an element styled with touch-action: pan-x receives pointer events only when the user is not panning in the horizontal direction. The following code implements a very simple pinch detector on top of pointer events:
    // Very Simplistic pinch detector with little error detection,
    // using only x coordinates of a pointer event
     
    // Currently active pointers
    var myPointers = [];
    var lastDif = -1;
     
    function myPointerDown(evt) {
        myPointers.push(evt);
        this.setPointerCapture(evt.pointerId);
        console.log("current pointers down = " + myPointers.length);
    }
     
    //remove touch point from array when touch is released
    function myPointerUp(evt) {
        // Remove pointer from array
        for (var i = 0; i < myPointers.length; i++) {
            if (myPointers[i].pointerId == evt.pointerId) {
                myPointers.splice(i, 1);
                break;
            }
        }
        console.log("current pointers down = " + myPointers.length);
     
        if (myPointers.length < 2) {
            lastDif = -1;
        }
    }
     
    //check for a pinch using only the first two touchpoints
    function myPointerMove(evt) {
        // Update pointer position.
        for (var i = 0; i < myPointers.length; i++) {
            if (evt.pointerId == myPointers[i].pointerId) {
                myPointers[i] = evt;
                break;
            }
        }
     
        if (myPointers.length >= 2) {
            // Detect pinch gesture.
            var curDif = Math.abs(myPointers[0].clientX - myPointers[1].clientX);
            if (lastDif > 0) {
                if (curDif > lastDif) { console.log("Zoom in"); }
                if (curDif < lastDif) { console.log("Zoom out"); }
            }
            lastDif = curDif;
        }
    }

    You can test the example code here. For some great examples of the Pointer Events API in action, see Patrick H. Lauke’s collection of Touch and Pointer Events experiments on GitHub. Patrick is a member of the W3C Pointer Events Working Group, the W3C Touch Events Community Group, and Senior Accessibility Consultant for The Paciello Group.

    Conclusion

    In this post we covered some of the basics that are currently implemented in Firefox Nightly. To track the progress of this API, check out the Gecko Touch Wiki page. You can also follow along on the main feature bug and be sure to report any issues you find while testing the new Pointer API.

  8. ES6 In Depth: let and const

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    The feature I’d like to talk about today is at once humble and startlingly ambitious.

    When Brendan Eich designed the first version of JavaScript back in 1995, he got plenty of things wrong, including things that have been part of the language ever since, like the Date object and objects automatically converting to NaN when you accidentally multiply them. However, the things he got right are stunningly important things, in hindsight: objects; prototypes; first-class functions with lexical scoping; mutability by default. The language has good bones. It was better than anyone realized at first.

    Still, Brendan made one particular design decision that bears on today’s article—a decision that I think can be fairly characterized as a mistake. It’s a little thing. A subtle thing. You might use the language for years and not even notice it. But it matters, because this mistake is in the side of the language that we now think of as “the good parts”.

    It has to do with variables.

    Problem #1: Blocks are not scopes

    The rule sounds so innocent: The scope of a var declared in a JS function is the whole body of that function. But there are two ways this can have groan-inducing consequences.

    One is that the scope of variables declared in blocks is not just the block. It’s the entire function.

    You may never have noticed this before. I’m afraid it’s one of those things you won’t be able to un-see. Let’s walk through a scenario where it leads to a tricky bug.

    Say you have some existing code that uses a variable named t:

    function runTowerExperiment(tower, startTime) {
      var t = startTime;
    
      tower.on("tick", function () {
        ... code that uses t ...
      });
      ... more code ...
    }
    

    Everything works great, so far. Now you want to add bowling ball speed measurements, so you add a little if-statement to the inner callback function.

    function runTowerExperiment(tower, startTime) {
      var t = startTime;
    
      tower.on("tick", function () {
        ... code that uses t ...
        if (bowlingBall.altitude() <= 0) {
          var t = readTachymeter();
          ...
        }
      });
      ... more code ...
    }
    

    Oh, dear. You’ve unwittingly added a second variable named t. Now, in the “code that uses t”, which was working fine before, t refers to the new inner variable t rather than the existing outer variable.

    The scope of a var in JavaScript is like the bucket-of-paint tool in Photoshop. It extends in both directions from the declaration, forwards and backwards, and it just keeps going until it reaches a function boundary. Since this variable t’s scope extends so far backwards, it has to be created as soon as we enter the function. This is called hoisting. I like to imagine the JS engine lifting each var and function to the top of the enclosing function with a tiny code crane.

    Now, hoisting has its good points. Without it, lots of perfectly cromulent techniques that work fine in the global scope wouldn’t work inside an IIFE. But in this case, hoisting is causing a nasty bug: all your calculations using t will start producing NaN. It’ll be hard to track down, too, especially if your code is larger than this toy example.

    Adding a new block of code caused a mysterious error in code before that block. Is it just me, or is that really weird? We don’t expect effects to precede causes.

    But this is a piece of cake compared to the second var problem.

    Problem #2: Variable oversharing in loops

    You can guess what happens when you run this code. It’s totally straightforward:

    var messages = ["Hi!", "I'm a web page!", "alert() is fun!"];
    
    for (var i = 0; i < messages.length; i++) {
      alert(messages[i]);
    }
    

    If you’ve been following this series, you know I like to use alert() for example code. Maybe you also know that alert() is a terrible API. It’s synchronous. So while an alert is visible, input events are not delivered. Your JS code—and in fact your whole UI—is basically paused until the user clicks OK.

    All of which makes alert() the wrong choice for almost anything you want to do in a web page. I use it because I think all those same things make alert() a great teaching tool.

    Still, I could be persuaded to give up all that clunkiness and bad behavior… if it means I can make a talking cat.

    var messages = ["Meow!", "I'm a talking cat!", "Callbacks are fun!"];
    
    for (var i = 0; i < messages.length; i++) {
      setTimeout(function () {
        cat.say(messages[i]);
      }, i * 1500);
    }
    

    See this code working incorrectly in action!

    But something’s wrong. Instead of saying all three messages in order, the cat says “undefined” three times.

    Can you spot the bug?

    (Photo of a caterpillar well camouflaged on the bark of a tree. Gujarat, India.)

    Photo credit: nevil saveri

    The problem here is that there is only one variable i. It’s shared by the loop itself and all three timeout callbacks. When the loop finishes running, the value of i is 3 (because messages.length is 3), and none of the callbacks have been called yet.

    So when the first timeout fires, and calls cat.say(messages[i]), it’s using messages[3]. Which of course is undefined.

    There are many ways to fix this (here’s one), but this is a second problem caused by the var scoping rules. It would be awfully nice never to have this kind of problem in the first place.
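    For example, one classic pre-ES6 fix is to copy the loop variable into a new function scope on each iteration with an immediately-invoked function (reusing the hypothetical cat from above):

    var messages = ["Meow!", "I'm a talking cat!", "Callbacks are fun!"];
    
    for (var i = 0; i < messages.length; i++) {
      (function (msg) {
        setTimeout(function () {
          cat.say(msg);  // each callback closes over its own msg
        }, i * 1500);
      })(messages[i]);
    }
    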

    let is the new var

    For the most part, design mistakes in JavaScript (other programming languages too, but especially JavaScript) can’t be fixed. Backwards compatibility means never changing the behavior of existing JS code on the Web. Even the standard committee has no power to, say, fix the weird quirks in JavaScript’s automatic semicolon insertion. Browser makers simply will not implement breaking changes, because that kind of change punishes their users.

    So about ten years ago, when Brendan Eich decided to fix this problem, there was really only one way to do it.

    He added a new keyword, let, that could be used to declare variables, just like var, but with better scoping rules.

    It looks like this:

    let t = readTachymeter();
    

    Or this:

    for (let i = 0; i < messages.length; i++) {
      ...
    }
    

    let and var are different, so if you just do a global search-and-replace throughout your code, that could break parts of your code that (probably unintentionally) depend on the quirks of var. But for the most part, in new ES6 code, you should just stop using var and use let everywhere instead. Hence the slogan: “let is the new var”.

    What exactly are the differences between let and var? Glad you asked!

    • let variables are block-scoped. The scope of a variable declared with let is just the enclosing block, not the whole enclosing function.

      There’s still hoisting with let, but it’s not as indiscriminate. The runTowerExperiment example can be fixed simply by changing var to let. If you use let everywhere, you will never have that kind of bug.

    • Global let variables are not properties on the global object. That is, you won’t access them by writing window.variableName. Instead, they live in the scope of an invisible block that notionally encloses all JS code that runs in a web page.

    • Loops of the form for (let x...) create a fresh binding for x in each iteration.

      This is a very subtle difference. It means that if a for (let...) loop executes multiple times, and that loop contains a closure, as in our talking cat example, each closure will capture a different copy of the loop variable, rather than all closures capturing the same loop variable.

      So the talking cat example, too, can be fixed just by changing var to let. (There’s a short example of this after the list.)

      This applies to all three kinds of for loop: for-of, for-in, and the old-school C kind with semicolons.

    • It’s an error to try to use a let variable before its declaration is reached. The variable is uninitialized until control flow reaches the line of code where it’s declared. For example:

      function update() {
        console.log("current time:", t);  // ReferenceError
        ...
        let t = readTachymeter();
      }
      

      This rule is there to help you catch bugs. Instead of NaN results, you’ll get an exception on the line of code where the problem is.

      This period when the variable is in scope, but uninitialized, is called the temporal dead zone. I keep waiting for this inspired bit of jargon to make the leap to science fiction. Nothing yet.

      (Crunchy performance details: In most cases, you can tell whether the declaration has run or not just by looking at the code, so the JavaScript engine does not actually need to perform an extra check every time the variable is accessed to make sure it’s been initialized. However, inside a closure, it sometimes isn’t clear. In those cases the JavaScript engine will do a run-time check. That means let can be a touch slower than var.)

      (Crunchy alternate-universe scoping details: In some programming languages, the scope of a variable starts at the point of the declaration, instead of reaching backwards to cover the whole enclosing block. The standard committee considered using that kind of scoping rule for let. That way, the use of t that causes a ReferenceError here simply wouldn’t be in the scope of the later let t, so it wouldn’t refer to that variable at all. It could refer to a t in an enclosing scope. But this approach did not work well with closures or with function hoisting, so it was eventually abandoned.)

    • Redeclaring a variable with let is a SyntaxError.

      This rule, too, is there to help you detect trivial mistakes. Still, this is the difference that is most likely to cause you some issues if you attempt a global let-to-var conversion, because it applies even to global let variables.

      If you have several scripts that all declare the same global variable, you’d better keep using var for that. If you switch to let, whichever script loads second will fail with an error.

      Or use ES6 modules. But that’s a story for another day.

    (Crunchy syntax details: let is a reserved word in strict mode code. In non-strict-mode code, for the sake of backward compatibility, you can still declare variables, functions, and arguments named let—you can write var let = 'q';! Not that you would do that. And let let; is not allowed at all.)
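    Here’s the short example promised above: changing var to let in the talking cat loop gives each callback its own copy of i (console.log stands in for the hypothetical cat):

    var messages = ["Meow!", "I'm a talking cat!", "Callbacks are fun!"];
    
    for (let i = 0; i < messages.length; i++) {
      setTimeout(function () {
        console.log(messages[i]);  // logs the messages in order
      }, i * 1500);
    }
    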

    Apart from those differences, let and var are pretty much the same. They both support declaring multiple variables separated by commas, for example, and they both support destructuring.

    Note that class declarations behave like let, not var. If you load a script containing a class multiple times, the second time you’ll get an error for redeclaring the class.

    const

    Right, one more thing!

    ES6 also introduces a third keyword that you can use alongside let: const.

    Variables declared with const are just like let, except that you can’t assign to them after the point where they’re declared. Doing so throws a TypeError.

    const MAX_CAT_SIZE_KG = 3000; // 🙀
    
    MAX_CAT_SIZE_KG = 5000; // TypeError
    MAX_CAT_SIZE_KG++; // nice try, but still a TypeError
    

    Sensibly enough, you can’t declare a const without giving it a value.

    const theFairest;  // SyntaxError, you troublemaker
    

    Secret agent namespace

    “Namespaces are one honking great idea—let’s do more of those!” —Tim Peters, “The Zen of Python”

    Behind the scenes, nested scopes are one of the core concepts that programming languages are built around. It’s been this way since what, ALGOL? Something like 57 years. And it’s truer today than ever.

    Before ES3, JavaScript only had global scopes and function scopes. (Let’s ignore with statements.) ES3 introduced try-catch statements, which meant adding a new kind of scope, used only for the exception variable in catch blocks. ES5 added a scope used by strict eval(). ES6 adds block scopes, for-loop scopes, the new global let scope, module scopes, and additional scopes that are used when evaluating default values for arguments.

    All the extra scopes added from ES3 onward are necessary to make JavaScript’s procedural and object-oriented features work as smoothly, precisely, and intuitively as closures—and cooperate seamlessly with closures. Maybe you never noticed any of these scoping rules before today. If so, the language is doing its job.

    Can I use let and const now?

    Yes. To use them on the web, you’ll have to use an ES6 compiler such as Babel, Traceur, or TypeScript. (Babel and Traceur do not support the temporal dead zone yet.)

    io.js supports let and const, but only in strict-mode code. Node.js support is the same, but the --harmony option is also required.

    Brendan Eich implemented the first version of let in Firefox nine years ago. The feature was thoroughly redesigned during the standardization process. Shu-yu Guo is upgrading our implementation to match the standard, with code reviews by Jeff Walden and others.

    Well, we’re in the home stretch. The end of our epic tour of ES6 features is in sight. In two weeks, we’ll finish up with what’s probably the most eagerly awaited ES6 feature of them all. But first, next week we’ll have a post that extends our earlier coverage of a new feature that’s just super. So please join us as Eric Faust returns with a look at ES6 subclassing in depth.

  9. ES6 In Depth: Classes

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Today, we get a bit of a respite from the complexity that we’ve seen in previous posts in this series. There are no new never-before-seen ways of writing code with Generators; no all-powerful Proxy objects which provide hooks into the inner algorithmic workings of the JavaScript language; no new data structures that obviate the need for roll-your-own solutions. Instead, we get to talk about syntactic and idiomatic cleanups for an old problem: object constructor creation in JavaScript.

    The Problem

    Say we want to create the most quintessential example of object-oriented design principles: the Circle class. Imagine we are writing a Circle for a simple Canvas library. Among other things, we might want to know how to do the following:

    • Draw a given Circle to a given Canvas.
    • Keep track of the total number of Circles ever made.
    • Keep track of the radius of a given Circle, and how to enforce invariants on its value.
    • Calculate the area of a given Circle.

    Current JS idioms say that we should first create the constructor as a function, then add any properties we might want to the function itself, then replace the prototype property of that constructor with an object. This prototype object will contain all of the properties that instance objects created by our constructor should start with. For even a simple example, by the time you get it all typed out, this ends up being a lot of boilerplate:

    function Circle(radius) {
        this.radius = radius;
        Circle.circlesMade++;
    }
    
    Circle.draw = function draw(circle, canvas) { /* Canvas drawing code */ }
    
    Object.defineProperty(Circle, "circlesMade", {
        get: function() {
            return !this._count ? 0 : this._count;
        },
    
        set: function(val) {
            this._count = val;
        }
    });
    
    Circle.prototype = {
        area: function area() {
            return Math.pow(this.radius, 2) * Math.PI;
        }
    };
    
    Object.defineProperty(Circle.prototype, "radius", {
        get: function() {
            return this._radius;
        },
    
        set: function(radius) {
            if (!Number.isInteger(radius))
                throw new Error("Circle radius must be an integer.");
            this._radius = radius;
        }
    });
    

    Not only is the code cumbersome, it’s also far from intuitive. It requires having a non-trivial understanding of the way functions work, and how various installed properties make their way onto created instance objects. If this approach seems complicated, don’t worry. The whole point of this post is to show off a much simpler way of writing code that does all of this.

    Method Definition Syntax

    In a first attempt to clean this up, ES6 offered a new syntax for adding special properties to an object. While it was easy to add the area method to Circle.prototype above, it felt much heavier to add the getter/setter pair for radius. As JS moved towards a more object-oriented approach, people became interested in designing cleaner ways to add accessors to objects. We needed a new way of adding “methods” to an object exactly as if they had been added with obj.prop = method, without the weight of Object.defineProperty. People wanted to be able to do the following things easily:

    1. Add normal function properties to an object.
    2. Add generator function properties to an object.
    3. Add normal accessor function properties to an object.
    4. Add any of the above as if you had done it with [] syntax on the finished object. We’ll call these Computed property names.

    Some of these things couldn’t be done before. For example, there is no way to define a getter or setter with assignments to obj.prop. Accordingly, new syntax had to be added. You can now write code that looks like this:

    var obj = {
        // Methods are now added without a function keyword, using the name of the
        // property as the name of the function.
        method(args) { ... },
    
        // To make a method that's a generator instead, just add a '*', as normal.
        *genMethod(args) { ... },
    
        // Accessors can now go inline, with the help of |get| and |set|. You can
        // just define the functions inline. No generators, though.
    
        // Note that a getter installed this way must have no arguments
        get propName() { ... },
    
        // Note that a setter installed this way must have exactly one argument
        set propName(arg) { ... },
    
        // To handle case (4) above, [] syntax is now allowed anywhere a name would
        // have gone! This can use symbols, call functions, concatenate strings, or
        // any other expression that evaluates to a property id. Though I've shown
        // it here as a method, this syntax also works for accessors or generators.
        [functionThatReturnsPropertyName()] (args) { ... }
    };
    

    Using this new syntax, we can now rewrite our snippet above:

    function Circle(radius) {
        this.radius = radius;
        Circle.circlesMade++;
    }
    
    Circle.draw = function draw(circle, canvas) { /* Canvas drawing code */ }
    
    Object.defineProperty(Circle, "circlesMade", {
        get: function() {
            return !this._count ? 0 : this._count;
        },
    
        set: function(val) {
            this._count = val;
        }
    });
    
    Circle.prototype = {
        area() {
            return Math.pow(this.radius, 2) * Math.PI;
        },
    
        get radius() {
            return this._radius;
        },
        set radius(radius) {
            if (!Number.isInteger(radius))
                throw new Error("Circle radius must be an integer.");
            this._radius = radius;
        }
    };
    

    Pedantically, this code isn’t exactly identical to the snippet above. Method definitions in object literals are installed as configurable and enumerable, while the accessors installed in the first snippet will be non-configurable and non-enumerable. In practice, this is rarely noticed, and I decided to elide enumerability and configurability above for brevity.

    Still, it’s getting better, right? Unfortunately, even armed with this new method definition syntax, there’s not much we can do for the definition of Circle, as we have yet to define the function. There’s no way to get properties onto a function as you’re defining it.

    Class Definition Syntax

    Though this was better, it still didn’t satisfy people who wanted a cleaner solution to object-oriented design in JavaScript. Other languages have a construct for handling object-oriented design, they argued, and that construct is called a class.

    Fair enough. Let’s add classes, then.

    We want a system that will allow us to add methods to a named constructor, and add methods to its .prototype as well, so that they will appear on constructed instances of the class. Since we have our fancy new method definition syntax, we should definitely use it. Then, we only need a way to differentiate between what is generalized over all instances of the class, and what functions are specific to a given instance. In C++ or Java, the keyword for that is static. Seems as good as any. Let’s use it.

    Now it would be useful to have a way to designate one of the methods of the bunch to be the function that gets called as the constructor. In C++ or Java, that would be named the same as the class, with no return type. Since JS doesn’t have return types, and we need a .constructor property anyway, for backwards compatibility, let’s call that method constructor.

    Putting it together, we can rewrite our Circle class as it was always meant to be:

    class Circle {
        constructor(radius) {
            this.radius = radius;
            Circle.circlesMade++;
        };
    
        static draw(circle, canvas) {
            // Canvas drawing code
        };
    
        static get circlesMade() {
            return !this._count ? 0 : this._count;
        };
        static set circlesMade(val) {
            this._count = val;
        };
    
        area() {
            return Math.pow(this.radius, 2) * Math.PI;
        };
    
        get radius() {
            return this._radius;
        };
        set radius(radius) {
            if (!Number.isInteger(radius))
                throw new Error("Circle radius must be an integer.");
            this._radius = radius;
        };
    }
    

    Wow! Not only can we group everything related to a Circle together, but everything looks so… clean. This is definitely better than what we started with.

    Even so, some of you are likely to have questions or to find edge cases. I’ll try to anticipate and address some of these below:

    • What’s with the semicolons? – In an attempt to “make things look more like traditional classes,” we decided to go with a more traditional separator. Don’t like it? It’s optional. No delimiter is required.

    • What if I don’t want a constructor, but still want to put methods on created objects? – That’s fine. The constructor method is totally optional. If you don’t supply one, the default is as if you had typed constructor() {}.

    • Can constructor be a generator? – Nope! Adding a constructor that’s not a normal method will result in a SyntaxError. This includes both generators and accessors.

    • Can I define constructor with a computed property name? – Unfortunately not. That would be really hard to detect, so we don’t try. If you define a method with a computed property name that ends up being named constructor, you will still get a method named constructor, it just won’t be the class’s constructor function.

    • What if I change the value of Circle? Will that cause new Circle to misbehave? – Nope! Much like function expressions, classes get an internal binding of their given name. This binding cannot be changed by external forces, so no matter what you set the Circle variable to in the enclosing scope, Circle.circlesMade++ in the constructor will function as expected.

    • OK, but I could pass an object literal directly as a function argument. This new class thing looks like it won’t work anymore. – Luckily, ES6 also adds class expressions! They can be either named or unnamed, and will behave exactly the same way as described above, except they won’t create a variable in the scope in which you declare them. There’s a short example after this list.

    • What about those shenanigans above with enumerability and so on? – People wanted to make it so that you could install methods on objects, but that when you enumerated the object’s properties, you only got the added data properties of the object. Makes sense. Because of this, installed methods in classes are configurable, but not enumerable.

    • Hey, wait… what..? Where are my instance variables? What about static constants? – You caught me. They currently don’t exist in class definitions in ES6. Good news, though! Along with others involved in the spec process, I am a strong proponent of both static and const values being installable in class syntax. In fact, it’s already come up in spec meetings! I think we can look forward to more discussion of this in the future.

    • OK, even still, these are awesome! Can I use them yet? – Not exactly. There are polyfill options (especially Babel) so that you can play around with them today. Unfortunately, it’s going to be a little while before they are natively implemented in all major browsers. I’ve implemented everything we discussed here today in the Nightly version of Firefox, and it’s implemented but not enabled by default in Edge and Chrome. Unfortunately, it looks like there’s no current implementation in Safari.

    • Java and C++ have subclassing and a super keyword, but there’s nothing mentioned here. Does JS have that? – It does! However, that’s a whole other post’s worth of discussion. Check back with us later for an update about subclassing, where we’ll discuss more about the power of JavaScript classes.
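    Here’s the short example promised in the class-expression answer above (all names are invented for illustration):

    // A class expression bound to a variable...
    const Temporary = class {
        constructor(value) {
            this.value = value;
        }
        double() {
            return this.value * 2;
        }
    };
    
    new Temporary(21).double(); // 42
    
    // ...or passed directly as an argument, with no name at all.
    function instantiate(Ctor) {
        return new Ctor(7);
    }
    instantiate(class { constructor(n) { this.n = n; } }).n; // 7
    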

    I would not have been able to implement classes without the guidance and enormous code review responsibility of Jason Orendorff and Jeff Walden.

    Next week, Jason Orendorff returns from a week’s vacation and takes up the subject of let and const.

  10. ES6 In Depth: Proxies

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    Here is the sort of thing we are going to do today.

    var obj = new Proxy({}, {
      get: function (target, key, receiver) {
        console.log(`getting ${key}!`);
        return Reflect.get(target, key, receiver);
      },
      set: function (target, key, value, receiver) {
        console.log(`setting ${key}!`);
        return Reflect.set(target, key, value, receiver);
      }
    });
    

    That’s a little complicated for a first example. I’ll explain all the parts later. For now, check out the object we created:

    > obj.count = 1;
        setting count!
    > ++obj.count;
        getting count!
        setting count!
        2
    

    What’s going on here? We are intercepting property accesses on this object. We are overloading the "." operator.

    How it’s done

    The best trick in computing is called virtualization. It’s a very general-purpose technique for doing astonishing things. Here’s how it works.

    1. Take any picture.

      (picture of a coal power plant)

      Photo credit: Martin Nikolaj Bech
    2. Draw an outline around something in the picture.

      (same photo, with the power plant circled)
    3. Now replace either everything inside the outline, or everything outside the outline, with something totally unexpected. There is just one rule, the Rule of Backwards Compatibility. Your replacement must behave enough like what was there before that nobody on the other side of the line notices that anything has changed.

      (the circled part is replaced with a wind farm)

      Photo credit: Beverley Goodwin.

    You’ll be familiar with this kind of hack from classic computer science films such as The Truman Show and The Matrix, where a person is inside the outline, and the rest of the world has been replaced with an elaborate illusion of normalcy.

    In order to satisfy the Rule of Backwards Compatibility, your replacement may need to be cunningly designed. But the real trick is in drawing the right outline.

    By outline, I mean an API boundary. An interface. Interfaces specify how two bits of code interact and what each part expects of the other. So if an interface is designed into the system, the outline is already drawn for you. You know you can replace either side, and the other side won’t care.

    It’s when there’s not an existing interface that you have to get creative. Some of the coolest software hacks of all time have involved drawing an API boundary where previously there was none, and bringing that interface into existence via a prodigious engineering effort.

    Virtual memory, Hardware virtualization, Docker, Valgrind, rr—to various degrees all of these projects involved driving new and rather unexpected interfaces into existing systems. In some cases, it took years and new operating system features and even new hardware to make the new boundary work well.

    The best virtualization hacks bring with them a new understanding of whatever’s being virtualized. To write an API for something, you have to understand it. Once you understand, you can do amazing things.

    ES6 introduces virtualization support for JavaScript’s most fundamental concept: the object.

    What is an object?

    No, really. Take a moment. Think it over. Scroll down when you know what an object is.

    (picture of Auguste Rodin’s sculpture, The Thinker)

    Photo credit: Joe deSousa.

    This question is too hard for me! I’ve never heard a really satisfying definition.

    Is that surprising? Defining fundamental concepts is always hard—check out the first few definitions in Euclid’s Elements sometime. The ECMAScript language specification is in good company, therefore, when it unhelpfully defines an object as a “member of the type Object.”

    Later, the spec adds that “An Object is a collection of properties.” That’s not bad. If you want a definition, that will do for now. We’ll come back to it later.

    I said before that to write an API for something, you have to understand it. So in a way, I’ve promised that if we get through all this, we’re going to understand objects better, and we’ll be able to do amazing things.

    So let’s follow in the footsteps of the ECMAScript standard committee and see what it would take to define an API, an interface, for JavaScript objects. What sort of methods do we need? What can objects do?

    That depends somewhat on the object. DOM Element objects can do certain things; AudioNode objects do other things. But there are a few fundamental abilities all objects share:

    • Objects have properties. You can get and set properties, delete them, and so on.
    • Objects have prototypes. This is how inheritance works in JS.
    • Some objects are functions or constructors. You can call them.

    Almost everything JS programs do with objects is done using properties, prototypes, and functions. Even the special behavior of an Element or AudioNode object is accessed by calling methods, which are just inherited function properties.

    So when the ECMAScript standard committee defined a set of 14 internal methods, the common interface for all objects, it should come as no surprise that they ended up focusing on these three fundamental things.

    The full list can be found in tables 5 and 6 of the ES6 standard. Here I’ll just describe a few. The weird double brackets, [[ ]], emphasize that these are internal methods, hidden from ordinary JS code. You can’t call, delete, or overwrite these like ordinary methods.

    • obj.[[Get]](key, receiver) – Get the value of a property.

      Called when JS code does: obj.prop or obj[key].

      obj is the object currently being searched; receiver is the object where we first started searching for this property. Sometimes we have to search several objects. obj might be an object on receiver’s prototype chain.

    • obj.[[Set]](key, value, receiver) – Assign to a property of an object.

      Called when JS code does: obj.prop = value or obj[key] = value.

      In an assignment like obj.prop += 2, the [[Get]] method is called first, and the [[Set]] method afterwards. Same goes for ++ and --.

    • obj.[[HasProperty]](key) – Test whether a property exists.

      Called when JS code does: key in obj.

    • obj.[[Enumerate]]() – List obj’s enumerable properties.

      Called when JS code does: for (key in obj) ....

      This returns an iterator object, and that’s how a for-in loop gets an object’s property names.

    • obj.[[GetPrototypeOf]]() – Return obj’s prototype.

      Called when JS code does: obj.__proto__ or Object.getPrototypeOf(obj).

    • functionObj.[[Call]](thisValue, arguments) – Call a function.

      Called when JS code does: functionObj() or x.method().

      Optional. Not every object is a function.

    • constructorObj.[[Construct]](arguments, newTarget) – Invoke a constructor.

      Called when JS code does: new Date(2890, 6, 2), for example.

      Optional. Not every object is a constructor.

      The newTarget argument plays a role in subclassing. We’ll cover it in a future post.

    Maybe you can guess at some of the other seven.

    Throughout the ES6 standard, wherever possible, any bit of syntax or builtin function that does anything with objects is specified in terms of the 14 internal methods. ES6 drew a clear boundary around the brains of an object. What proxies let you do is replace the standard kind of brains with arbitrary JS code.

    When we start talking about overriding these internal methods in a moment, remember, we’re talking about overriding the behavior of core syntax like obj.prop, builtin functions like Object.keys(), and more.

    Proxy

    ES6 defines a new global constructor, Proxy. It takes two arguments: a target object and a handler object. So a simple example would look like this:

    var target = {}, handler = {};
    var proxy = new Proxy(target, handler);
    

    Let’s set aside the handler object for a moment and focus on how proxy and target are related.

    I can tell you how proxy is going to behave in one sentence. All of proxy’s internal methods are forwarded to target. That is, if something calls proxy.[[Enumerate]](), it’ll just return target.[[Enumerate]]().

    Let’s try it out. We’ll do something that causes proxy.[[Set]]() to be called.

    proxy.color = "pink";
    

    OK, what just happened? proxy.[[Set]]() should have called target.[[Set]](), so that should have made a new property on target. Did it?

    > target.color
        "pink"
    

    It did. And the same goes for all the other internal methods. This proxy will, for the most part, behave exactly the same as its target.

    There are limits to the fidelity of the illusion. You’ll find that proxy !== target. And a proxy will sometimes flunk type checks that the target would pass. Even if a proxy’s target is a DOM Element, for example, the proxy isn’t really an Element; so something like document.body.appendChild(proxy) will fail with a TypeError.
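
    To make those limits concrete, here is a tiny sketch. It assumes it runs in a browser page, since it uses DOM objects; nothing else in it is new.

    var target = document.createElement("div");
    var proxy = new Proxy(target, {});      // empty handler: everything is forwarded

    console.log(proxy !== target);          // true: the proxy is a distinct object
    console.log(proxy instanceof Element);  // true: instanceof only walks the prototype chain

    // But the DOM's own type check is stricter than instanceof:
    // document.body.appendChild(proxy);    // would throw a TypeError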

    Proxy handlers

    Now let’s return to the handler object. This is what makes proxies useful.

    The handler object’s methods can override any of the proxy’s internal methods.

    For example, if you’d like to intercept all attempts to assign to an object’s properties, you can do that by defining a handler.set() method:

    var target = {};
    var handler = {
      set: function (target, key, value, receiver) {
        throw new Error("Please don't set properties on this object.");
      }
    };
    var proxy = new Proxy(target, handler);
    
    > proxy.name = "angelina";
        Error: Please don't set properties on this object.
    

    The full list of handler methods is documented on the MDN page for Proxy. There are 14 methods, and they line up with the 14 internal methods defined in ES6.

    All handler methods are optional. If an internal method is not intercepted by the handler, then it’s forwarded to the target, as we saw before.
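
    To see that forwarding in action, we can keep poking at the proxy from the previous example. Only set was intercepted, so reads and in checks still behave as if we were talking to target directly:

    > proxy.name
        undefined
    > "name" in proxy
        false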

    Example: “Impossible” auto-populating objects

    We now know enough about proxies to try using them for something really weird, something that’s impossible without proxies.

    Here’s our first exercise. Make a function Tree() that can do this:

    > var tree = Tree();
    > tree
        { }
    > tree.branch1.branch2.twig = "green";
    > tree
        { branch1: { branch2: { twig: "green" } } }
    > tree.branch1.branch3.twig = "yellow";
    > tree
        { branch1: { branch2: { twig: "green" },
                     branch3: { twig: "yellow" }}}
    

    Note how all the intermediate objects (branch1, branch2, and branch3) are magically autocreated when they're needed. Convenient, right? How could it possibly work?

    Until now, there was no way it could work. But with proxies this is only a few lines of code. We just need to tap into tree.[[Get]](). If you like a challenge, you might want to try implementing this yourself before reading on.

    (picture of a tap in a maple tree)

    Not the right way to tap into a tree in JS. Photo credit: Chiot’s Run.

    Here’s my solution:

    function Tree() {
      return new Proxy({}, handler);
    }
    
    var handler = {
      get: function (target, key, receiver) {
        if (!(key in target)) {
          target[key] = Tree();  // auto-create a sub-Tree
        }
        return Reflect.get(target, key, receiver);
      }
    };
    

    Note the call to Reflect.get() at the end. It turns out there’s an extremely common need, in proxy handler methods, to be able to say “now just do the default behavior of delegating to target.” So ES6 defines a new Reflect object with 14 methods on it that you can use to do exactly that.
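
    To get a feel for its shape, here is a quick sketch of a few Reflect methods used directly, outside any handler; each one simply performs the default behavior of the corresponding internal method.

    var obj = { x: 1 };

    Reflect.get(obj, "x");      // 1, same as obj.x
    Reflect.set(obj, "y", 2);   // true, same as obj.y = 2 (returns a success flag)
    Reflect.has(obj, "y");      // true, same as "y" in obj
    Reflect.ownKeys(obj);       // ["x", "y"]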

    Example: A read-only view

    I think I may have given the false impression that proxies are easy to use. Let’s do one more example to see if that’s true.

    This time our assignment is more complex: we have to implement a function, readOnlyView(object), that takes any object and returns a proxy that behaves just like that object, except without the ability to mutate it. So, for example, it should behave like this:

    > var newMath = readOnlyView(Math);
    > newMath.min(54, 40);
        40
    > newMath.max = Math.min;
        Error: can't modify read-only view
    > delete newMath.sin;
        Error: can't modify read-only view
    

    How can we implement this?

    The first step is to intercept all internal methods that would modify the target object if we let them through. There are five of those.

    function NOPE() {
      throw new Error("can't modify read-only view");
    }
    
    var handler = {
      // Override all five mutating methods.
      set: NOPE,
      defineProperty: NOPE,
      deleteProperty: NOPE,
      preventExtensions: NOPE,
      setPrototypeOf: NOPE
    };
    
    function readOnlyView(target) {
      return new Proxy(target, handler);
    }
    

    This works. It prevents assignment, property definition, and so on via the read-only view.

    Are there any loopholes in this scheme?

    The biggest problem is that the [[Get]] method, and others, may still return mutable objects. So even if some object x is a read-only view, x.prop may be mutable! That’s a huge hole.

    To plug it, we must add a handler.get() method:

    var handler = {
      ...
    
      // Wrap other results in read-only views.
      get: function (target, key, receiver) {
        // Start by just doing the default behavior.
        var result = Reflect.get(target, key, receiver);
    
        // Make sure not to return a mutable object!
        if (Object(result) === result) {
          // result is an object.
          return readOnlyView(result);
        }
        // result is a primitive, so already immutable.
        return result;
      },
    
      ...
    };
    

    This is not sufficient either. Similar code is needed for other methods, including getPrototypeOf and getOwnPropertyDescriptor.
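
    For instance, a getPrototypeOf handler in the same spirit might look something like this sketch, reusing readOnlyView and the "..." convention from the code above:

    var handler = {
      ...

      // Wrap the prototype in a read-only view as well.
      getPrototypeOf: function (target) {
        var proto = Reflect.getPrototypeOf(target);
        return proto === null ? null : readOnlyView(proto);
      },

      ...
    };

    Even that has a wrinkle: if the target is non-extensible, the invariant rules discussed later force this trap to return the target's real prototype, unwrapped.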

    Then there are further problems. When a getter or method is called via this kind of proxy, the this value passed to the getter or method will typically be the proxy itself. But as we saw earlier, many accessors and methods perform a type check that the proxy won’t pass. It would be better to substitute the target object for the proxy here. Can you figure out how to do it?
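
    One possible answer, sketched below and certainly not the only one: in the get method, read the property with the target itself as the receiver, so getters run with the real object as this, and when the result turns out to be a function, hand back a wrapper that applies it to the target rather than to the proxy.

    var handler = {
      ...

      get: function (target, key, receiver) {
        // Read with target as the receiver, so getters see the real object as this.
        var result = Reflect.get(target, key, target);

        if (typeof result === "function") {
          // Hand back a wrapper that calls the real method on the real target.
          return function () {
            return Reflect.apply(result, target, arguments);
          };
        }
        if (Object(result) === result) {
          return readOnlyView(result);  // still wrap object results in read-only views
        }
        return result;
      },

      ...
    };

    Even this leaves holes: the wrapper's return value isn't wrapped, and the wrapper is a fresh function on every read. Which brings us to the real lesson.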

    The lesson to draw from this is that creating a proxy is easy, but creating a proxy with intuitive behavior is quite hard.

    Odds and ends

    • What are proxies really good for?

      They’re certainly useful whenever you want to observe or log accesses to an object. They’ll be handy for debugging. Testing frameworks could use them to create mock objects.

      Proxies are useful if you need behavior that’s just slightly past what an ordinary object can do: lazily populating properties, for example.

      I almost hate to bring this up, but one of the best ways to see what's going on in code that uses proxies… is to wrap a proxy's handler object in another proxy that logs to the console every time a handler method is accessed. (A sketch of that trick appears after this list.)

      Proxies can be used to restrict access to an object, as we did with readOnlyView. That sort of use case is rare in application code, but Firefox uses proxies internally to implement security boundaries between different domains. They’re a key part of our security model.

    • Proxies ♥ WeakMaps. In our readOnlyView example, we create a new proxy every time an object is accessed. It could save a lot of memory to cache every proxy we create in a WeakMap, so that however many times an object is passed to readOnlyView, only a single proxy is created for it. (One way to do this is sketched after this list.)

      This is one of the motivating use cases for WeakMap.

    • Revocable proxies. ES6 also defines another function, Proxy.revocable(target, handler), that creates a proxy, just like new Proxy(target, handler), except this proxy can be revoked later. (Proxy.revocable returns an object with a .proxy property and a .revoke method.) Once a proxy is revoked, it simply doesn’t work anymore; all its internal methods throw.

    • Object invariants. In certain situations, ES6 requires proxy handler methods to report results that are consistent with the target object’s state. It does this in order to enforce rules about immutability across all objects, even proxies. For example, a proxy can’t claim to be inextensible unless its target really is inextensible.

      The exact rules are too complex to go into here, but if you ever see an error message like "proxy can't report a non-existent property as non-configurable", this is the cause. The most likely remedy is to change what the proxy is reporting about itself. Another possibility is to mutate the target on the fly to reflect whatever the proxy is reporting.
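
    For instance, the error quoted in the last bullet can be provoked on purpose with a handler that lies about a property. This is only a sketch, and the exact wording of the message is Firefox's:

    var liar = new Proxy({}, {
      getOwnPropertyDescriptor: function (target, key) {
        // The target has no such property, so reporting it as non-configurable
        // violates an object invariant.
        return { value: 1, writable: true, enumerable: true, configurable: false };
      }
    });

    Object.getOwnPropertyDescriptor(liar, "flavor");
    // TypeError: proxy can't report a non-existent property as non-configurable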
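
    Here is a minimal sketch of the logging trick mentioned in the list above. The name loggingHandler is just for illustration; the idea is that the outer proxy looks up its handler methods lazily, one per operation, so a get trap on the handler sees every lookup.

    function loggingHandler(handler) {
      return new Proxy(handler, {
        get: function (target, trapName, receiver) {
          // Log which handler method the proxy machinery is about to use.
          console.log("handler trap accessed:", trapName);
          return Reflect.get(target, trapName, receiver);
        }
      });
    }

    var proxy = new Proxy({}, loggingHandler({}));
    proxy.color = "pink";   // logs: handler trap accessed: set
    proxy.color;            // logs: handler trap accessed: get
    "color" in proxy;       // logs: handler trap accessed: has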
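
    And here is one way, again only a sketch, to add that WeakMap cache to the readOnlyView function from earlier, so that each target gets at most one proxy:

    var readOnlyViews = new WeakMap();

    function readOnlyView(target) {
      var view = readOnlyViews.get(target);
      if (view === undefined) {
        view = new Proxy(target, handler);  // same handler as before
        readOnlyViews.set(target, view);
      }
      return view;
    }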

    What is an object now?

    I think where we left it was: “An Object is a collection of properties.”

    I'm not totally happy with this definition, even taking for granted that we throw in prototypes and callability as well. I think the word "collection" is too generous, given how poorly defined a proxy's behavior can be. Its handler methods could do anything. They could return random results.

    By figuring out what an object can do, standardizing those methods, and adding virtualization as a first-class feature that everyone can use, the ECMAScript standard committee has expanded the realm of possibilities.

    Objects can be almost anything now.

    Maybe the most honest answer to the question “What is an object?” now is to take the 12 required internal methods as a definition. An object is something in a JS program that has a [[Get]] operation, a [[Set]] operation, and so on.


    Do we understand objects better after all that? I’m not sure! Did we do amazing things? Yeah. We did things that were never possible in JS before.

    Can I use Proxies today?

    Nope! Not on the Web, anyway. Only Firefox and Microsoft Edge support proxies, and there is no polyfill.

    Using proxies in Node.js or io.js requires both an off-by-default option (--harmony_proxies) and the harmony-reflect polyfill, since V8 implements an older version of the Proxy specification. (A previous version of this article had incorrect information about this. Thanks to Mörre and Aaron Powell for correcting my mistakes in the comments.)

    So feel free to experiment with proxies! Create a hall of mirrors where there seem to be thousands of copies of every object, all alike, and it’s impossible to debug anything! Now is the time. There’s little danger of your ill-advised proxy code escaping into production… yet.

    Proxies were first implemented in 2010, by Andreas Gal, with code reviews by Blake Kaplan. The standard committee then completely redesigned the feature. Eddy Bruel implemented the new spec in 2012.

    I implemented Reflect, with code reviews by Jeff Walden. It’ll be in Firefox Nightly starting this weekend—all except Reflect.enumerate(), which is not implemented yet.

    Next up, we’ll be talking about the most controversial feature in ES6, and who better to present it than the person who’s implementing it in Firefox? So please join us next week as Mozilla engineer Eric Faust presents ES6 classes in depth.