Mozilla

Featured Articles

  1. ES6 In Depth: An Introduction

    Welcome to ES6 In Depth! In this new weekly series, we’ll be exploring ECMAScript 6, the upcoming new edition of the JavaScript language. ES6 contains many new language features that will make JS more powerful and expressive, and we’ll visit them one by one in weeks to come. But before we start in on the details, maybe it’s worth taking a minute to talk about what ES6 is and what you can expect.

    What falls under the scope of ECMAScript?

    The JavaScript programming language is standardized by ECMA (a standards body like W3C) under the name ECMAScript. Among other things, ECMAScript defines the language’s syntax, its types and semantics, and its standard library of built-in objects and functions.

    What it doesn’t define is anything to do with HTML or CSS, or the Web APIs, such as the DOM (Document Object Model). Those are defined in separate standards. ECMAScript covers the aspects of JS that are present not only in the browser, but also in non-browser environments such as node.js.

    The new standard

    Last week, the final draft of the ECMAScript Language Specification, Edition 6, was submitted to the Ecma General Assembly for review. What does that mean?

    It means that this summer, we’ll have a new standard for the core JavaScript programming language.

    This is big news. A new JS language standard doesn’t drop every day. The last one, ES5, happened back in 2009. The ES standards committee has been working on ES6 ever since.

    ES6 is a major upgrade to the language. At the same time, your JS code will continue to work. ES6 was designed for maximum compatibility with existing code. In fact, many browsers already support various ES6 features, and implementation efforts are ongoing. This means all your JS code has already been running in browsers that implement some ES6 features! If you haven’t seen any compatibility issues by now, you probably never will.

    Counting to 6

    The previous editions of the ECMAScript standard were numbered 1, 2, 3, and 5.

    What happened to Edition 4? An ECMAScript Edition 4 was once planned—and in fact a ton of work was done on it—but it was eventually scrapped as too ambitious. (It had, for example, a sophisticated opt-in static type system with generics and type inference.)

    ES4 was contentious. When the standards committee finally stopped work on it, the committee members agreed to publish a relatively modest ES5 and then proceed to work on more substantial new features. This explicit, negotiated agreement was called “Harmony,” and it’s why the ES5 spec contains these two sentences:

    ECMAScript is a vibrant language and the evolution of the language is not complete. Significant technical enhancement will continue with future editions of this specification.

    This statement could be seen as something of a promise.

    Promises resolved

    ES5, the 2009 update to the language, introduced Object.create(), Object.defineProperty(), getters and setters, strict mode, and the JSON object. I’ve used all these features, and I like what ES5 did for the language. But it would be too much to say any of these features had a dramatic impact on the way I write JS code. The most important innovation, for me, was probably the new Array methods: .map(), .filter(), and so on.
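
    Here’s a minimal sketch of those Array methods in action:

    var numbers = [1, 2, 3, 4, 5];
    var doubledEvens = numbers
      .filter(function (n) { return n % 2 === 0; }) // keep only the even numbers
      .map(function (n) { return n * 2; });         // double each remaining value
    console.log(doubledEvens); // [4, 8]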

    Well, ES6 is different. It’s the product of years of harmonious work. And it’s a treasure trove of new language and library features, the most substantial upgrade for JS ever. The new features range from welcome conveniences, like arrow functions and simple string interpolation, to brain-melting new concepts like proxies and generators.
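
    To make that concrete, here is a minimal sketch of two of those conveniences, arrow functions and template strings, rewriting the ES5 example above:

    let numbers = [1, 2, 3, 4, 5];
    let doubledEvens = numbers.filter(n => n % 2 === 0).map(n => n * 2); // arrow functions
    console.log(`Doubled evens: ${doubledEvens}`); // template string interpolation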

    ES6 will change the way you write JS code.

    This series aims to show you how, by examining the new features ES6 offers to JavaScript programmers.

    We’ll start with a classic “missing feature” that I’ve been eager to see in JavaScript for the better part of a decade. So join us next week for a look at ES6 iterators and the new for-of loop.

  2. Drag Elements, Console History, and more – Firefox Developer Edition 39

    Quite a few big new features, improvements, and bug fixes made their way into Firefox Developer Edition 39. Update your Firefox Developer Edition or Nightly build to try them out!

    Inspector

    The Inspector now allows you to move elements around via drag and drop. Click and hold on an element and then drag it to where you want it to go. This feature was added by contributor Mahdi Dibaiee.

    Back in Firefox 33, a tooltip was added to the rule view to allow editing curves for cubic bezier CSS animations. In Developer Edition 39, we’ve greatly enhanced the tooltip’s UX by adding various standard curves you can try right away, as well as cleaning up the overall appearance. This enhancement was added by new contributor John Giannakos.

    Screenshot: the cubic bezier tooltip with its new gallery of preset curves.

    The CSS animations panel we debuted in Developer Edition 37 now includes a time machine. You can rewind, fast forward, and set the current time of your animations.

    Console

    Previously, when the DevTools console closed, your past Console history was lost. Now, Console history is persisted across sessions. The recent commands you’ve entered will remain accessible in the next toolbox you open, whether it’s in another tab or after restarting Firefox. Additionally, we’ve added a clearHistory console command to reset the stored list of commands.

    The shorthand $_ has been added as an alias for the last result evaluated in the Console. If you evaluated an expression without storing the result in a variable, you can use $_ as a quick way to grab that last result.
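
    For example, entering these lines one at a time in the Console:

    2 + 2;   // evaluates to 4
    $_ * 10; // $_ holds the last result, so this evaluates to 40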

    We now format pseudo-array-like objects as if they were arrays in the Console output. This makes a pseudo-array-like object easier to reason about and inspect, just like a real array. This feature was added by contributor Johan K. Jensen.

    Screenshot: a pseudo-array-like object formatted as an array in the Console output.

    WebIDE and Mobile

    WiFi debugging for Firefox OS has landed. WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable. We’ll discuss this feature in more detail in a future post.

    WebIDE gained support for Cordova-based projects. If you’re working on a mobile app project using Cordova, WebIDE now knows how to build the project for devices it supports without any extra configuration.

    Other Changes

    • Attribute changes only flash the changed attribute in the Markup View, instead of the whole element.
    • Canvas Debugger now supports setTimeout for animations.
    • Inline box model highlighting.
    • Browser Toolbox can now be opened from a shortcut: Cmd-Opt-Shift-I / Ctrl-Alt-Shift-I.
    • Network Monitor now shows the remote server’s IP address and port.
    • When an element is highlighted in the Inspector, you can now use the arrow keys to move the highlight: the left arrow key selects the current element’s parent, and the right arrow key selects its first child, or its next sibling if it has no children, or the next node in the tree if it has no siblings. This is especially useful when an element and its parent occupy the same space on the screen, making it difficult to select one of them using only the mouse.

    For an even more complete list, check out all 200 bugs resolved during the Firefox 39 development cycle.

    Thanks to all the new developers who made their first DevTools contribution this release:

    • Anush
    • Brandon Max
    • Geoffroy Planquart
    • Johan K. Jensen
    • John Giannakos
    • Mahdi Dibaiee
    • Nounours Heureux
    • Wickie Lee
    • Willian Gustavo Veiga

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice, or get in touch with the team at @FirefoxDevTools on Twitter.

  3. Trainspotting: Firefox 37, Developer Edition and More

    Welcome to Trainspotting, a new series on Mozilla Hacks designed to help the busy Web developer keep up with what’s new, what’s changed and what is coming soon in all of the Firefoxes, the Web platform, and the tools for building the Web!

    Mozilla develops Gecko and Firefox on a “train model” – we branch the code and ship a release on a time-based schedule (every six weeks). If a feature is not finished, it’s reverted or disabled and has to, as we say, ride the next train. This means we ship new features, performance improvements, and bug fixes to users every six weeks, instead of having to wait up to 18 months.

    Trainspotting will publish every six weeks when we branch the code – and we’ll point out any changes that might require action or cause compatibility issues, as well as new features that Web developers would want to take advantage of.

    Firefox 37

    For a detailed list of all noted changes in the release, check out the official release notes for Firefox 37. Below is a list of notable features, links to documentation, and anything that requires action by developers or site operators.

    Desktop Features

    • A subset of the Media Source Extensions (MSE) API has been implemented and is enabled to allow native playback of HTML5 video on websites such as YouTube. You can read about the various MSE APIs and their usage on MDN.
    • Bing search now uses HTTPS. Hurray for private and secure searching!
    • Heartbeat is a new feedback feature in Firefox desktop: Each day a random subset of Firefox users will be shown a notification bar with an opportunity to provide feedback on their experience. You can see screenshots and read more about how it works on the Heartbeat project page.

    Firefox for Android

    This time around, Firefox for Android is getting a security and stability release. The biggest user-facing changes are improved performance of file downloads and the addition of some new locales: Albanian, Burmese, Lower Sorbian, Songhai, Upper Sorbian, and Uzbek. Read the full list of security fixes for Firefox Android 37 and the full release notes.

    HTML5 & Web Platform

    There are a bunch of new Web platform features that you can now use in production content in Firefox 37.

    Keep reading the Firefox 37 for Developers article on MDN for a detailed look at all of them.

    Developer Tools

    If you’re using Firefox Developer Edition these additions won’t be news, but if you’ve been doing your development in the release build, there are a few new features to note.

    Security

    Firefox 37 has a bunch of security changes. Most of these do not require any action; however, if you are a site operator you should definitely look to see if any of these changes impact you. Big thanks to David Keeler, security engineer for Firefox, for his help deciphering this section of the release notes.

    • We removed support for DSA in certificates and TLS, because we found that almost nobody was using these. If you’re a site operator and your certificate was signed with a DSA algorithm, contact your CA and get a new certificate. You can check with `openssl x509 -in {certificate file} -text -noout` and search for “Signature Algorithm.” If you do have one of these certificates and do not change it, users will see an override-able error.
    • HTTP/2 AltSvc is temporarily disabled due to a bug. We implemented HTTP/2 AltSvc support for opportunistic encryption. This feature allows encryption over TLS for unauthenticated connections that would otherwise be clear text. Configuration is very simple, and Patrick McManus has written instructions for how to set it up on your server.
    • We have disabled insecure TLS version fallback. If a secure site isn’t working, you can try setting the “security.tls.version.fallback-limit” preference in about:config to 1 and see if it works then. If that fixes it, please file a Tech Evangelism bug, noting the URL of the site, so we can work with the operators to update it. Site operators should make sure their servers aren’t TLS-intolerant, which you can check with the SSL Labs tool.
    • Users can now report SSL connection problems for a variety of non-certificate-related errors. For example, if a user encounters a non-override-able TLS error, they can now send a report to Mozilla directly from the error page. The information in the report consists of the domain you were trying to reach, the certificates the server sent, the time, which error was encountered, and some user agent information. We use this information to work with site operators to fix their configurations, and to improve our software that detects these issues, so please do send reports. Check out a screenshot of what this looks like.
    • TLS False Start optimization now requires a cipher suite using AEAD construction. If you’re running a server and false start isn’t working as expected, try using an AEAD cipher suite. Learn more about AEAD at Wikipedia and in RFC 5288. The only AEAD cipher suite that Firefox supports at this time is AES-GCM.
    • We now log usage of weak ciphers to the web console. For example, if you visit a site with a SHA-1 certificate with the web console open you’ll now see a message like, “This site makes use of a SHA-1 Certificate; it’s recommended you use certificates with signature algorithms that use hash functions stronger than SHA-1. Learn more…” If you run a site and your certificate was signed with SHA-1, get a new certificate from your CA.

    Firefox Beta

    In six short weeks Beta will be available for general release, becoming Firefox 38. Here’s some of what’s coming your way:

    • All Firefox preferences are now in a new tab-based UI.
    • Improved page-load times with speculative connection warmup.
    • MSE support on Mac OS X.
    • EME support for encrypted media playback.
    • Web platform features: WebSockets in Web Workers, KeyboardEvent.code, and the BroadcastChannel API.

    Read the Firefox 38 Beta release notes for the full list.

    Firefox Developer Edition

    This release of Firefox Developer Edition (which will be Firefox 39) is frankly kind of ridiculous. The developer tools are getting precariously close to something indistinguishable from magic. Huge props to the developer tools team for what you’re about to see.

    • Developer Tools: Wi-Fi debugging of Firefox OS devices from WebIDE, drag and drop of nodes in the Inspector’s markup view, Web Console input history persistence, localhost works with WebSocket connections when you’re offline, and the cubic bezier tooltip now shows a gallery of pre-sets you can choose from to make your CSS animations super slick.
    • Web platform features include the “switch” role from ARIA 1.1, CSS scroll snap points, the Cache API, the Fetch API, <link rel="preconnect">, and more.

    Read the full release notes for this release of Firefox Developer Edition.

    Nightly

    The nightly branch of Firefox is where a lot of features are in active development. It’s a place where you can test experimental Web APIs and see user-facing browser features that are not yet ready for hundreds of millions of users. You might see a crash or two, you might lose your session data, but you also might experience the vision and wonder of what the future holds. It’s the Mos Eisley of browsers – a dangerous place, but we stick around for the action.

    There are a number of features that have landed and are shipping in Nightly builds, either enabled by default or behind a pref:

    • E10s – Web content runs in a separate process from the browser UI.
    • Partial implementation of Service Workers.
    • Shumway – Flash implemented in JavaScript is enabled for some sites.

    Thus concludes the first edition of Trainspotting. Let us know what you think in the comments, and what you’d like to see more of!

  4. Peering Through the WebRTC Fog with SocketPeer

    WebRTC allows browsers to do things they never could before, but a soup of unfamiliar terminology and the complexity of the API makes for a steep learning curve. After spending several weeks neck-deep in example code and cargo-culting several libraries, I have emerged with a workable understanding and a nifty library that helps hide some of the complexity of WebRTC for its simplest use case of two-way peer-to-peer communication.

    But before I start talking about the library, it helps to understand a bit of what it abstracts away.

    Alice and Bob (and ICE and NAT and STUN and SDP)

    When two devices want to speak to each other, we need a way for them to exchange contact information such as their IP addresses. In the same way DNS servers help browsers locate remote machines, we can set up a mutually known server to help potential peers locate each other. In WebRTC this is known as a signalling server.

    In our simple case, each peer contacts the signalling server. The server provides the peers with each other’s addresses, and then the peers can begin exchanging data.

    The plot thickens

    Today’s Internet doesn’t quite work that way. Most computers online don’t have unique IP addresses; they share a public address with all the other devices on their local network. Most local networks also have firewalls that protect machines from unwanted inbound traffic. Swapping IP addresses isn’t enough.

    ICE the wound

    The telecommunications industry has been dealing with this problem for a while now*. There are a number of different ways a connection can pass back through a firewall and router to an internal network. ICE is a kitchen-sink protocol that uses the tried-and-true technique “Try Everything and See What Works™.” A server, called a STUN server, uses the ICE protocol to find out and inform a peer of all the possible ways it can be reached from the public Internet. These potential connection methods are called ICE candidates.

    When a machine receives a candidate, it needs to communicate that to its peer. Here the signalling server comes to the rescue again — relaying each peer’s candidates to the other.

    Call and response

    When a device wants to establish a connection, it generates a message called an offer and sends it to a peer using the signalling server. The offer, encoded in the SDP format, consists of information about the device — such as which protocols it knows how to speak and which communication methods it has. When the remote peer receives the offer, it responds with its own SDP message called an answer. Then, the two potential peers send each other possible ICE candidates. Once the peers find a connection method that works for both of them, the connection is established!

    Let’s take it from the top

    Let’s review the connection process, incorporating all the techniques from above (a code sketch follows the list):

    1. A device uses a signalling server to relay an SDP offer to a potential peer. In the meantime, it asks an ICE server to find out how it can be reached from the public Internet.
    2. The potential peer replies via the signalling server with an SDP answer of its own, and also gathers ICE candidates.
    3. The peers use the signalling server to exchange ICE candidates, attempting to use them to open a connection.
    4. Once each peer finds a working candidate, the connection is established.
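
    Here is a minimal sketch of the offering side of that handshake, using the standard RTCPeerConnection API. The signal() function and the STUN server URL are hypothetical stand-ins for whatever your signalling server provides:

    var pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.example.org" }] // hypothetical STUN server
    });

    // Step 1: create an SDP offer and relay it through the signalling server.
    pc.createOffer().then(function (offer) {
      return pc.setLocalDescription(offer);
    }).then(function () {
      signal({ sdp: pc.localDescription }); // signal() stands in for your relay
    });

    // Step 3: relay each ICE candidate to the peer as it is discovered.
    pc.onicecandidate = function (event) {
      if (event.candidate) {
        signal({ candidate: event.candidate });
      }
    };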

    Let’s give them something to talk about

    Once a PeerConnection is established, it can be used to carry audio and video MediaStreams as well as a more general DataChannel. People are excited about browser-to-browser video conferencing (with good reason!), but WebRTC isn’t just about voice and video. RTCDataChannel provides a general data pipeline that works similarly to a WebSocket. It’s immensely powerful and useful, but under-promoted.
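
    Here is a minimal sketch, assuming pc is an already-established RTCPeerConnection:

    var channel = pc.createDataChannel("chat");
    channel.onopen = function () {
      channel.send("Hello, peer!"); // works much like a WebSocket
    };
    channel.onmessage = function (event) {
      console.log("Received:", event.data);
    };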

    Boilerplate the ocean

    WebRTC is a low level, nuts-and-bolts API. It’s made of multiple small, composable APIs that offer a large amount of flexibility and extensibility. The downside of this is that accomplishing even a basic task can require significant boilerplate code. The notion is that JS libraries and frameworks will step in to abstract away this complexity, and indeed, there are a large number of WebRTC libraries out there already.

    Many of these libraries fall short because they lack a signalling server, or a fallback for browsers without WebRTC support. To address this, Chris Van and I built SocketPeer.

    Using SocketPeer

    SocketPeer is, as its name suggests, a combination of WebSockets and RTCPeerConnection. This node.js library abstracts away the common pattern of using WebSockets as a signalling server to instantiate a DataChannel over WebRTC. The WebSocket server will also serve as a fallback message relay if the peers cannot establish a peer-to-peer connection.

    Server code

    // In your server module alongside existing application code.
    var SocketPeer = require('socketpeer');
    new SocketPeer();

    I’ll admit that’s a bit facile, but SocketPeer is designed to simplify the process of running a signalling server — at its simplest, it does everything for you. The server is fairly configurable, but by default it takes care of itself, by:

    • Creating an http.Server if one is not supplied.
    • Attaching logic to serve the client-side socketpeer.js library.
    • Starting a WebSocket server if one is not supplied.
    • Starting to listen on a default port.

    If all you need is a basic signalling server, then this is all the code you need. But let’s say you want to integrate SocketPeer with an existing Express app:

    // `app` is an existing Express Application
    var server = http.createServer(app);
    new SocketPeer({
      httpServer: server
    });
    server.listen(/* port */);

    Client code

    As described above, when SocketPeer is running on a server it will serve the client library at a known URL. By default, this can be found at /socketpeer/socketpeer.js. The client library exposes an event interface and a series of events to keep you informed of the state of the connection. The three most important events are:

    • connect – Fires when peers are paired. Data can be exchanged using WebSockets at this point.
    • upgrade – Fires when an RTCPeerConnection is established. Data can now be exchanged peer-to-peer.
    • data – Fires when data arrives from the remote peer.

    The last important piece of information is called pairCode. This is an arbitrary string used by the two ends of a peer-to-peer connection to identify each other. It can be any mutually agreed-upon value.

    var peer = new SocketPeer({
      pairCode: 'foobar'
    });
     
    peer.on('connect', function () {
      // connected via WebSockets
      peer.send('Hello via WebSockets.');
    });
     
    peer.on('upgrade', function () {
      // now using RTCPeerConnection
      peer.send('Hello via WebRTC!');
    });
     
    peer.on('data', function (data) {
      // do something awesome
    });

    Chat is the new todo

    What’s sample code without a working demo? I built a super simple example of peer-to-peer chat using SocketPeer. Try it out, and check out the code on GitHub to see how it works.

    Depiction of the SocketPeer demo chat app.

    The library is still a work in progress, but all the core functionality is in place and tested. Please try it out and let us know of any issues or feedback on the API.

    * The author of this post is not a telecommunications engineer. He read a lot of Wikipedia and skimmed a few RFCs so you don’t have to.

  5. What do you want from your DevTools?

    Every tool starts with an idea: “if I had this, then I could do that faster, better, or more easily.” The Firefox Developer Tools are built atop hundreds of those ideas, and many of them come from people in the web development community just like you.

    We collect those ideas in our UserVoice forum, and use your votes to help prioritize our work.

    Last April, @hopefulcyborg requested the ability to “Connect Firefox Developer Tools to [Insert Browser Here].” The following November, we launched Firefox Developer Edition, which includes a preliminary version of just that: the ability to debug Chrome and Safari, even on mobile, from Firefox on your desktop. We called it Valence.

    Good ideas don’t have to be big or even particularly innovative as long as they help you accomplish your goals more effectively. For instance, features like the color picker below came directly from feedback on our UserVoice forum:

    The color picker introduced in Firefox 31

    Though somewhat small in scope, each of those ideas made the DevTools more efficient, more helpful, and more usable. And that’s our mission: we make Firefox for you, and we want you to know that we’re listening. If you have ideas for the Firefox Developer Tools, submit them on UserVoice. If you don’t have ideas, vote on the ones that are already there. The right people will see it, and it will help us build the best tools for you.

    If you want to go a step further and contribute to the tools yourself, we’re always happy to help people get involved. After all, Mozilla was founded on the open source ideal of empowering communities to collaborate and improve the Internet for all of us.

    These are your tools. How could they be better?

  6. Understanding the CSS box model for inline elements

    In a web page, every element is rendered as a rectangular box. The box model describes how the element’s content, padding, border, and margin determine the space occupied by the element and its relation to other elements in the page.

    Depending on the element’s display property, its box may be one of two types: a block box or an inline box. The box model is applied differently to these two types. In this article we’ll see how the box model is applied to inline boxes, and how a new feature in the Firefox Developer Tools can help you visualize the box model for inline elements.

    Inline boxes and line boxes

    Inline boxes are laid out horizontally in a box called a line box:

    Diagram: two inline boxes laid out horizontally inside a line box.

    If there isn’t enough horizontal space to fit all elements into a single line, another line box is created under the first one. A single inline element may then be split across lines:

    Diagram: an inline element split across two line boxes.

    The box model for inline boxes

    When an inline box is split across more than one line, it’s still logically a single box. This means that any horizontal padding, border, or margin is only applied to the start of the first line occupied by the box, and the end of the last line.

    For example, in the screenshot below, the highlighted span is split across 2 lines. Its horizontal padding doesn’t apply to the end of the first line or the beginning of the second line:

    Screenshot: horizontal padding applied only at the start of the first line and the end of the last line.

    Also, any vertical padding, border, or margin applied to an element will not push away elements above or below it:

    Screenshot: vertical padding does not push away the elements above or below.

    However, note that vertical padding and border are still applied, and the padding still pushes out the border:

    Screenshot: vertical padding and border are still applied, with the padding pushing out the border.

    If you need to adjust the height between lines, use line-height instead.

    Inspecting inline elements with devtools

    When debugging layout issues, it’s important to have tools that give you the complete picture. One of these tools is the highlighter, which all browsers include in their devtools. You can use it to select elements for closer inspection, but it also gives you information about layout.

    Screenshot: the highlighter displaying layout information for an inline element split across several lines.

    In the example above, the highlighter in Firefox 39 is used to highlight an inline element split across several lines.

    The highlighter displays guides to help you align elements, gives the complete dimensions of the node, and displays an overlay showing the box model.

    From Firefox 39 onwards, the box model overlay for split inline elements shows each individual line occupied by the element. In this example the content region is displayed in light blue and is split over four lines. A right padding is also defined for the node, and the highlighter shows the padding region in purple.

    Highlighting each individual line box is important for understanding how the box model is applied to inline elements, and also helps you check the space between lines and the vertical alignment of inline boxes.

     

  7. Using the Firefox DevTools to Debug fetch() on GitHub

    Firefox Nightly recently added preliminary support for Fetch, a modern, Promise-based replacement for XMLHttpRequest (XHR). Our initial work supported most of the Fetch Specification, but not quite all of it. Specifically, when Fetch first appeared in Nightly, we hadn’t yet implemented serializing and de-serializing of FormData objects.

    GitHub was already using Fetch in production with a home-grown polyfill, and required support for serializing FormData in order to upload images to GitHub Issues. Thus, when our early, incomplete implementation of Fetch landed in Nightly, the GitHub polyfill stepped out of the way, and image uploads from Firefox broke.

    In the 15-minute video below, Dan Callahan shows a real-world instance of using the Firefox Developer Tools to help find, file, and fix Bug 1143857: “Fetch does not serialize FormData body; breaks GitHub.” This isn’t a canned presentation, but rather a comprehensive, practical demonstration of actually debugging minified JavaScript and broken event handlers using the Firefox DevTools, reporting a Gecko bug in Bugzilla, and ultimately testing a patched build of Firefox.

    Use the following links to jump to a specific section of the video on YouTube:

    • 0:13 – The error
    • 0:50 – Using the Network Panel
    • 1:30 – Editing and Resending HTTP Requests
    • 2:02 – Hypothesis: FormData was coerced to a String, not serialized
    • 2:40 – Prettifying minified JavaScript
    • 3:10 – Setting breakpoints on event handlers
    • 4:57 – Navigating the call stack
    • 7:54 – Setting breakpoints on lines
    • 8:56 – GitHub’s FormData constructor
    • 10:48 – Invoking fetch()
    • 11:53 – Verifying the bug by testing fetch() on another domain
    • 12:52 – Checking the docs for fetch()
    • 13:42 – Filing a Gecko bug in Bugzilla
    • 14:42 – The lifecycle of Bug 1143857: New, Duplicate, Reopened, Resolved
    • 15:41 – Verifying a fixed build of Firefox

    We expect Firefox Developer Edition version 39 to ship later this month with full support for the Fetch API.

  8. This API is so Fetching!

    For more than a decade the Web has used XMLHttpRequest (XHR) to achieve asynchronous requests in JavaScript. While very useful, XHR is not a very nice API. It suffers from lack of separation of concerns. The input, output and state are all managed by interacting with one object, and state is tracked using events. Also, the event-based model doesn’t play well with JavaScript’s recent focus on Promise- and generator-based asynchronous programming.

    The Fetch API intends to fix most of these problems. It does this by introducing the same primitives to JS that are used in the HTTP protocol. In addition, it introduces a utility function fetch() that succinctly captures the intention of retrieving a resource from the network.

    The Fetch specification, which defines the API, nails down the semantics of a user agent fetching a resource. This, combined with ServiceWorkers, is an attempt to:

    1. Improve the offline experience.
    2. Expose the building blocks of the Web to the platform as part of the extensible web movement.

    As of this writing, the Fetch API is available in Firefox 39 (currently Nightly) and Chrome 42 (currently dev). GitHub has a Fetch polyfill.

    Feature detection

    Fetch API support can be detected by checking for Headers, Request, Response or fetch on the window or worker scope.
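
    For example, a minimal check that works in both window and worker scopes (self refers to either):

    if (self.fetch) {
      // The Fetch API is available; use it.
    } else {
      // Fall back to XHR or load a polyfill.
    }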

    Simple fetching

    The most useful, high-level part of the Fetch API is the fetch() function. In its simplest form it takes a URL and returns a promise that resolves to the response. The response is captured as a Response object.

    fetch("/data.json").then(function(res) {
      // res instanceof Response == true.
      if (res.ok) {
        res.json().then(function(data) {
          console.log(data.entries);
        });
      } else {
        console.log("Looks like the response wasn't perfect, got status", res.status);
      }
    }, function(e) {
      console.log("Fetch failed!", e);
    });

    Submitting some form parameters would look like this:

    fetch("http://www.example.org/submit.php", {
      method: "POST",
      headers: {
        "Content-Type": "application/x-www-form-urlencoded"
      },
      body: "firstName=Nikhil&favColor=blue&password=easytoguess"
    }).then(function(res) {
      if (res.ok) {
        alert("Perfect! Your settings are saved.");
      } else if (res.status == 401) {
        alert("Oops! You are not authorized.");
      }
    }, function(e) {
      alert("Error submitting form!");
    });

    The fetch() function’s arguments are the same as those passed to the
    Request() constructor, so you may directly pass arbitrarily complex requests to fetch() as discussed below.

    Headers

    Fetch introduces 3 interfaces. These are Headers, Request and
    Response. They map directly to the underlying HTTP concepts, but have
    certain visibility filters in place for privacy and security reasons, such as
    supporting CORS rules and ensuring cookies aren’t readable by third parties.

    The Headers interface is a simple multi-map of names to values:

    var content = "Hello World";
    var reqHeaders = new Headers();
    reqHeaders.append("Content-Type", "text/plain"
    reqHeaders.append("Content-Length", content.length.toString());
    reqHeaders.append("X-Custom-Header", "ProcessThisImmediately");

    The same can be achieved by passing an array of arrays or a JS object literal
    to the constructor:

    reqHeaders = new Headers({
      "Content-Type": "text/plain",
      "Content-Length": content.length.toString(),
      "X-Custom-Header": "ProcessThisImmediately",
    });

    The contents can be queried and retrieved:

    console.log(reqHeaders.has("Content-Type")); // true
    console.log(reqHeaders.has("Set-Cookie")); // false
    reqHeaders.set("Content-Type", "text/html");
    reqHeaders.append("X-Custom-Header", "AnotherValue");
     
    console.log(reqHeaders.get("Content-Length")); // 11
    console.log(reqHeaders.getAll("X-Custom-Header")); // ["ProcessThisImmediately", "AnotherValue"]
     
    reqHeaders.delete("X-Custom-Header");
    console.log(reqHeaders.getAll("X-Custom-Header")); // []

    Some of these operations are only useful in ServiceWorkers, but they provide
    a much nicer API to Headers.

    Since Headers can be sent in requests, or received in responses, and have various limitations about what information can and should be mutable, Headers objects have a guard property. This is not exposed to the Web, but it affects which mutation operations are allowed on the Headers object.
    Possible values are:

    • “none”: default.
    • “request”: guard for a Headers object obtained from a Request (Request.headers).
    • “request-no-cors”: guard for a Headers object obtained from a Request created
      with mode “no-cors”.
    • “response”: naturally, for Headers obtained from Response (Response.headers).
    • “immutable”: Mostly used for ServiceWorkers, renders a Headers object
      read-only.

    The details of how each guard affects the behaviors of the Headers object are
    in the specification. For example, you may not append or set a “request” guarded Headers’ “Content-Length” header. Similarly, inserting “Set-Cookie” into a Response header is not allowed so that ServiceWorkers may not set cookies via synthesized Responses.

    All of the Headers methods throw TypeError if name is not a valid HTTP Header name. The mutation operations will throw TypeError if there is an immutable guard. Otherwise they fail silently. For example:

    var res = Response.error();
    try {
      res.headers.set("Origin", "http://mybank.com");
    } catch(e) {
      console.log("Cannot pretend to be a bank!");
    }

    Request

    The Request interface defines a request to fetch a resource over HTTP. URL, method and headers are expected, but the Request also allows specifying a body, a request mode, credentials and cache hints.

    The simplest Request is, of course, just a URL, as you may do to GET a resource.

    var req = new Request("/index.html");
    console.log(req.method); // "GET"
    console.log(req.url); // "http://example.com/index.html"

    You may also pass a Request to the Request() constructor to create a copy.
    (This is not the same as calling the clone() method, which is covered in
    the “Streams and cloning” section.)

    var copy = new Request(req);
    console.log(copy.method); // "GET"
    console.log(copy.url); // "http://example.com/index.html"

    Again, this form is probably only useful in ServiceWorkers.

    The non-URL attributes of the Request can only be set by passing initial
    values as a second argument to the constructor. This argument is a dictionary.

    var uploadReq = new Request("/uploadImage", {
      method: "POST",
      headers: {
        "Content-Type": "image/png",
      },
      body: "image data"
    });

    The Request’s mode is used to determine if cross-origin requests lead to valid responses, and which properties on the response are readable. Legal mode values are "same-origin", "no-cors" (default) and "cors".

    The "same-origin" mode is simple, if a request is made to another origin with this mode set, the result is simply an error. You could use this to ensure that
    a request is always being made to your origin.

    var arbitraryUrl = document.getElementById("url-input").value;
    fetch(arbitraryUrl, { mode: "same-origin" }).then(function(res) {
      console.log("Response succeeded?", res.ok);
    }, function(e) {
      console.log("Please enter a same-origin URL!");
    });

    The "no-cors" mode captures what the web platform does by default for scripts you import from CDNs, images hosted on other domains, and so on. First, it prevents the method from being anything other than “HEAD”, “GET” or “POST”. Second, if any ServiceWorkers intercept these requests, they may not add or override any headers except for these. Third, JavaScript may not access any properties of the resulting Response. This ensures that ServiceWorkers do not affect the semantics of the Web and prevents security and privacy issues that could arise from leaking data across domains.

    "cors" mode is what you’ll usually use to make known cross-origin requests to access various APIs offered by other vendors. These are expected to adhere to
    the CORS protocol. Only a limited set of headers is exposed in the Response, but the body is readable. For example, you could get a list of Flickr’s most interesting photos today like this:

    var u = new URLSearchParams();
    u.append('method', 'flickr.interestingness.getList');
    u.append('api_key', '<insert api key here>');
    u.append('format', 'json');
    u.append('nojsoncallback', '1');
     
    var apiCall = fetch('https://api.flickr.com/services/rest?' + u);
     
    apiCall.then(function(response) {
      return response.json().then(function(json) {
        // photo is a list of photos.
        return json.photos.photo;
      });
    }).then(function(photos) {
      photos.forEach(function(photo) {
        console.log(photo.title);
      });
    });

    You may not read out the “Date” header since Flickr does not allow it via
    Access-Control-Expose-Headers.

    response.headers.get("Date"); // null

    The credentials enumeration determines if cookies for the other domain are
    sent with cross-origin requests. This is similar to XHR’s withCredentials
    flag, but tri-valued as "omit" (default), "same-origin" and "include".
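
    For example, here is a sketch of a cross-origin request that sends cookies (the URL is hypothetical):

    fetch("https://api.example.org/profile", {
      mode: "cors",
      credentials: "include" // send cookies for api.example.org with the request
    }).then(function(res) {
      console.log("Authenticated?", res.ok);
    });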

    The Request object will also give the ability to offer caching hints to the user-agent. This is currently undergoing some security review. Firefox exposes the attribute, but it has no effect.

    Requests have two read-only attributes that are relevant to ServiceWorkers
    intercepting them. There is the string referrer, which is set by the UA to be
    the referrer of the Request. This may be an empty string. The other is
    context, which is a rather large enumeration defining what sort of resource is being fetched. This could be “image” if the request is from an <img> tag in the controlled document, “worker” if it is an attempt to load a worker script, and so on. When used with the fetch() function, it is “fetch”.

    Response

    Response instances are returned by calls to fetch(). They can also be created by JS, but this is only useful in ServiceWorkers.

    We have already seen some attributes of Response when we looked at fetch(). The most obvious candidates are status, an integer (default value 200) and statusText (default value “OK”), which correspond to the HTTP status code and reason. The ok attribute is just a shorthand for checking that status is in the range 200-299 inclusive.

    headers is the Response’s Headers object, with guard “response”. The url attribute reflects the URL of the corresponding request.

    Response also has a type, which is “basic”, “cors”, “default”, “error” or
    “opaque”.

    • "basic": normal, same origin response, with all headers exposed except
      “Set-Cookie” and “Set-Cookie2″.
    • "cors": response was received from a valid cross-origin request. Certain headers and the body may be accessed.
    • "error": network error. No useful information describing the error is available. The Response’s status is 0, headers are empty and immutable. This is the type for a Response obtained from Response.error().
    • "opaque": response for “no-cors” request to cross-origin resource. Severely
      restricted

    The “error” type results in the fetch() Promise rejecting with TypeError.

    There are certain attributes that are useful only in a ServiceWorker scope. The
    idiomatic way to return a Response to an intercepted request in ServiceWorkers is:

    addEventListener('fetch', function(event) {
      event.respondWith(new Response("Response body", {
        headers: { "Content-Type" : "text/plain" }
      }));
    });

    As you can see, Response has a two-argument constructor, where both arguments are optional. The first argument is a body initializer, and the second is a dictionary to set the status, statusText and headers.

    The static method Response.error() simply returns an error response. Similarly, Response.redirect(url, status) returns a Response resulting in
    a redirect to url.

    Dealing with bodies

    Both Requests and Responses may contain body data. We’ve been glossing over it because of the various data types body may contain, but we will cover it in detail now.

    A body is an instance of any of the following types: ArrayBuffer, ArrayBufferView (Uint8Array and friends), Blob/File, string, URLSearchParams, or FormData.

    In addition, Request and Response both offer the following methods to extract their body. These all return a Promise that is eventually resolved with the actual content.

    • arrayBuffer()
    • blob()
    • json()
    • text()
    • formData()

    This is a significant improvement over XHR in terms of ease of use of non-text data!

    Request bodies can be set by passing body parameters:

    var form = new FormData(document.getElementById('login-form'));
    fetch("/login", {
      method: "POST",
      body: form
    })

    Responses take the first argument as the body.

    var res = new Response(new File(["chunk", "chunk"], "archive.zip",
                           { type: "application/zip" }));

    Both Request and Response (and by extension the fetch() function) will try to intelligently determine the content type. Request will also automatically set a “Content-Type” header if none is set in the dictionary.

    Streams and cloning

    It is important to realise that Request and Response bodies can only be read once! Both interfaces have a boolean attribute bodyUsed to determine if it is safe to read or not.

    var res = new Response("one time use");
    console.log(res.bodyUsed); // false
    res.text().then(function(v) {
      console.log(res.bodyUsed); // true
    });
    console.log(res.bodyUsed); // true
     
    res.text().catch(function(e) {
      console.log("Tried to read already consumed Response");
    });

    This decision allows easing the transition to an eventual stream-based Fetch API. The intention is to let applications consume data as it arrives, allowing for JavaScript to deal with larger files like videos, and perform things like compression and editing on the fly.

    Often, you’ll want access to the body multiple times. For example, you can use the upcoming Cache API to store Requests and Responses for offline use, and Cache requires bodies to be available for reading.

    So how do you read out the body multiple times within such constraints? The API provides a clone() method on the two interfaces. This will return a clone of the object, with a ‘new’ body. clone() MUST be called before the body of the corresponding object has been used. That is, clone() first, read later.

    addEventListener('fetch', function(evt) {
      var sheep = new Response("Dolly");
      console.log(sheep.bodyUsed); // false
      var clone = sheep.clone();
      console.log(clone.bodyUsed); // false
     
      clone.text();
      console.log(sheep.bodyUsed); // false
      console.log(clone.bodyUsed); // true
     
      // `cache` is assumed to be an open Cache from the upcoming Cache API.
      evt.respondWith(cache.add(sheep.clone()).then(function(e) {
        return sheep;
      }));
    });

    Future improvements

    Along with the transition to streams, Fetch will eventually have the ability to abort running fetch()es and some way to report the progress of a fetch. These are provided by XHR, but are a little tricky to fit in the Promise-based nature of the Fetch API.

    You can contribute to the evolution of this API by participating in discussions on the WHATWG mailing list and in the issues in the Fetch and ServiceWorker specifications.

    For a better web!

    The author would like to thank Andrea Marchesini, Anne van Kesteren and Ben
    Kelly for helping with the specification and implementation.

  9. asm.js Speedups Everywhere

    asm.js is an easy-to-optimize subset of JavaScript. It runs in all browsers without plugins, and is a good target for porting C/C++ codebases such as game engines – which have in fact been the biggest adopters of this approach, for example Unity 3D and Unreal Engine.
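
    To give a flavor of the subset, here is a minimal hand-written asm.js module; real-world asm.js is almost always generated by a compiler such as Emscripten rather than written by hand:

    function MyModule() {
      "use asm";
      function add(a, b) {
        a = a | 0;          // the |0 annotations declare 32-bit integer types
        b = b | 0;
        return (a + b) | 0; // so the engine can compile straight to integer arithmetic
      }
      return { add: add };
    }
    console.log(MyModule().add(2, 3)); // 5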

    Obviously, developers porting games using asm.js would like them to run well across all browsers. However, each browser has different performance characteristics, because each has a different JavaScript engine, different graphics implementation, and so forth. In this post, we’ll focus on JavaScript execution speed and see the significant progress towards fast asm.js execution that has been happening across the board. Let’s go over each of the four major browsers now.

    Chrome

    Already in 2013, Google released Octane 2.0, a new version of their primary JavaScript benchmark suite, which contained a new asm.js benchmark, zlib. Benchmarks define what browsers optimize: things that matter are included in benchmarks, and browsers then compete to achieve the best scores. Therefore, adding an asm.js benchmark to Octane clearly signaled Google’s belief that asm.js content is important to optimize for.

    A further major development happened more recently, when Google landed TurboFan, a new work-in-progress optimizing compiler for Chrome’s JavaScript engine, V8. TurboFan has a “sea of nodes” architecture (which is new in the JavaScript space, and has been used very successfully elsewhere, for example in the Java server virtual machine), and aims to reach even higher speeds than Crankshaft, the first optimizing compiler for V8.

    While TurboFan is not yet ready to be enabled on all JavaScript content, as of Chrome 41 it is enabled on asm.js. Getting the benefits of TurboFan early on asm.js shows the importance of optimizing asm.js for the Chrome team. And the benefits can be quite substantial: For example, TurboFan speeds up Emscripten‘s zlib benchmark by 13%, and fasta by 24%.

    Safari

    During the last year, Safari’s JavaScript Engine, JavaScriptCore, introduced a new JIT (Just In Time compiler) called FTL. FTL stands for “Fourth Tier LLVM,” as it adds a fourth level of optimization above the three previously-existing ones, and it is based on LLVM, a powerful open source compiler framework. This is exciting because LLVM is a top-tier general-purpose compiler, with many years of optimizations put into it, and Safari gets to reuse all those efforts. As shown in the blogposts linked to earlier, the speedups that FTL provides can be very substantial.

    Another interesting development from Apple this year was the introduction of a new JavaScript benchmark, JetStream. JetStream contains several asm.js benchmarks, an indication that Apple believes asm.js content is important to optimize for, just as when Google added an asm.js benchmark to Octane.

    Internet Explorer

    The JavaScript engine inside Internet Explorer is named Chakra. Last year, the Chakra team blogged about a suite of optimizations coming to IE in Windows 10 and pointed to significant improvements in the scores on asm.js workloads in Octane and JetStream. This is yet another example of how having asm.js workloads in common benchmarks drives measurement and optimization.

    The big news, however, is the recent announcement by the Chakra team that they are working on adding specific asm.js optimizations, to arrive in Windows 10 together with the other optimizations mentioned earlier. These optimizations haven’t made it to the Preview channel yet, so we can’t measure and report on them here. However, we can speculate on the improvements based on the initial impact of landing asm.js optimizations in Firefox. As shown in this benchmark comparisons slide containing measurements from right after the landing, asm.js optimizations immediately brought Firefox to around 2x slower than native performance (from 5-12x native before). Why should these wins translate to Chakra? Because, as explained in our previous post, the asm.js spec provides a predictable way to validate asm.js code and generate high-quality code based on the results.

    So, here’s looking forward to good asm.js performance in Windows 10!

    Firefox

    As we mentioned before, the initial landing of asm.js optimizations in Firefox generally put Firefox within 2x of native in terms of raw throughput. By the end of 2013, we were able to report that the gap had shrunk to around 1.5x native – which is close to the amount of variability that different native compilers have between each other anyhow, so comparisons to “native speed” start to be less meaningful.

    At a high level, this progress comes from two kinds of improvements: compiler backend optimizations and new JavaScript features. In the area of compiler backend optimizations, there has been a stream of tiny wins (specific to particular code patterns or hardware), making it difficult to point to any one thing. Two significant improvements stand out, though.

    Along with backend optimization work, two new JavaScript features have been incorporated into asm.js which unlock new performance capabilities in the hardware. The first feature, Math.fround, may look simple but it enables the compiler backend to generate single-precision floating-point arithmetic when used carefully in JS. As described in this post, the switch can result in anywhere from a 5% – 60% speedup, depending on the workload. The second feature is much bigger: SIMD.js. This is still a stage 1 proposal for ES7 so the new SIMD operations and the associated asm.js extensions are only available in Firefox Nightly. Initial results are promising though.
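
    As an illustration of the first feature, here is a minimal sketch of how Math.fround marks values as single-precision:

    var fround = Math.fround;
    function addF32(x, y) {
      x = fround(x); // coerce the inputs to float32
      y = fround(y);
      return fround(x + y); // rounding the result lets the engine use single-precision ops
    }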

    Separate from all these throughput optimizations, there have also been a set of load time optimizations in Firefox: off-main-thread and parallel compilation of asm.js code as well as caching of the compiled machine code. As described in this post, these optimizations significantly improve the experience of starting a Unity- or Epic-sized asm.js application. Existing asm.js workloads in the benchmarks mentioned above do not test this aspect of asm.js performance so we put together a new benchmark suite named Massive that does. Looking at Firefox’s Massive score over time, we can see the load-time optimizations contributing to a more than 6x improvement (more details in the Hacks post introducing the Massive benchmark).

    The Bottom Line

    What is most important, in the end, are not the underlying implementation details, nor even specific performance numbers on this benchmark or that. What really matters is that applications run well. The best way to check that is to actually run real-world games! A nice example of an asm.js-using game is Dead Trigger 2, a Unity 3D game:

    The video shows the game running on Firefox, but as it uses only standard web APIs, it should work in any browser. We tried it now, and it renders quite smoothly on Firefox, Chrome and Safari. We are looking forward to testing it on the next Preview version of Internet Explorer as well.

    Another example is Cloud Raiders:

    As with Unity, the developers of Cloud Raiders were able to compile their existing C++ codebase (using Emscripten) to run on the web without relying on plugins. The result runs well in all four of the major browsers.

    In conclusion, asm.js performance has made great strides over the last year. There is still room for improvement – sometimes performance is not perfect, or a particular API is missing, in one browser or another – but all major browsers are working to make sure that asm.js runs quickly. We can see that by looking at the benchmarks they are optimizing on, which contain asm.js, and in the new improvements they are implementing in their JavaScript engines, which are often motivated by asm.js. As a result, games that not long ago would have required plugins are quickly getting to the point where they can run well without them, in modern browsers across the web.