Mozilla

Articles

  1. Using the Firefox DevTools to Debug fetch() on GitHub

    Firefox Nightly recently added preliminary support for Fetch, a modern, Promise-based replacement for XMLHttpRequest (XHR). Our initial work supported most of the Fetch Specification, but not quite all of it. Specifically, when Fetch first appeared in Nightly, we hadn’t yet implemented serializing and de-serializing of FormData objects.

    GitHub was already using Fetch in production with a home-grown polyfill, and required support for serializing FormData in order to upload images to GitHub Issues. Thus, when our early, incomplete implementation of Fetch landed in Nightly, the GitHub polyfill stepped out of the way, and image uploads from Firefox broke.

    In the 15-minute video below, Dan Callahan shows a real-world instance of using the Firefox Developer Tools to help find, file, and fix Bug 1143857: “Fetch does not serialize FormData body; breaks GitHub.” This isn’t a canned presentation, but rather a comprehensive, practical demonstration of actually debugging minified JavaScript and broken event handlers using the Firefox DevTools, reporting a Gecko bug in Bugzilla, and ultimately testing a patched build of Firefox.

    Use the following links to jump to a specific section of the video on YouTube:

    • 0:13 – The error
    • 0:50 – Using the Network Panel
    • 1:30 – Editing and Resending HTTP Requests
    • 2:02 – Hypothesis: FormData was coerced to a String, not serialized
    • 2:40 – Prettifying minified JavaScript
    • 3:10 – Setting breakpoints on event handlers
    • 4:57 – Navigating the call stack
    • 7:54 – Setting breakpoints on lines
    • 8:56 – GitHub’s FormData constructor
    • 10:48 – Invoking fetch()
    • 11:53 – Verifying the bug by testing fetch() on another domain
    • 12:52 – Checking the docs for fetch()
    • 13:42 – Filing a Gecko bug in Bugzilla
    • 14:42 – The lifecycle of Bug 1143857: New, Duplicate, Reopened, Resolved
    • 15:41 – Verifying a fixed build of Firefox

    We expect Firefox Developer Edition version 39 to ship later this month with full support for the Fetch API.

  2. An analytics primer for developers

    There are three kinds of lies: lies, damned lies, and statistics – Mark Twain

    Deciding what to track (all the things)

    When you are adding analytics to a system, you should try to log everything. If at some point in the future you need to pull information out of the system, it’s much better to have every piece of information to hand than to realise that you need some data you don’t yet track. Here are some guidelines and suggestions for collecting and analysing information about how people interact with your website or app.

    Grouping your stats as a best practice

    Most analytics platforms allow you to tag an event with metadata. This lets you analyse stats against each other and makes it easier to compare elements in a user interaction.

    For example, if you are logging clicks on a menu, you could track each menu item differently, e.g.:

    track("Home pressed");
    track("Cart pressed");
    track("Logout pressed");

    Doing this makes it harder to answer questions such as which button is the most popular. Using metadata, you can make most analytics platforms perform calculations like this for you:

    track("Menu pressed","Home button");
    track("Menu pressed","Cart button");
    track("Menu pressed","Logout button");

    The analytics above mean you now have a total of all menu presses, and you can find the most/least popular of the menu items with no extra effort.

    Optimising your funnel

    A conversion funnel is a term of art derived from a consumer marketing model. The metaphor of the funnel describes the flow of steps a user goes through as they engage more deeply with your software. Imagine you want to know how many users clicked log in and then paid at the checkout? If you track events such as “Checkout complete” and “User logged in” you can then ask your analytics platform what percentage of users did both within a certain time frame (a day for instance).

    Imagine the answer comes out to be 10%, this tells you useful information about the behaviour of your users (bear in mind this funnel is not order sensitive, i.e., it does not matter in which order the events happen in login -> cart -> pay or cart -> login -> pay). Thus, you can start to optimise parts of your app and use this value to determine whether or not you are converting more of your users to make a purchase or otherwise engage more deeply.
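    As an illustration, the overlap described above can be computed by hand from a raw event log. The record shape and event names below are hypothetical, a minimal sketch rather than any particular platform’s API:

```javascript
var DAY = 24 * 60 * 60 * 1000;

// Fraction of users who logged both stepA and stepB within windowMs of each
// other, in either order (this funnel is not order sensitive).
function funnelConversion(events, stepA, stepB, windowMs) {
  var byUser = {};
  events.forEach(function (e) {
    var u = byUser[e.user] || (byUser[e.user] = { a: [], b: [] });
    if (e.name === stepA) u.a.push(e.time);
    if (e.name === stepB) u.b.push(e.time);
  });
  var users = Object.keys(byUser);
  var converted = users.filter(function (id) {
    var u = byUser[id];
    return u.a.some(function (ta) {
      return u.b.some(function (tb) {
        return Math.abs(ta - tb) <= windowMs;
      });
    });
  });
  return users.length ? converted.length / users.length : 0;
}

// u1 logs in and checks out two hours apart; u2 only logs in.
var events = [
  { user: "u1", name: "User logged in",    time: 0 },
  { user: "u1", name: "Checkout complete", time: 2 * 60 * 60 * 1000 },
  { user: "u2", name: "User logged in",    time: 0 }
];
console.log(funnelConversion(events, "User logged in", "Checkout complete", DAY)); // 0.5
```

    In practice your analytics platform does this aggregation for you; the point of the sketch is just to make the “both events within a time frame” definition concrete.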

    Deciding what to measure

    Depending on your business, different stats will have different levels of importance. Here are some common stats of interest to developers of apps or online services:

    Number of sessions
    The total number of sessions on your product (the user opening your product, using it, then closing it = 1 session)
    Session length
    How long each session lasts (can be mode, mean, median)
    Retention
    How many people come back to your product having used it before (there are a variety of metrics such as rolling retention, 30 day retention etc)
    MAU
    Monthly active users: how many users use the app at least once in a month
    DAU
    Daily active users: how many users use the app at least once in a day
    ARPU
    Average revenue per user: how much money you make per person
    ATV
    Average transaction value: how much money you make per sale
    CAC
    Customer acquisition cost: how much it costs to get one extra user (normally specified by the channel for getting them)
    CLV
    Customer lifetime value: total profit made from a user (usually projected)
    Churn
    The number of people who leave your product in a given time (usually given as a percentage of total user base)
    Cycle time
    The time it takes for one user to refer another

    Choosing an analytics tool or platform

    There are plenty of analytics providers; listed below are some of the best known and most widely used:

    Google Analytics

    Website
    Developer documentation

    Quick event log example:

    ga('send', 'event', 'button', 'click');

    Pros:

    • Free
    • Easy to set up

    Cons:

    • Steep learning curve for using the platform
    • Specialist training can be required to get the most out of the platform

    Single page apps:

    If you are making a single page app/website, you need to keep Google informed that the user is still on your page and hasn’t bounced (gone to your page/app and left without doing anything):

    ga('set' , 'page', location.pathname + location.search + location.hash);
    ga('send', 'pageview');

    Use the above code every time a user navigates to a new section of your app/website to let Google know the user is still browsing your site/app.

    Flurry

    Website
    Developer documentation

    Quick event log example:

    FlurryAgent.logEvent("Button clicked");
    FlurryAgent.logEvent("Button clicked",{more : 'data'});

    Pros:

    • Free
    • Easy to set up

    Cons:

    • Data normally 24 hours behind real time
    • Takes ages to load the data

    Mixpanel

    Website
    Developer documentation

    Quick event log example:

    mixpanel.track("Button clicked");
    mixpanel.track("Button clicked",{more : 'data'});

    Pros:

    • Free trial
    • Easy to set up
    • Real-time data

    Cons:

    • Gets expensive after the free trial
    • If you are tracking a lot of points, the interface can get cluttered

    Speeding up requests

    When you are loading an external JS file, you want to do it asynchronously if possible to speed up the page load.

    <script type="text/javascript" src="..." async></script>

    The async attribute will cause the JavaScript to load asynchronously, but it assumes the user has a browser that supports this HTML5 attribute.

    //jQuery example
    $.getScript('https://cdn.flurry.com/js/flurry.js', 
    function(){
       ...
    });

    This code will load the JavaScript asynchronously with greater browser support.

    The next problem is that you could try to log an event before the analytics framework has loaded, so you need to check that the framework’s variable exists first:

    if(typeof FlurryAgent != "undefined"){
       ...
    }

    This will prevent errors and will also allow you to easily disable analytics during testing. (You can just stop the script from being loaded – and the variable will never be defined.)

    The problem here is that you might be missing analytics whilst waiting for the script to load. Instead, you can make a queue to store the events and then post them all when the script loads:

    var queue = [];
     
    if(typeof FlurryAgent != "undefined"){
       ...
    }else{
       queue.push(["data",{more : data}]);
    }
     
    ...
     
    //jQuery example
    $.getScript('https://cdn.flurry.com/js/flurry.js', 
    function(){
       ...
     
       for(var i = 0;i < queue.length;i++)
       {
          FlurryAgent.logEvent(queue[i][0],queue[i][1]);
       }
       queue = [];
    });

    Analytics for your Firefox App

    You can use any of the above providers with Firefox OS, but remember that the scripts you paste into your code are generally protocol-agnostic: they start with //myjs.com/analytics.js, and you need to prepend either http: or https: explicitly, e.g. https://myjs.com/analytics.js. (This is required only if you are making a packaged app.)

    Let us know how it goes.

  3. Optimising SVG images

    SVG is a vector image format based on XML. It has great advantages, most notably it is lightweight. Since SVG is a text format, it can be viewed and modified using a simple text editor, and applying GZIP compression produces excellent results.

    It’s critical for a website to provide assets that are as lightweight as possible, especially on mobile where bandwidth can be very limited. You want to optimise your SVG files to have your app load and display as quickly as possible.

    This article will show how to use dedicated tools to optimise SVG images. You will also learn how the markup works so you can go the extra mile to produce the lightest possible images.

    Introducing svgo

    Optimising SVG is very similar to minifying CSS or other text-based formats such as JavaScript or HTML. It is mainly about removing useless whitespace and redundant characters.

    The tool I recommend to reduce the size of SVG images is svgo. It is written for node.js. To install it, just do:

    $ npm install -g svgo

    In its basic form, you’ll use a command line like this:

    $ svgo --input img/graph.svg --output img/optimised-graph.svg

    Please make sure to specify an --output parameter if you want to keep the original image. Otherwise svgo will replace it with the optimised version.

    svgo will apply several changes to the original file—stripping out useless comments, tags, and attributes, reducing the precision of numbers in path definitions, or sorting attributes for better GZIP compression.

    This works with no surprises for simple images. However, in more complex cases, the image manipulation can result in a garbled file.

    svgo plugins

    svgo is very modular thanks to a plugin-based architecture.

    When optimising complex images, I’ve noticed that the main issues are caused by two svgo plugins:

    • convertPathData
    • mergePaths

    Deactivating these will ensure you get a correct result in most cases:

    $ svgo --disable=convertPathData --disable=mergePaths -i img/a.svg

    convertPathData will convert the path data using relative and shorthand notations. Unfortunately, some environments won’t fully recognise this syntax and you’ll get something like:

    Screenshot of Gnome Image Viewer displaying an original SVG image (left) and a version optimised via svgo (right)

    Please note that the optimised image will display correctly in all browsers. So you may still want to use this plugin.

    The other plugin that can cause you trouble—mergePaths—will merge together shapes of the same style to reduce the number of <path> tags in the source. However, this might create issues if two paths overlap.

    Merge paths issue

    In the image on the right, please note the rendering differences around the character’s neck and hand, also note the Twitter logo. The outline view shows 3 overlapping paths that make up the character’s head.

    My suggestion is to first try svgo with all plugins activated, then if anything is wrong, deactivate the two mentioned above.

    If the result is still very different from your original image, then you’ll have to deactivate the plugins one by one to detect the one which causes the issue. Here is a list of svgo plugins.

    Optimising even further

    svgo is a great tool, but in some specific cases, you’ll want to compress your SVG images even further. To do so, you have to dig into the file format and do some manual optimisations.

    In these cases, my favourite tool is Inkscape: it is free, open source and available on most platforms.

    If you want to use the mergePaths plugin of svgo, you must combine overlapping paths yourself. Here’s how to do it:

    Open your image in Inkscape and identify the paths with the same style (fill and stroke). Select them all (hold Shift for multiple selection). Click on the Path menu and select Union. You’re done: all three paths have been merged into a single one.

    Merge paths technique

    The 3 different paths that create the character’s head are merged, as shown by the outline view on the right.



    Repeat this operation for all paths of the same style that are overlapping and then you’re ready to use svgo again, keeping the mergePaths plugin.

    There are all sorts of different optimisations you can apply manually:

    • Convert strokes to paths so they can be merged with paths of similar style.
    • Cut paths manually to avoid using clip-path.
    • Exclude an underlying path from an overlapping path and merge it with a similar path to avoid layering issues. (In the image above, see the character’s hair: the side hair path is under his head, but the top hair is above it, so you can’t merge the 3 hair paths as is.)

    Final considerations

    These manual optimisations can take a lot of time for meagre results, so think twice before starting!

    A good rule of thumb when optimising SVG images is to make sure the final file has only one path per style (same fill and stroke style) and uses no <g> tags to group paths into objects.

    In Firefox OS, we use an icon font, gaia-icons, generated from SVG glyphs. I noticed that optimising them resulted in a significantly lighter font file, with no visual differences.

    Whether you use SVG for embedding images on an app or to create a font file, always remember to optimise. It will make your users happier!

  4. This API is so Fetching!

    For more than a decade the Web has used XMLHttpRequest (XHR) to achieve asynchronous requests in JavaScript. While very useful, XHR is not a very nice API. It suffers from lack of separation of concerns. The input, output and state are all managed by interacting with one object, and state is tracked using events. Also, the event-based model doesn’t play well with JavaScript’s recent focus on Promise- and generator-based asynchronous programming.

    The Fetch API intends to fix most of these problems. It does this by introducing the same primitives to JS that are used in the HTTP protocol. In addition, it introduces a utility function fetch() that succinctly captures the intention of retrieving a resource from the network.

    The Fetch specification, which defines the API, nails down the semantics of a user agent fetching a resource. This, combined with ServiceWorkers, is an attempt to:

    1. Improve the offline experience.
    2. Expose the building blocks of the Web to the platform as part of the extensible web movement.

    As of this writing, the Fetch API is available in Firefox 39 (currently Nightly) and Chrome 42 (currently dev). GitHub has a Fetch polyfill.

    Feature detection

    Fetch API support can be detected by checking for Headers, Request, Response or fetch on the window or worker scope.
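
    The detection can be wrapped in a small helper. This is just a sketch; the function name is hypothetical:

```javascript
// Returns true if the given global scope (a window, a worker's self, etc.)
// exposes the whole Fetch API surface.
function supportsFetch(scope) {
  return ["fetch", "Headers", "Request", "Response"].every(function (name) {
    return name in scope;
  });
}

// Works in both window and worker scopes:
var scope = typeof self !== "undefined" ? self : globalThis;
if (supportsFetch(scope)) {
  // safe to use fetch() directly
} else {
  // fall back to XHR or load a polyfill
}
```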

    Simple fetching

    The most useful, high-level part of the Fetch API is the fetch() function. In its simplest form it takes a URL and returns a promise that resolves to the response. The response is captured as a Response object.

    fetch("/data.json").then(function(res) {
      // res instanceof Response == true.
      if (res.ok) {
        res.json().then(function(data) {
          console.log(data.entries);
        });
      } else {
        console.log("Looks like the response wasn't perfect, got status", res.status);
      }
    }, function(e) {
      console.log("Fetch failed!", e);
    });

    Submitting some parameters, it would look like this:

    fetch("http://www.example.org/submit.php", {
      method: "POST",
      headers: {
        "Content-Type": "application/x-www-form-urlencoded"
      },
      body: "firstName=Nikhil&favColor=blue&password=easytoguess"
    }).then(function(res) {
      if (res.ok) {
        alert("Perfect! Your settings are saved.");
      } else if (res.status == 401) {
        alert("Oops! You are not authorized.");
      }
    }, function(e) {
      alert("Error submitting form!");
    });

    The fetch() function’s arguments are the same as those passed to the
    Request() constructor, so you may directly pass arbitrarily complex requests to fetch() as discussed below.

    Headers

    Fetch introduces 3 interfaces. These are Headers, Request and
    Response. They map directly to the underlying HTTP concepts, but have
    certain visibility filters in place for privacy and security reasons, such as
    supporting CORS rules and ensuring cookies aren’t readable by third parties.

    The Headers interface is a simple multi-map of names to values:

    var content = "Hello World";
    var reqHeaders = new Headers();
    reqHeaders.append("Content-Type", "text/plain");
    reqHeaders.append("Content-Length", content.length.toString());
    reqHeaders.append("X-Custom-Header", "ProcessThisImmediately");

    The same can be achieved by passing an array of arrays or a JS object literal
    to the constructor:

    reqHeaders = new Headers({
      "Content-Type": "text/plain",
      "Content-Length": content.length.toString(),
      "X-Custom-Header": "ProcessThisImmediately",
    });

    The contents can be queried and retrieved:

    console.log(reqHeaders.has("Content-Type")); // true
    console.log(reqHeaders.has("Set-Cookie")); // false
    reqHeaders.set("Content-Type", "text/html");
    reqHeaders.append("X-Custom-Header", "AnotherValue");
     
    console.log(reqHeaders.get("Content-Length")); // 11
    console.log(reqHeaders.getAll("X-Custom-Header")); // ["ProcessThisImmediately", "AnotherValue"]
     
    reqHeaders.delete("X-Custom-Header");
    console.log(reqHeaders.getAll("X-Custom-Header")); // []

    Some of these operations are only useful in ServiceWorkers, but they provide
    a much nicer API to Headers.

    Since Headers can be sent in requests, or received in responses, and have various limitations about what information can and should be mutable, Headers objects have a guard property. This is not exposed to the Web, but it affects which mutation operations are allowed on the Headers object.
    Possible values are:

    • “none”: default.
    • “request”: guard for a Headers object obtained from a Request (Request.headers).
    • “request-no-cors”: guard for a Headers object obtained from a Request created
      with mode “no-cors”.
    • “response”: naturally, for Headers obtained from Response (Response.headers).
    • “immutable”: Mostly used for ServiceWorkers, renders a Headers object
      read-only.

    The details of how each guard affects the behaviors of the Headers object are
    in the specification. For example, you may not append or set a “request” guarded Headers’ “Content-Length” header. Similarly, inserting “Set-Cookie” into a Response header is not allowed so that ServiceWorkers may not set cookies via synthesized Responses.

    All of the Headers methods throw TypeError if name is not a valid HTTP Header name. The mutation operations will throw TypeError if there is an immutable guard. Otherwise they fail silently. For example:

    var res = Response.error();
    try {
      res.headers.set("Origin", "http://mybank.com");
    } catch(e) {
      console.log("Cannot pretend to be a bank!");
    }
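
    The invalid-name case can be seen directly; a space, for instance, is not a legal character in a header name:

```javascript
var h = new Headers();
var threw = false;
try {
  h.append("Invalid Name", "value"); // spaces are not allowed in header names
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```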

    Request

    The Request interface defines a request to fetch a resource over HTTP. URL, method and headers are expected, but the Request also allows specifying a body, a request mode, credentials and cache hints.

    The simplest Request is of course, just a URL, as you may do to GET a resource.

    var req = new Request("/index.html");
    console.log(req.method); // "GET"
    console.log(req.url); // "http://example.com/index.html"

    You may also pass a Request to the Request() constructor to create a copy.
    (This is not the same as calling the clone() method, which is covered in
    the “Streams and cloning” section.)

    var copy = new Request(req);
    console.log(copy.method); // "GET"
    console.log(copy.url); // "http://example.com/index.html"

    Again, this form is probably only useful in ServiceWorkers.

    The non-URL attributes of the Request can only be set by passing initial
    values as a second argument to the constructor. This argument is a dictionary.

    var uploadReq = new Request("/uploadImage", {
      method: "POST",
      headers: {
        "Content-Type": "image/png",
      },
      body: "image data"
    });

    The Request’s mode is used to determine if cross-origin requests lead to valid responses, and which properties on the response are readable. Legal mode values are "same-origin", "no-cors" (default) and "cors".

    The "same-origin" mode is simple, if a request is made to another origin with this mode set, the result is simply an error. You could use this to ensure that
    a request is always being made to your origin.

    var arbitraryUrl = document.getElementById("url-input").value;
    fetch(arbitraryUrl, { mode: "same-origin" }).then(function(res) {
      console.log("Response succeeded?", res.ok);
    }, function(e) {
      console.log("Please enter a same-origin URL!");
    });

    The "no-cors" mode captures what the web platform does by default for scripts you import from CDNs, images hosted on other domains, and so on. First, it prevents the method from being anything other than “HEAD”, “GET” or “POST”. Second, if any ServiceWorkers intercept these requests, they may not add or override any headers except for a small whitelist of simple headers. Third, JavaScript may not access any properties of the resulting Response. This ensures that ServiceWorkers do not affect the semantics of the Web and prevents security and privacy issues that could arise from leaking data across domains.

    "cors" mode is what you’ll usually use to make known cross-origin requests to access various APIs offered by other vendors. These are expected to adhere to
    the CORS protocol. Only a limited set of headers is exposed in the Response, but the body is readable. For example, you could get a list of Flickr’s most interesting photos today like this:

    var u = new URLSearchParams();
    u.append('method', 'flickr.interestingness.getList');
    u.append('api_key', '<insert api key here>');
    u.append('format', 'json');
    u.append('nojsoncallback', '1');
     
    var apiCall = fetch('https://api.flickr.com/services/rest?' + u);
     
    apiCall.then(function(response) {
      return response.json().then(function(json) {
        // photo is a list of photos.
        return json.photos.photo;
      });
    }).then(function(photos) {
      photos.forEach(function(photo) {
        console.log(photo.title);
      });
    });

    You may not read out the “Date” header since Flickr does not allow it via
    Access-Control-Expose-Headers.

    response.headers.get("Date"); // null

    The credentials enumeration determines if cookies for the other domain are
    sent to cross-origin requests. This is similar to XHR’s withCredentials
    flag, but tri-valued as "omit" (default), "same-origin" and "include".
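
    The value can be set like any other Request attribute, via the init dictionary (the URL here is just an example):

```javascript
var authedReq = new Request("https://api.example.org/profile", {
  credentials: "include" // send cookies even on a cross-origin request
});
console.log(authedReq.credentials); // "include"
```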

    The Request object will also give the ability to offer caching hints to the user-agent. This is currently undergoing some security review. Firefox exposes the attribute, but it has no effect.

    Requests have two read-only attributes that are relevant to ServiceWorkers
    intercepting them. There is the string referrer, which is set by the UA to be
    the referrer of the Request. This may be an empty string. The other is
    context which is a rather large enumeration defining what sort of resource is being fetched. This could be “image” if the request is from an <img> tag in the controlled document, “worker” if it is an attempt to load a worker script, and so on. When used with the fetch() function, it is “fetch”.

    Response

    Response instances are returned by calls to fetch(). They can also be created by JS, but this is only useful in ServiceWorkers.

    We have already seen some attributes of Response when we looked at fetch(). The most obvious candidates are status, an integer (default value 200) and statusText (default value “OK”), which correspond to the HTTP status code and reason. The ok attribute is just a shorthand for checking that status is in the range 200-299 inclusive.

    headers is the Response’s Headers object, with guard “response”. The url attribute reflects the URL of the corresponding request.
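
    These attributes are easy to see on a synthesized Response:

```javascript
var notFound = new Response("No such page", {
  status: 404,
  statusText: "Not Found"
});
console.log(notFound.status);     // 404
console.log(notFound.statusText); // "Not Found"
console.log(notFound.ok);         // false (404 is outside 200-299)
```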

    Response also has a type, which is “basic”, “cors”, “default”, “error” or
    “opaque”.

    • "basic": normal, same origin response, with all headers exposed except
      “Set-Cookie” and “Set-Cookie2”.
    • "cors": response was received from a valid cross-origin request. Certain headers and the body may be accessed.
    • "error": network error. No useful information describing the error is available. The Response’s status is 0, headers are empty and immutable. This is the type for a Response obtained from Response.error().
    • "opaque": response for a “no-cors” request to a cross-origin resource. Severely
      restricted.

    The “error” type results in the fetch() Promise rejecting with TypeError.

    There are certain attributes that are useful only in a ServiceWorker scope. The
    idiomatic way to return a Response to an intercepted request in ServiceWorkers is:

    addEventListener('fetch', function(event) {
      event.respondWith(new Response("Response body", {
        headers: { "Content-Type" : "text/plain" }
      }));
    });

    As you can see, Response has a two argument constructor, where both arguments are optional. The first argument is a body initializer, and the second is a dictionary to set the status, statusText and headers.

    The static method Response.error() simply returns an error response. Similarly, Response.redirect(url, status) returns a Response resulting in
    a redirect to url.
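
    A quick look at what Response.redirect() produces (the URL is just an example):

```javascript
var redir = Response.redirect("https://example.com/new", 301);
console.log(redir.status);                  // 301
console.log(redir.headers.get("Location")); // "https://example.com/new"
```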

    Dealing with bodies

    Both Requests and Responses may contain body data. We’ve been glossing over it because of the various data types body may contain, but we will cover it in detail now.

    A body is an instance of any of the following types:

    • ArrayBuffer
    • ArrayBufferView (Uint8Array and friends)
    • Blob/File
    • string
    • URLSearchParams
    • FormData

    In addition, Request and Response both offer the following methods to extract their body. These all return a Promise that is eventually resolved with the actual content.

    • arrayBuffer()
    • blob()
    • json()
    • text()
    • formData()

    This is a significant improvement over XHR in terms of ease of use of non-text data!
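
    Because a Response can also be constructed directly from a string, the extraction methods are easy to try without a network round trip:

```javascript
var jsonRes = new Response('{"answer": 42}');
jsonRes.json().then(function (data) {
  console.log(data.answer); // 42
});
```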

    Request bodies can be set by passing body parameters:

    var form = new FormData(document.getElementById('login-form'));
    fetch("/login", {
      method: "POST",
      body: form
    })

    Responses take the first argument as the body.

    var res = new Response(new File(["chunk", "chunk"], "archive.zip",
                           { type: "application/zip" }));

    Both Request and Response (and by extension the fetch() function), will try to intelligently determine the content type. Request will also automatically set a “Content-Type” header if none is set in the dictionary.
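
    For example, a Blob body carries its own type, and that type is reflected in the Content-Type header, as the spec describes:

```javascript
var blobRes = new Response(new Blob(["hello"], { type: "text/plain" }));
console.log(blobRes.headers.get("Content-Type")); // "text/plain"
```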

    Streams and cloning

    It is important to realise that Request and Response bodies can only be read once! Both interfaces have a boolean attribute bodyUsed to determine if it is safe to read or not.

    var res = new Response("one time use");
    console.log(res.bodyUsed); // false
    res.text().then(function(v) {
      console.log(res.bodyUsed); // true
    });
    console.log(res.bodyUsed); // true
     
    res.text().catch(function(e) {
      console.log("Tried to read already consumed Response");
    });

    This decision allows easing the transition to an eventual stream-based Fetch API. The intention is to let applications consume data as it arrives, allowing for JavaScript to deal with larger files like videos, and perform things like compression and editing on the fly.

    Often, you’ll want access to the body multiple times. For example, you can use the upcoming Cache API to store Requests and Responses for offline use, and Cache requires bodies to be available for reading.

    So how do you read out the body multiple times within such constraints? The API provides a clone() method on the two interfaces. This will return a clone of the object, with a ‘new’ body. clone() MUST be called before the body of the corresponding object has been used. That is, clone() first, read later.

    addEventListener('fetch', function(evt) {
      var sheep = new Response("Dolly");
      console.log(sheep.bodyUsed); // false
      var clone = sheep.clone();
      console.log(clone.bodyUsed); // false
     
      clone.text();
      console.log(sheep.bodyUsed); // false
      console.log(clone.bodyUsed); // true
     
      evt.respondWith(cache.add(sheep.clone()).then(function(e) {
        return sheep;
      }));
    });

    Future improvements

    Along with the transition to streams, Fetch will eventually have the ability to abort running fetch()es and some way to report the progress of a fetch. These are provided by XHR, but are a little tricky to fit in the Promise-based nature of the Fetch API.

    You can contribute to the evolution of this API by participating in discussions on the WHATWG mailing list and in the issues in the Fetch and ServiceWorker specifications.

    For a better web!

    The author would like to thank Andrea Marchesini, Anne van Kesteren and Ben
    Kelly for helping with the specification and implementation.

  5. Ruby support in Firefox Developer Edition 38

    It was a long-time request from East Asian users, especially Japanese users, to have ruby support in the browser.

    Formerly, because of the lack of native ruby support in Firefox, users had to install add-ons like HTML Ruby to make ruby work. However, in Firefox Developer Edition 38, CSS Ruby has been enabled by default, which also brings the support of HTML5 ruby tags.

    Introduction

    What is ruby? In short, ruby is extra text, usually small, attached to the main text to indicate the pronunciation or meaning of the corresponding characters. This kind of annotation is widely used in Japanese publications. It is also common in Chinese for children’s books, educational publications, and dictionaries.

    Ruby Annotation

    Basic Usage

    Basically, the ruby support consists of four main tags: <ruby>, <rb>, <rt>, and <rp>. <ruby> is the tag that wraps the whole ruby structure, <rb> is used to mark the text in the normal line, <rt> is for the annotation, and <rp> is a tag which is hidden by default. With the four tags, the result above can be achieved from the following code:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
    </ruby>

    Since <rb> and <rt> are automatically closed by the next <rb> or <rt> (or by the end of the <ruby>), we don't have to close those tags manually.

    As shown in the image, the duplicate parts as well as <rp>s are hidden automatically. But why should we add content which is hidden by default?

    The answer is: it is semantically more natural, and it helps conversion to the inline form, which is a more general form accepted by more software. For example, this allows the page to degrade gracefully in a browser with no ruby support. It also enables the user agent to generate well-formed plain text when you copy text with ruby (though this feature hasn’t yet landed in Firefox).

    In addition, the extra content makes it possible to render the annotation inline without changing the document. You would only need to add a rule to your stylesheet:

    ruby, rb, rt, rp {
      display: inline;
      font-size: inherit;
    }

    Actually, if you don’t have those requirements, only <ruby> and <rt> are necessary. For the simplest cases, e.g. a single ideographic character, you can use code like:

    <ruby>咲<rt>Saki</rt></ruby>

    Advanced Support

    Aside from the basic usage of ruby, Firefox now provides support for more advanced cases.

    By default, if the width of an annotation does not match its base text, the shorter text will be justified as shown in the example above. However, this behavior can be controlled via the ruby-align property. Aside from the default value (space-around), it can also make the content align to both sides (space-between), centered (center), or aligned to the start side (start).

    Multiple levels of annotation are also supported via the <rtc> tag, which acts as a container of <rt>s. Each <rtc> represents one level of annotation; if you leave out the <rtc>, the browser will wrap consecutive <rt>s in a single anonymous <rtc>, forming one level.

    For example, we can extend the example above to:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc><rt>Toaru<rt>Kagaku<rt>no<rt>Rērugan</rt></rtc><rp></rp>
    </ruby>

    If you do not put any <rt> inside an <rtc>, the annotation becomes a single span across the whole base:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>
    </ruby>

    You can use ruby-position to place a given level of annotation on the side you want. In the examples above, if you want to put the second level under the main line, apply ruby-position: under; to the <rtc> tag. Currently, only under and the default value over are supported.

    (Note: The CSS Working Group is considering a change to the default value of ruby-position, so that annotations become double-sided by default. This change is likely to happen in a future version of Firefox.)

    In the end, an advanced example of ruby combining everything introduced above:

    Advanced Example for Ruby

    rtc:lang(en) {
      ruby-position: under;
      ruby-align: center;
      font-size: 75%;
    }
    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>
    </ruby>

    Got questions, comments, feedback on the implementation? Don’t hesitate to share your thoughts or issues here or via bugzilla.

  6. Announcing the MDN Fellowship Program

    For nearly a decade, the Mozilla Developer Network (MDN) has been a vital source of technical information for millions of web and mobile developers. And while each month hundreds of developers actively contribute to MDN, we know there are many more with deep expertise in the Web who aren’t participating—yet. Certainly MDN and the Web would benefit from their knowledge and skill, and we’re piloting a program to provide benefits for them as well.

    Mozillians at Summit 2013.

    Mozillians at Summit 2013. Photo by Tristan Nitot.


    What it is

    The MDN Fellowship pilot is a seven-week part-time (5-10 hours per week) education and leadership program pairing advanced web and mobile developers with engineering and educational experts at Mozilla to work on significant, influential web projects. Fellows will contribute by developing apps, API descriptions, and curriculum on MDN for their project. They will also receive hands-on coaching from project mentors, as well as others at Mozilla who will provide training and guidance on curriculum development, so that their work can be taught to others.

    An overview of projects

    Here’s a look at some of the technical topics and the associated project tasks we’ve identified for our first group of MDN fellows.

    • ServiceWorkers essentially act as proxy servers that sit between web applications, the browser, and (when available) the network. They are key to the success of web apps, enabling the creation of effective offline experiences and allowing access to push notifications and background sync APIs.
      What you’ll work on: You’ll write a demonstration web app (new or existing) to showcase Service Worker functionality and provide detailed API descriptions.
    • WebGL is the latest incarnation of the OpenGL family of real-time rendering immediate mode graphics APIs. This year WebGL is getting some cool new features with the publication of standardization efforts around the WebGL 2.0 spec.
      What you’ll do: Develop a curriculum on MDN for teaching the WebGL APIs to developers new to graphics programming.
    • Web app performance. App performance is impacted by many factors, including serving content, rendering, and interactivity. Finding and addressing performance bottlenecks depends on tooling around browser networking and rendering, but also (and often more importantly) on user perception.
      What you’ll do: Develop a curriculum on MDN for teaching developers to master performance tooling and to develop web apps with performance as a feature.
    • TestTheWebForward. Mozilla participates in an important W3C initiative, TestTheWebForward, a community-driven effort for open web platform testing.
      What you’ll do: Review various existing technical specifications to identify gaps between the documentation and the current implementations. Refine existing tests to adapt them to this cross-browser test harness.
    • MDN curriculum development. MDN serves as a trusted resource for millions of developers. In 2015, MDN will expand the scope of this content by developing Content Kits: key learning materials including code samples, video screencasts and demos, and more.

      What you’ll do: Act as lead curator for a technical curriculum addressing a key web technology, developing code samples, videos, interactive exercises, and other essential educational components. You may propose your own subject area (examples: virtual reality on the web, network security, CSS, etc.), or we will work with you to find a match based on your subject-area knowledge and Mozilla priorities.

    “I’m looking forward to working with a Fellow to make low-level real-time graphics more approachable,” says WebGL project mentor Nick Desaulniers. More information, including project mentors and the type of skills & experience required for each project, can be found on the MDN Fellowship page.

    MDN writers

    Photo by Tristan Nitot.

    How to apply

    If any of these projects sounds interesting, check out the website and apply by April 1. We’ll announce the fellows in May and start the program in June by bringing fellows and mentors together to get everyone familiarized with their projects and one another. Then, for 6 weeks, you’ll work directly with your mentor on your project from your home base. You’ll also receive ongoing feedback and coaching to set you up for success on teaching what you’re building to larger groups of developers.

    Here’s a timeline:

    • Now – April 1: Apply!
    • April: Finalist candidates will be interviewed.
    • May: Fellows are announced.
    • June (dates & location TBA): Orientation at a Mozilla space.
    • June 29 – August 3: Work on your projects + regular team calls to receive coaching on your work.
    • August 11-12: Graduation

    This program is for you if…

    You code, document, and ship with confidence, and now you’d like to start sharing your skills with a broader community. Maybe you want a way out of the walled garden and you’d like to contribute to Mozilla’s mission of keeping the web open for all. Perhaps you want to be effective at sharing your expertise with a wider audience.

    Consider this program if you want an opportunity to:

    • Amplify the impact of your technical expertise by contributing to influential, significant projects at Mozilla.
    • Stretch your technical expertise by working closely with Mozilla technical mentors.
    • Integrate educational best practices into your work with Mozilla teaching mentors.

    Check out the MDN Fellowship website, apply by April 1, and encourage others as well!

    Mozillians at Mozcamp Warsaw. Photo by Tristan Nitot

  7. asm.js Speedups Everywhere

    asm.js is an easy-to-optimize subset of JavaScript. It runs in all browsers without plugins, and is a good target for porting C/C++ codebases such as game engines – which have in fact been the biggest adopters of this approach, for example Unity 3D and Unreal Engine.
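    To give a feel for the subset, here is a tiny hand-written module in the asm.js style (an illustrative sketch of my own, not production code): parameters and results carry `| 0` coercions so the engine can prove everything is a 32-bit integer and compile ahead of time.

```javascript
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  var imul = stdlib.Math.imul; // 32-bit integer multiply

  function add(a, b) {
    a = a | 0;            // parameter type annotation: int32
    b = b | 0;
    return (a + b) | 0;   // result is also coerced to int32
  }

  function mul(a, b) {
    a = a | 0;
    b = b | 0;
    return imul(a, b) | 0;
  }

  return { add: add, mul: mul };
}

// The module links against the global object and a heap buffer. Because
// asm.js is a subset of JavaScript, this runs in any engine, with or
// without ahead-of-time compilation.
var m = MiniModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(m.add(2, 3)); // 5
console.log(m.mul(4, 5)); // 20
```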

    Obviously, developers porting games using asm.js would like them to run well across all browsers. However, each browser has different performance characteristics, because each has a different JavaScript engine, different graphics implementation, and so forth. In this post, we’ll focus on JavaScript execution speed and see the significant progress towards fast asm.js execution that has been happening across the board. Let’s go over each of the four major browsers now.

    Chrome

    Already in 2013, Google released Octane 2.0, a new version of their primary JavaScript benchmark suite, which contained a new asm.js benchmark, zlib. Benchmarks define what browsers optimize: things that matter are included in benchmarks, and browsers then compete to achieve the best scores. Therefore, adding an asm.js benchmark to Octane clearly signaled Google’s belief that asm.js content is important to optimize for.

    A further major development happened more recently, when Google landed TurboFan, a new work-in-progress optimizing compiler for Chrome’s JavaScript engine, v8. TurboFan has a “sea of nodes” architecture (which is new in the JavaScript space, and has been used very successfully elsewhere, for example in the Java server virtual machine), and aims to reach even higher speeds than CrankShaft, the first optimizing compiler for v8.

    While TurboFan is not yet ready to be enabled on all JavaScript content, as of Chrome 41 it is enabled on asm.js. Getting the benefits of TurboFan early on asm.js shows the importance of optimizing asm.js for the Chrome team. And the benefits can be quite substantial: For example, TurboFan speeds up Emscripten‘s zlib benchmark by 13%, and fasta by 24%.

    Safari

    During the last year, Safari’s JavaScript Engine, JavaScriptCore, introduced a new JIT (Just In Time compiler) called FTL. FTL stands for “Fourth Tier LLVM,” as it adds a fourth level of optimization above the three previously-existing ones, and it is based on LLVM, a powerful open source compiler framework. This is exciting because LLVM is a top-tier general-purpose compiler, with many years of optimizations put into it, and Safari gets to reuse all those efforts. As shown in the blogposts linked to earlier, the speedups that FTL provides can be very substantial.

    Another interesting development from Apple this year was the introduction of a new JavaScript benchmark, JetStream. JetStream contains several asm.js benchmarks, an indication that Apple believes asm.js content is important to optimize for, just as when Google added an asm.js benchmark to Octane.

    Internet Explorer

    The JavaScript engine inside Internet Explorer is named Chakra. Last year, the Chakra team blogged about a suite of optimizations coming to IE in Windows 10 and pointed to significant improvements in the scores on asm.js workloads in Octane and JetStream. This is yet another example of how having asm.js workloads in common benchmarks drives measurement and optimization.

    The big news, however, is the recent announcement by the Chakra team that they are working on adding specific asm.js optimizations, to arrive in Windows 10 together with the other optimizations mentioned earlier. These optimizations haven’t made it to the Preview channel yet, so we can’t measure and report on them here. However, we can speculate on the improvements based on the initial impact of landing asm.js optimizations in Firefox. As shown in this benchmark comparisons slide containing measurements from right after the landing, asm.js optimizations immediately brought Firefox to around 2x slower than native performance (from 5-12x native before). Why should these wins translate to Chakra? Because, as explained in our previous post, the asm.js spec provides a predictable way to validate asm.js code and generate high-quality code based on the results.

    So, here’s looking forward to good asm.js performance in Windows 10!

    Firefox

    As we mentioned before, the initial landing of asm.js optimizations in Firefox generally put Firefox within 2x of native in terms of raw throughput. By the end of 2013, we were able to report that the gap had shrunk to around 1.5x native – which is close to the amount of variability that different native compilers have between each other anyhow, so comparisons to “native speed” start to be less meaningful.

    At a high level, this progress comes from two kinds of improvements: compiler backend optimizations and new JavaScript features. In the area of compiler backend optimizations, there has been a stream of tiny wins (specific to particular code patterns or hardware), making it difficult to point to any one thing.

    Along with backend optimization work, two new JavaScript features have been incorporated into asm.js which unlock new performance capabilities in the hardware. The first feature, Math.fround, may look simple but it enables the compiler backend to generate single-precision floating-point arithmetic when used carefully in JS. As described in this post, the switch can result in anywhere from a 5% – 60% speedup, depending on the workload. The second feature is much bigger: SIMD.js. This is still a stage 1 proposal for ES7 so the new SIMD operations and the associated asm.js extensions are only available in Firefox Nightly. Initial results are promising though.
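    To make the Math.fround point concrete, here is a small sketch of my own: wrapping values in fround tells the engine a computation stays within float32 precision.

```javascript
// Math.fround rounds a double to the nearest single-precision (float32) value.
var d = 0.1;                        // not exactly representable
var f = Math.fround(d);             // nearest float32 neighbour of 0.1
console.log(d === f);               // false: rounding to float32 changed it
console.log(Math.fround(f) === f);  // true: f is already single-precision

// When every intermediate result is wrapped in fround(), a compiler may use
// single-precision hardware arithmetic throughout:
function hypotf(x, y) {
  x = Math.fround(x);
  y = Math.fround(y);
  return Math.fround(Math.sqrt(Math.fround(x * x + y * y)));
}
console.log(hypotf(3, 4)); // 5
```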

    Separate from all these throughput optimizations, there have also been a set of load time optimizations in Firefox: off-main-thread and parallel compilation of asm.js code as well as caching of the compiled machine code. As described in this post, these optimizations significantly improve the experience of starting a Unity- or Epic-sized asm.js application. Existing asm.js workloads in the benchmarks mentioned above do not test this aspect of asm.js performance so we put together a new benchmark suite named Massive that does. Looking at Firefox’s Massive score over time, we can see the load-time optimizations contributing to a more than 6x improvement (more details in the Hacks post introducing the Massive benchmark).

    The Bottom Line

    What is most important, in the end, are not the underlying implementation details, nor even specific performance numbers on this benchmark or that. What really matters is that applications run well. The best way to check that is to actually run real-world games! A nice example of an asm.js-using game is Dead Trigger 2, a Unity 3D game:

    The video shows the game running on Firefox, but as it uses only standard web APIs, it should work in any browser. We tried it now, and it renders quite smoothly on Firefox, Chrome and Safari. We are looking forward to testing it on the next Preview version of Internet Explorer as well.

    Another example is Cloud Raiders:

    As with Unity, the developers of Cloud Raiders were able to compile their existing C++ codebase (using Emscripten) to run on the web without relying on plugins. The result runs well in all four of the major browsers.

    In conclusion, asm.js performance has made great strides over the last year. There is still room for improvement – sometimes performance is not perfect, or a particular API is missing, in one browser or another – but all major browsers are working to make sure that asm.js runs quickly. We can see that by looking at the benchmarks they are optimizing on, which contain asm.js, and in the new improvements they are implementing in their JavaScript engines, which are often motivated by asm.js. As a result, games that not long ago would have required plugins are quickly getting to the point where they can run well without them, in modern browsers across the web.

  8. Firefox Developer Edition 38: 64-bits and more

    In celebration of the 10th anniversary of Firefox, we unveiled Firefox Developer Edition, the first browser created specifically for developers. At that time, we also announced plans to ship a 64-bit version of Firefox. Today we’re happy to announce the next phase of that plan: 64-bit builds for Firefox Developer Edition are now available on Windows, adding to the already supported platforms of OS X and Linux.

    A 64-bit build is a major step toward giving users rich, desktop-quality app experiences in the browser. Let’s also take a look at some of the other features that make this a release worth noting. If you haven’t downloaded the Developer Edition browser yet, it’s a fine time to give it a try. Here’s why:

    DevEditionEpic

    Unreal demo in Win 64-bit Developer Edition

    Run larger applications

    A 32-bit browser is limited to 4GB of address space. That address space is further whittled down by fragmentation issues. Meanwhile, web applications are getting bigger and bigger. Browser-based games that deliver performant, native-like gameplay, such as those built with Epic Games’ Unreal Engine, are often much larger than we expect from traditional web applications. These games ship with large assets that must be stored in memory so they can be synchronously loaded.

    For some of the largest of these apps, a 64-bit browser means the difference between whether or not a game will run. For example, when porting to asm.js it’s recommended to keep the heap size to 512 MB in a 32-bit browser. That goes up to 2 GB in a 64-bit version of Firefox.
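    As an aside, asm.js does not accept arbitrary heap sizes: per the asm.js specification, the buffer length must be a power of two between 2^12 and 2^24 bytes, or a multiple of 2^24 (16 MB) above that. A quick sketch of that rule as a helper function (my own illustration):

```javascript
function isValidAsmHeapLength(n) {
  if (n >= 0x1000 && n <= 0x1000000) {
    return (n & (n - 1)) === 0;                  // power of two in [4 KB, 16 MB]
  }
  return n > 0x1000000 && n % 0x1000000 === 0;   // multiple of 16 MB beyond that
}

console.log(isValidAsmHeapLength(512 * 1024 * 1024));      // true: the 32-bit guideline
console.log(isValidAsmHeapLength(2 * 1024 * 1024 * 1024)); // true: the 64-bit figure
console.log(isValidAsmHeapLength(100 * 1024 * 1024));      // false: not a multiple of 16 MB
```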

    Emscripten helps port C and C++ code to run on the Web and deliver native-like performance. For an in-depth look at how assets are stored and accessed using a variety of methods in asm.js/emscripten built applications, read Alon Zakai’s post on Synchronous Execution and Filesystem Access in Emscripten.

    Gain faster execution and increased security

    64-bit Firefox just goes faster. We get access to new hardware registers and instructions to speed up JavaScript code.

    For asm.js code, the increased address space also lets us use hardware memory protection to safely remove bounds checks from asm.js heap accesses. The gains are pretty dramatic: 8%-17% on the asmjs-apps-*-throughput tests as reported on arewefastyet.com.

    The larger 64-bit address space also improves the effectiveness of ASLR (address space layout randomization), making it more difficult for web content to exploit the browser.

    Firefox Developer Edition additions and improvements

    Beyond the new 64-bit capabilities, the Firefox 38 Developer Edition release implements many new features, as it does every 6 weeks when it is updated. Some of these are described below. For all the details and associated bugs in progress, you’ll want to visit the release notes.

    WebRTC changes

    In a post about WebRTC from 2013, we documented some workarounds and limitations of WebRTC’s mozRTCPeerConnection. One fix involved adding multiple MediaStreams to one mozRTCPeerConnection and renegotiating on an existing session.

    The new version of Firefox Developer Edition fixes these issues. We now support adding multiple media streams (camera, screen sharing, audio stream) to the same mozRTCPeerConnection within a WebRTC conversation. This allows the developer to call the addStream method for each additional stream, which in turn triggers the onAddStream event for the clients.

    Renegotiation allows streams to be modified during a conversation, for example sharing a screen stream during a conversation. This is now possible without re-creating a session.

    webrtcexample

    WebRTC with multiple streams

    Last week we announced that WebRTC requires Perfect Forward Secrecy (PFS) starting in Firefox 38. We’ll dig a little deeper into details of our WebRTC implementation in an upcoming article. Stay tuned.

    The BroadcastChannel API

    The BroadcastChannel API, which allows simple messaging between browser contexts with the same user agent and origin, is now available. Here’s more detail and some ideas for how to use the BroadcastChannel API in Firefox 38.
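    A minimal sketch of the API (the channel name here is hypothetical): every same-origin context that opens a channel with the same name receives messages posted by the others, while a sender never receives its own message.

```javascript
var channel = new BroadcastChannel('app_updates');

// Fires for messages posted by OTHER contexts (tabs, iframes, workers)
// on the same origin that opened a channel named 'app_updates':
channel.onmessage = function (evt) {
  console.log('another context says:', evt.data);
};

// Broadcast any structured-cloneable value to all the other contexts:
channel.postMessage({ type: 'refresh' });

channel.close(); // stop listening and release the channel
```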

    Support for KeyboardEvent.code

    KeyboardEvent.code is now enabled by default. The code attribute gives a developer the ability to determine which physical key is pressed, regardless of the current keyboard layout or keyboard state.

    keyboard.code

    KeyboardEvent code attribute

    For more examples of use cases, see the motivation section of the UI Events Specification (formerly DOM Level 3 Events).
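    A typical use case is layout-independent game controls; here is a sketch (a hypothetical example of my own):

```javascript
// evt.code names the PHYSICAL key: "KeyW" is the key in the W position even
// on an AZERTY keyboard, where evt.key would report "z".
var MOVES = { KeyW: 'up', KeyA: 'left', KeyS: 'down', KeyD: 'right' };

function movementFor(evt) {
  return MOVES[evt.code] || null;
}

// In the browser this would be wired up roughly like:
// document.addEventListener('keydown', function (evt) {
//   var dir = movementFor(evt);
//   if (dir) { movePlayer(dir); }
// });

console.log(movementFor({ code: 'KeyW', key: 'z' }));        // "up", despite key === "z"
console.log(movementFor({ code: 'Escape', key: 'Escape' })); // null
```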

    XHR logging

    The Network Monitor already displays a great deal of information on XMLHttpRequests, but often the console is used to debug code along with network requests. In the latest Developer Edition of Firefox, the console now supports filtering XMLHttpRequests within console logging.

    xhrnet

    Network Monitor XHR Request

    xhrfilter

    XHR logging in console

    Let us know what you think

    Many additional improvements are available in this version. Download it now. Tell a friend.

    As always, you can take a close look at the Developer Edition release notes. Please be sure to share your feedback and feature ideas in the Firefox Developer Tools UserVoice channel.

  9. Birdsongs, Musique Concrète, and the Web Audio API

    In January 2015, my friend and collaborator Brian Belet and I presented Oiseaux de Même — an audio soundscape app created from recordings of birds — at the first Web Audio Conference. In this post I’d like to describe my experience of implementing this app using the Web Audio API, Twitter Bootstrap, Node.js, and REST APIs.

    Screenshot showing Birds of a Feather, a soundscape created with field recordings of birds that are being seen in your vicinity.

    Screenshot showing Birds of a Feather, a soundscape created with field recordings of birds that are being seen in your vicinity.

    What is it? Musique Concrète and citizen science

    We wanted to create a web-based Musique Concrète, building an artistic sound experience by processing field recordings. We decided to use xeno-canto — a library of over 200,000 recordings of 9,000 different bird species — as our source of recordings. Almost all the recordings are licensed under Creative Commons by their generous recordists. We select recordings from this library based on data from eBird, a database of tens of millions of bird sightings contributed by bird watchers everywhere. By using the Geolocation API to retrieve eBird sightings near the listener’s location, our soundscape can consist of recordings of bird species that bird watchers have recently reported nearby — each user gets a personalized soundscape that changes daily.

    Use of the Web Audio API

    We use the browser’s Web Audio API to play back the sounds from xeno-canto. The Web Audio API allows developers to play back, record, analyze, and process sound by creating AudioNodes that are connected together, like an old modular synthesizer.

    Our soundscape is implemented using four AudioBufferSource nodes, each of which plays a field recording in a loop. These loops are placed in a stereo field using Panner nodes, and mixed together before being sent to the listener’s speakers or headphones.

    Controls

    After all the sounds have loaded and begin playing, we offer users several controls for manipulating the sounds as they play:

    • The Pan button randomizes the spatial location of the sound in 3D space.
    • The Rate button randomizes the playback rate.
    • The Reverse button reverses the direction of sound playback.
    • Finally, the Share button lets you capture the state of the soundscape and save that snapshot for later.

    The controls described above are implemented as typical JavaScript event handlers. When the Pan button is pressed, for example, we run this handler:

    // sets the X,Y,Z position of the Panner to random values between -1 and +1
    BirdSongPlayer.prototype.randomizePanner = function() {
      this.resetLastActionTime();
      // NOTE: x = -1 is LEFT
      this.panPosition = { x: 2 * Math.random() - 1, y: 2 * Math.random() - 1, z: 2 * Math.random() - 1 };
      this.panner.setPosition(this.panPosition.x, this.panPosition.y, this.panPosition.z);
    }

    Some parts of the Web Audio API are write-only

    I had a few minor issues where I had to work around shortcomings in the Web Audio API. Other authors have already documented similar experiences; I’ll summarize mine briefly here:

    • Can’t read Panner position: In the event handler for the Share button, I want to retrieve and store the current Audio Buffer playback rate and Panner position. However, the current Panner node does not allow retrieval of the position after setting it. Hence, I store the new Panner position in an instance variable in addition to calling setPosition().

      This has had a minimal impact on my code so far. My longer-term concern is that I’d rather store the position in the Panner and retrieve it from there, instead of storing a copy elsewhere. In my experience, multiple copies of the same information become a readability and maintainability problem as code grows bigger and more complex.

    • Can’t read AudioBuffer’s playbackRate: The Rate button described above calls linearRampToValueAtTime() on the playbackRate AudioParam. As far as I can tell, AudioParams don’t let me retrieve their values after calling linearRampToValueAtTime(), so I’m obliged to keep a duplicate copy of this value in my JS object.
    • Can’t read AudioBuffer playback position: I’d like to show the user the current playback position for each of my sound loops, but the API doesn’t provide this information. Could I compute it myself? Unfortunately, after a few iterations of ramping an AudioBuffer’s playbackRate between random values, it is very difficult to compute the current playback position within the buffer. Unlike some API users, I don’t need a highly accurate position; I just want to visualize for my users when the current sound loop restarts.
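    For what it’s worth, if the full history of rate ramps were recorded, the position could in principle be recovered by integrating the playback rate over time. Here is a rough sketch of my own, assuming perfectly timed, piecewise-linear ramps — which is precisely the assumption that real scheduling jitter breaks:

```javascript
// Each ramp is { endTime, endRate }: the rate moves linearly from its
// previous value to endRate at endTime (like linearRampToValueAtTime).
// The buffer position advanced is the integral of rate over time, i.e.
// a trapezoid area per segment; looping wraps it modulo the duration.
function estimatePosition(startRate, ramps, bufferDuration) {
  var pos = 0, t = 0, rate = startRate;
  for (var i = 0; i < ramps.length; i++) {
    var dt = ramps[i].endTime - t;
    pos += dt * (rate + ramps[i].endRate) / 2; // trapezoid: average rate * time
    t = ramps[i].endTime;
    rate = ramps[i].endRate;
  }
  return pos % bufferDuration;
}

// One second at rate 1.0, then ramping up to 2.0 over the next second,
// inside a 10-second loop: 1.0 + (1.0 + 2.0) / 2 = 2.5 seconds in.
console.log(estimatePosition(1.0, [
  { endTime: 1, endRate: 1.0 },
  { endTime: 2, endRate: 2.0 }
], 10)); // 2.5
```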

    Debugging with the Web Audio inspector

    Firefox’s Web Audio inspector shows how Audio Nodes are connected to one another.

    Firefox’s Web Audio inspector shows how Audio Nodes are connected to one another.



    I had great success using Firefox’s Web Audio inspector to watch my Audio Nodes being created and interconnected as my code runs.

    In the screenshot above, you can see the four AudioBufferSources, each feeding through a GainNode and PannerNode before being summed by an AudioDestination. Note that each recording is also connected to an AnalyserNode; the analysers are used to create the scrolling amplitude graphs for each loop.

    Visualizing sound loops

    As the soundscape evolves, users often want to know which bird species is responsible for a particular sound they hear in the mix. We use a scrolling visualization for each loop that shows instantaneous amplitude, creating distinctive shapes you can correlate with what you’re hearing. The visualization uses the AnalyserNode to perform a fast Fourier transform (FFT) on the sound, which yields the amplitude of the sound at every frequency. We compute the average of all those amplitudes, and then draw that amplitude at the right edge of a Canvas. As the contents of the Canvas shift sideways on every animation frame, the result is a horizontally scrolling amplitude graph.

    BirdSongPlayer.prototype.initializeVUMeter = function() {
      // set up VU meter
      var myAnalyser = this.analyser;
      var volumeMeterCanvas = $(this.playerSelector).find('canvas')[0];
      var graphicsContext = volumeMeterCanvas.getContext('2d');
      var previousVolume = 0;
     
      requestAnimationFrame(function vuMeter() {
        // get the average, bincount is fftsize / 2
        var array =  new Uint8Array(myAnalyser.frequencyBinCount);
        myAnalyser.getByteFrequencyData(array);
        var average = getAverageVolume(array);
        average = Math.max(Math.min(average, 128), 0);
     
        // draw the rightmost line in black right before shifting
        graphicsContext.fillStyle = 'rgb(0,0,0)';
        graphicsContext.fillRect(258, 128 - previousVolume, 2, previousVolume);
     
        // shift the drawing over one pixel
        graphicsContext.drawImage(volumeMeterCanvas, -1, 0);
     
        // clear the rightmost column state
        graphicsContext.fillStyle = 'rgb(245,245,245)';
        graphicsContext.fillRect(259, 0, 1, 130);
     
        // set the fill style for the last line (matches bootstrap button)
        graphicsContext.fillStyle = '#5BC0DE';
        graphicsContext.fillRect(258, 128 - average, 2, average);
     
        requestAnimationFrame(vuMeter);
        previousVolume = average;
      });
    }

    What’s next

    I’m continuing to work on cleaning up my JavaScript code for this project. I have several user interface improvements suggested by my Mozilla colleagues that I’d like to try. And Prof. Belet and I are considering what other sources of geotagged sounds we could use to build more soundscapes. In the meantime, please try Oiseaux de Même for yourself and let us know what you think!

  10. WebRTC requires Perfect Forward Secrecy (PFS) starting in Firefox 38

    Today, we are announcing that Firefox 38 will take further measures to secure users’ communications by removing support in WebRTC for all DTLS cipher suites that do not support forward secrecy. For developers: if you have a WebRTC application or server that doesn’t support PFS ciphers, you will need to update your code.

    Forward secrecy, also known as Perfect Forward Secrecy (PFS), is a feature of a cryptographic protocol that limits the damage of a key compromise: “This means that the compromise of one [session] cannot lead to the compromise of others, and also that there is not a single secret value which can lead to the compromise of multiple [sessions]”.

    The PFS suites in TLS and DTLS use an ephemeral Diffie-Hellman key exchange (DHE) or elliptic-curve Diffie-Hellman (ECDHE) to create a different shared secret key for each session. The WebRTC security architecture recommends that PFS suites be preferred for WebRTC.

    Due to bug 102794, Firefox is unable to act as a server for DHE cipher suites. We plan to add complete DHE support, but until then we recommend the use of the ECDHE cipher suites.
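    Whether a suite offers forward secrecy can usually be read off its OpenSSL-style name: PFS suites use an ephemeral key exchange, signalled by the DHE or ECDHE prefix. A trivial helper of my own, for illustration only:

```javascript
function hasForwardSecrecy(suite) {
  return /^(ECDHE|DHE)-/.test(suite);
}

console.log(hasForwardSecrecy('ECDHE-RSA-AES128-GCM-SHA256')); // true
console.log(hasForwardSecrecy('DHE-RSA-AES256-SHA'));          // true
console.log(hasForwardSecrecy('AES128-SHA'));                  // false: static RSA key exchange
```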

    Existing users of the webrtc.org codebase who are using OpenSSL and derivatives such as BoringSSL need to update to enable ECDHE ciphers. This bug contains more details.

    If you have a WebRTC application or server that doesn’t support PFS ciphers, you should be working on getting that resolved ASAP. Firefox 38 is scheduled for Beta the week of March 30th, and a general release is planned for Tuesday, May 12th.