Firefox Articles

  1. Mozilla Introduces the First Browser Built For Developers: Firefox Developer Edition

    Developers are critical to the continued success of the Web. The content and apps they create compel us to come back to the Web every day, whether on a computer or mobile phone.

    In celebration of the 10th anniversary of Firefox, we’re excited to unveil Firefox Developer Edition, the first browser created specifically for developers.

    Ten years ago, we built Firefox for early adopters and developers to give them more choice and control. Firefox integrated WebAPIs and Add-ons to enable people to get the most out of the Web. Now we’re giving developers the whole browser as a hard-hat area, allowing us to bring front and center the features most relevant to them. Having a dedicated developer browser means we can tailor the browsing experience to what developers do every day.

    Because Firefox is part of an open-source, independent community and not part of a proprietary ecosystem, we’re able to offer features other browsers can’t by applying our tools everywhere the Web goes, regardless of platform or device.

    One of the biggest pain points for developers is having to use numerous siloed development environments in order to create engaging content or target different app stores. For these reasons, developers often end up having to bounce between different platforms and browsers, which decreases productivity and causes frustration.

    Firefox Developer Edition solves this problem by creating a focal point to streamline your development workflow. It’s a stable developer browser which is not only a powerful authoring tool but also robust enough for everyday browsing. It also adds new features that simplify the process of building for the entire Web, whether targeting mobile or desktop across many different platforms.

    If you’re an experienced developer, you’ll already be familiar with the installed tools so you can focus on developing your content or app as soon as you open the browser. There’s no need to download additional plugins or applications to debug mobile devices. If you’re a new Web developer, the streamlined workflow and the fact that everything is already set up and ready to go makes it easier to get started building sophisticated applications.

    So what’s under the hood?

    The first thing you’ll notice is the distinctive dark design running through the browser. We applied the developer tools theme to the entire browser. It’s trim and sharp and focused on saving space for the content on your screen. It also fits in with the darker look common among creative app development tools.

    We’ve also integrated two powerful new features, Valence and WebIDE, that improve workflow and help you debug other browsers and apps directly from within Firefox Developer Edition.

    Valence (previously called Firefox Tools Adapter) lets you develop and debug your app across multiple browsers and devices by connecting the Firefox dev tools to other major browser engines. Valence also extends the awesome tools we’ve built to debug Firefox OS and Firefox for Android to the other major mobile browsers, including Chrome on Android and Safari on iOS. So far these tools include the Inspector, Debugger, Console and Style Editor.

    WebIDE allows you to develop, deploy and debug Web apps directly in your browser, or on a Firefox OS device. It lets you create a new Firefox OS app (which is just a web app) from a template, or open up the code of an existing app. From there you can edit the app’s files. It’s one click to run the app in a simulator and one more to debug it with the developer tools.

    Firefox Developer Edition also includes all the tools experienced Web developers are familiar with, including:

    • Responsive Design Mode – see how your website or Web app will look on different screen sizes without changing the size of your browser window.
    • Page Inspector – examine the HTML and CSS of any Web page and easily modify the structure and layout of a page.
    • Web Console – see logged information associated with a Web page and interact with the page using JavaScript.
    • JavaScript Debugger – step through JavaScript code and examine or modify its state to help track down bugs.
    • Network Monitor – see all the network requests your browser makes, how long each request takes and details of each request.
    • Style Editor – view and edit CSS styles associated with a Web page, create new ones and apply existing CSS stylesheets to any page.
    • Web Audio Editor – inspect and interact with the Web Audio API in real time to ensure that all audio nodes are connected in the way you expect.

    Give it a try and let us know what you think. We’re keen to hear your feedback.

  2. Generational Garbage Collection in Firefox

    Generational garbage collection (GGC) has now been enabled in the SpiderMonkey JavaScript engine in Firefox 32. GGC is a performance optimization only, and should have no observable effects on script behavior.

    So what is it? What does it do?

    GGC is a way for the JavaScript engine to collect short-lived objects faster. Say you have code similar to:

    function add(point1, point2) {
        return [ point1[0] + point2[0], point1[1] + point2[1] ];
    }

    Without GGC, you will have high overhead for garbage collection (from here on, just “GC”). Each call to add() creates a new Array, and it is likely that the old arrays that you passed in are now garbage. Before too long, enough garbage will pile up that the GC will need to kick in. That means the entire JavaScript heap (the set of all objects ever created) needs to be scanned to find the stuff that is still needed (“live”) so that everything else can be thrown away and the space reused for new objects.

    If your script does not keep very many total objects live, this is totally fine. Sure, you’ll be creating tons of garbage and collecting it constantly, but the scan of the live objects will be fast (since not much is live). However, if your script does create a large number of objects and keep them alive, then the full GC scans will be slow, and the performance of your script will be largely determined by the rate at which it produces temporary objects — even when the older objects aren’t changing, and you’re just re-scanning them over and over again to discover what you already knew. (“Are you dead?” “No.” “Are you dead?” “No.” “Are you dead?”…)

    Generational collector, Nursery & Tenured

    With a generational collector, the penalty for temporary objects is much lower. Most objects will be allocated into a separate memory region called the Nursery. When the Nursery fills up, only the Nursery will be scanned for live objects. The majority of the short-lived temporary objects will be dead, so this scan will be fast. The survivors will be promoted to the Tenured region.

    The Tenured heap will also accumulate garbage, but usually at a far lower rate than the Nursery. It will take much longer to fill up. Eventually, we will still need to do a full GC, but under typical allocation patterns these should be much less common than Nursery GCs. To distinguish the two cases, we refer to Nursery collections as minor GCs and full heap scans as major GCs. Thus, with a generational collector, we split our GCs into two types: mostly fast minor GCs, and fewer slower major GCs.

    GGC Overhead

    While it might seem like we should have always been doing this, it turns out to require quite a bit of infrastructure that we previously did not have, and it also incurs some overhead during normal operation. Consider the question of how to figure out whether some Nursery object is live. It might be pointed to by a live Tenured object — for example, if you create an object and store it into a property of a live Tenured object.

    How do you know which Nursery objects are being kept alive by Tenured objects? One alternative would be to scan the entire Tenured heap to find pointers into the Nursery, but this would defeat the whole point of GGC. So we need a way of answering the question more cheaply.

    Note that these Tenured ⇒ Nursery edges in the heap graph won’t last very long, because the next minor GC will promote all survivors in the Nursery to the Tenured heap. So we only care about the Tenured objects that have been modified since the last minor (or major) GC. That won’t be a huge number of objects, so we make the code that writes into Tenured objects check whether it is writing any Nursery pointers, and if so, record the cross-generational edges in a store buffer.

    In technical terms, this is known as a write barrier. Then, at minor GC time, we walk through the store buffer and mark every target Nursery object as being live. (We actually use the source of the edge at the same time, since we relocate the Nursery object into the Tenured area while marking it live, and thus the Tenured pointer into the Nursery needs to be updated.)
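
    As a rough illustration of the barrier and the minor-GC walk, here is a purely conceptual JS sketch (the real code is C++ inside SpiderMonkey; Sets stand in for the two heap regions and all names are illustrative):

    const nursery = new Set();   // newly allocated objects
    const tenured = new Set();   // objects that survived a minor GC
    const storeBuffer = [];      // recorded Tenured-to-Nursery edges

    function allocate(obj) { nursery.add(obj); return obj; }

    // Every write into an object goes through the barrier.
    function barrieredWrite(obj, key, value) {
      if (tenured.has(obj) && nursery.has(value)) {
        storeBuffer.push({ source: obj, key });  // remember the cross-generational edge
      }
      obj[key] = value;
    }

    function minorGC() {
      // The store buffer (plus stack roots, omitted here) tells us which
      // Nursery objects are still reachable; promote them and fix up pointers.
      for (const { source, key } of storeBuffer) {
        const survivor = source[key];
        nursery.delete(survivor);
        tenured.add(survivor);   // "promotion" (really a move plus pointer update)
      }
      storeBuffer.length = 0;
      nursery.clear();           // whatever is left in the Nursery was garbage
    }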

    With a store buffer, the time for a minor GC is dependent on the number of newly-created edges from the Tenured area to the Nursery, not just the number of live objects in the Nursery. Also, keeping track of the store buffer records (or even just the checks to see whether a store buffer record needs to be created) does slow down normal heap access a little, so some code patterns may actually run slower with GGC.

    Allocation Performance

    On the flip side, GGC can speed up object allocation. The pre-GGC heap needs to be fully general. It must track in-use and free areas and avoid fragmentation. The GC needs to be able to iterate over everything in the heap to find live objects. Allocating an object in a general heap like this is surprisingly complex. (GGC’s Tenured heap has pretty much the same set of constraints, and in fact reuses the pre-GGC heap implementation.)

    The Nursery, on the other hand, just grows until it is full. You never need to delete anything, at least until you free up the whole Nursery during a minor GC, so there is no need to track free regions. Consequently, the Nursery is perfect for bump allocation: to allocate N bytes you just check whether there is space available, then increment the current end-of-heap pointer by N bytes and return the previous pointer.
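
    In sketch form (illustrative JS, not engine code; the real Nursery hands out raw memory):

    const NURSERY_SIZE = 1 << 20;              // assume a 1 MiB Nursery
    const nurseryBytes = new ArrayBuffer(NURSERY_SIZE);
    let bumpPointer = 0;                       // current end-of-heap offset

    function nurseryAlloc(nbytes) {
      if (bumpPointer + nbytes > NURSERY_SIZE) {
        minorGC();  // Nursery full: collect first (oversized requests omitted here)
      }
      const offset = bumpPointer;              // return the previous pointer...
      bumpPointer += nbytes;                   // ...after bumping past the new object
      return new DataView(nurseryBytes, offset, nbytes);
    }

    function minorGC() {
      // Promote survivors (omitted); afterwards the whole region is free again.
      bumpPointer = 0;
    }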

    There are even tricks to optimize away the “space available” check in many cases. As a result, objects with a short lifespan never go through the slower Tenured heap allocation code at all.

    Timings

    I wrote a simple benchmark to demonstrate the various possible gains of GGC. The benchmark is sort of a “vector Fibonacci” calculation, where it computes a Fibonacci sequence for both the x and y components of a two dimensional vector. The script allocates a temporary object on every iteration. It first times the loop with the (Tenured) heap nearly empty, then it constructs a large object graph, intended to be placed into the Tenured portion of the heap, and times the loop again.
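
    The post doesn’t include the benchmark source, but a minimal reconstruction from that description might look like this (all details assumed):

    function add(point1, point2) {
        return [ point1[0] + point2[0], point1[1] + point2[1] ];
    }

    function nsPerIteration(iterations) {
        let prev = [1, 1], cur = [1, 1];
        const start = Date.now();
        for (let i = 0; i < iterations; i++) {
            const next = add(prev, cur);  // one temporary Array per iteration
            prev = cur;
            cur = next;
        }
        return (Date.now() - start) / iterations * 1e6;  // ms total -> ns per iteration
    }

    console.log("empty heap:", nsPerIteration(1e7), "ns/iter");

    // Populate the Tenured heap with a large long-lived object graph...
    const longLived = [];
    for (let i = 0; i < 1e6; i++) {
        longLived.push({ index: i, payload: [i, i + 1] });
    }

    // ...and time the same loop again.
    console.log("big heap:", nsPerIteration(1e7), "ns/iter");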

    On my laptop, the benchmark shows huge wins from GGC. The average time for an iteration through the loop drops from 15 nanoseconds (ns) to 6ns with an empty heap, demonstrating the faster Nursery allocation. It also shows the independence from the Tenured heap size: without GGC, populating the long-lived heap slows down the mean time from 15ns to 27ns. With GGC, the speed stays flat at 6ns per iteration; the Tenured heap simply doesn’t matter.

    Note that this benchmark is intended to highlight the improvements possible with GGC. The actual benefit depends heavily on the details of a given script. In some scripts, the time to initialize an object is significant and may exceed the time required to allocate the memory. A higher percentage of Nursery objects may get tenured. When running inside the browser, we force enough major GCs (e.g., after a redraw) that the benefits of GGC are less noticeable.

    Also, the description above implies that we will pause long enough to collect the entire heap, which is not the case — our incremental garbage collector dramatically reduces pause times on many Web workloads already. (The incremental and generational collectors complement each other — each attacks a different part of the problem.)

  3. Building Firefox Hub Add-ons for Firefox for Android

    The Firefox Hub APIs allow add-ons to add new panels to the Firefox for Android home page, where users normally find their top sites, bookmarks and history. These APIs were introduced in Firefox 30, and there are more features and bug fixes in Firefox 31 and 32. You can already find some of these add-ons on addons.mozilla.org, and there is some boilerplate code on GitHub to help you get started.

    Overview

    There are two main parts to building a Firefox Hub add-on: creating a home panel, and storing data to show in that panel. Home panels consist of different views, each of which displays data from a given dataset.

    Creating a new home panel

    To create a home panel, first use the Home.panels API to register a panel. The register API takes a panel id and an options callback function as parameters. This options callback is called to dynamically generate an options object whenever a panel is installed or updated, which allows for dynamic locale changes.

    // Home.jsm provides the Home.panels API on Firefox for Android
    Components.utils.import("resource://gre/modules/Home.jsm");

    function optionsCallback() {
      return {
        title: "My Panel",
        views: [{
          type: Home.panels.View.LIST,
          dataset: "my.dataset@mydomain.org"
        }]
      };
    }

    Home.panels.register("my.panel@mydomain.org", optionsCallback);

    You must always register any existing panels on startup, but the first time you want the panel to actually appear on the user’s home page (e.g. when your add-on is installed), you also need to explicitly install the panel.

    Home.panels.install("my.panel@mydomain.org");

    You can modify the options callback function to customize the way data is displayed in your panel. For example, you can choose to display your data in a grid or a list, customize the view that is displayed when no data is available, or choose to launch an intent when the user taps on one of the items.
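
    For example, a grid panel whose items launch an Android intent when tapped might look roughly like this (an illustrative sketch; the constant names are as we understand the Home.panels API, so check them against the docs for your Firefox version):

    function optionsCallback() {
      return {
        title: "My Panel",
        views: [{
          type: Home.panels.View.GRID,                 // grid instead of list
          dataset: "my.dataset@mydomain.org",
          itemHandler: Home.panels.ItemHandler.INTENT  // launch an intent on tap
        }]
      };
    }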

    Storing data for the panel

    To actually show something in your new home panel, use the HomeProvider API to store data. This API allows you to asynchronously save and delete data, as well as register a callback to allow the browser to periodically sync your data for you.

    The HomeProvider API gives you access to HomeStorage objects, which you can interact with to save and delete data from a given dataset. These methods are designed to be used with Task.jsm to execute asynchronous transactions within a task.

    // HomeProvider.jsm stores panel data; Task.jsm runs the async transaction
    Components.utils.import("resource://gre/modules/HomeProvider.jsm");
    Components.utils.import("resource://gre/modules/Task.jsm");

    let storage = HomeProvider.getStorage("my.dataset@mydomain.org");
    Task.spawn(function* () {
      yield storage.save(items);
    }).then(null, Cu.reportError);
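
    The items you save are plain objects. A minimal example might look like this (property names as we recall them from the HomeProvider docs; treat this as a sketch):

    let items = [{
      url: "https://example.com/article/1",       // opened when the item is tapped
      title: "First article",
      description: "A short summary shown under the title",
      image_url: "https://example.com/thumb1.png"
    }, {
      url: "https://example.com/article/2",
      title: "Second article"
    }];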

    In Firefox 31, we expanded the save API to support replacing existing data for you, which is convenient for periodically refreshing your dataset.

    function refreshDataset() {
      let items = fetchItems(); // fetchItems() stands in for your own data-fetching code
      Task.spawn(function* () {
        yield storage.save(items, { replace: true });
      }).then(null, Cu.reportError);
    }

    HomeProvider.addPeriodicSync("my.dataset@mydomain.org", 3600, refreshDataset);

    This code snippet will ensure that our dataset is refreshed once every 3600 seconds (1 hour).

    What’s new in Firefox 32 Beta

    In addition to bug fixes, Firefox 32 also adds a few more features to the set of Firefox Hub APIs.

    Refresh handler

    In addition to support for periodically updating data, we also added support for “pull to refresh”, which gives users the power to manually refresh panel data. To take advantage of this feature, you can add an onrefresh property to your view declaration.

    function optionsCallback() {
      return {
        title: "My Panel",
        views: [{
          type: Home.panels.View.LIST,
          dataset: "my.dataset@mydomain.org",
          onrefresh: refreshDataset
        }]
      };
    }

    With this new line added, swiping down on your panel will trigger a refresh indicator and call the refreshDataset function. The refresh indicator will disappear after a save call is made for that dataset.

    Authentication view

    We added support for an authentication view, to make it easier for your add-on to use data that requires authentication. This view includes space for text and an image, as well as a button that triggers an authentication flow. To use this feature, you can add an auth property to your panel declaration.

    function optionsCallback() {
      return {
        title: "My Panel",
        views: [{
          type: Home.panels.View.LIST,
          dataset: "my.dataset@mydomain.org"
        }],
        auth: {
          authenticate: function authenticate() {
            // … do some stuff to authenticate the user …
            Home.panels.setAuthenticated("my.panel@mydomain.org", true);
          },
          messageText: "Please log in to see your data",
          buttonText: "Log in"
        }
      };
    }

    By default, the authentication view will appear when your panel is first installed, and the authenticate function will be called when the user taps the button in the view. It is up to you to call setAuthenticated(true) when the user successfully completes an authentication flow, and you can also call setAuthenticated(false) when a user becomes unauthenticated. This authentication state will persist between app runs, so it is up to you to reset it if you need to.

    Future work

    We have ideas about ways to expand these APIs, but please let us know if there is anything you would like to see! We’re also always looking for new contributors to Firefox for Android, and we’d love to help you get started writing patches.

  4. Firefox OS Apps run on Android

    At Mozilla we believe that apps and browsing are best viewed as cooperative and symbiotic, each better when working together. We are working to strengthen that relationship by building an apps ecosystem that is built using the Web technologies that so many developers are already familiar with.

    We built Firefox OS as a mobile OS that puts the Web and Open Web Apps at the centre of the mobile experience. The efforts to reduce the performance gaps between the Web and native are paying rich dividends, and our work on exposing device capabilities to the Web via WebAPIs has made web-first app development a viable alternative to native platforms.

    Build Open Web Apps, run out-of-the-box on Android

    Now, with Firefox for Android 29, Mozilla is extending this Open Web Apps ecosystem to Android. Over the past few months, we have been working on providing a “native experience” for Open Web Apps. What this means is that as a user, you can now manage your web app just like you would a native app. You can install/update/uninstall the app, and the app will also show up in the App Drawer as well as the Recent Apps list.

    As a developer, you can now build your Open Web App for Firefox OS devices and have that app reach millions of existing Firefox for Android users without having to change a single line of code!
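
    Your existing Firefox OS app already ships the one piece that makes this work: its manifest. A minimal manifest.webapp looks something like this (values are illustrative):

    {
      "name": "My App",
      "description": "A minimal Open Web App manifest",
      "launch_path": "/index.html",
      "icons": {
        "128": "/img/icon-128.png"
      },
      "developer": {
        "name": "Your Name",
        "url": "https://example.com"
      }
    }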

    Check out the video to see an Open Web App in action on an Android device.

    Better yet, if you have installed Firefox for Android, try one, or build an app and submit it to the Marketplace.

    We also recommend reading Testing Your Native Android App.

  5. Creating a Multiplayer Game with TogetherJS and CreateJS

    Bubble Hell Duel is a multiplayer HTML5 dogfighting game. The object of the game is to dodge bubbles launched from your opponent while returning fire. This game was written mainly as a prototype for learning, and the source code is available on GitHub. You can try the game out in single or multiplayer here. The game does not currently contain any sound effects; it is built with CreateJS and TogetherJS.

    In this post I would like to share some of my experiences when developing the game. Please share your thoughts in the comments if you agree or have other suggestions.

    Game Engines

    When developing a 2D game you can write your own engine or make use of some fantastic libraries that are available. After spending a few days looking at the various options available I decided to use CreateJS. As I have some experience with Flash, CreateJS made sense for my needs as there was not much of a learning curve. I also wanted to make use of some Flash animations, and CreateJS supported this feature. I will elaborate a bit more on animations later in the article.

    As I am a C++ developer I believe emscripten is also a good choice. It allows C/C++ code to be compiled to JavaScript, which can be executed in the browser. I am of the opinion that the static type checking and compile-time optimizations are great assets when developing large code bases. I have used emscripten before and it works very well, but for this project I wanted the fast and convenient prototyping capabilities of JavaScript. I also wanted to expand my JavaScript knowledge.

    I’d like to mention a few other libraries that seem very interesting: Cocos2d-x is making an emscripten port and they already support HTML5 binding. I also like pixi.js as it provides a webGL renderer but also supports Canvas fallback when the browser does not support webGL.

    C++ vs JavaScript

    At first I was a little bit worried about the performance of JavaScript, and that was the reason my decision between using CreateJS or emscripten was difficult. Fortunately a simple benchmark showed that a naive collision detection algorithm with about 400 balls on screen could still reach 40+ fps, which was enough for my simple experiment.

    As someone who has coded more in C++ than JavaScript, I loved how quickly I could translate my thoughts into code and test them out on multiple browsers. On the other hand, debugging my JavaScript was not very comfortable. C++ compilers are quite good at pointing out misspellings and other mistakes that cause runtime issues. While the “use strict” directive and tools like the Closure Compiler have their purpose, they were not very helpful to me, especially when variables became undefined. Rooting out the cause of errors can be comparatively difficult.

    As an example of difficult debugging, I encountered the following issue. I was using float numbers for coordinates and other geometric values like angles. These values were passed to the other player using the TogetherJS.send method for synchronization:

    var player = { x: 10.0, y: 10.0 };
    // Broadcast our position; the peer stores it as the enemy's position.
    TogetherJS.send({ type: 'sync', x: player.x, y: player.y });
    TogetherJS.hub.on('sync', function (msg) {
        enemy.x = msg.x;
        enemy.y = msg.y;
    });

    This worked, but lots of decimals were sent in this way, so I decided to relax the accuracy:

    TogetherJS.send({type:'sync', x:Math.round(player.x), y:Math.round(player.y) });

    Then I thought integers might not be accurate enough for collision detection, so I added more digits to the messages:

    TogetherJS.send({type:'sync', x:player.x.toFixed(2), y:player.y.toFixed(2) });

    While this seemed a reasonable solution, it actually introduced a bug that was very hard to find, and I did not notice it until I tested the game after implementing some more features. I noticed while playing the game that the opponent would never move.

    It took me hours of debugging before I could locate the cause. I do not think I would have made this mistake using C++.

    If you would like to see this bug in action take a look at this jsFiddle project. Look at the three canvas tag outputs and you will notice the third canvas contains the bug. This issue occurs because toFixed returns a string representation.
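
    The failure mode is easy to reproduce in isolation:

    var player = { x: 10.5, y: 20.25 };

    // toFixed returns a *string*, not a number:
    var msg = { x: player.x.toFixed(2), y: player.y.toFixed(2) };
    console.log(typeof msg.x);         // "string"

    // Arithmetic on the receiving side then silently concatenates:
    var enemyX = msg.x;
    enemyX += 1.5;                     // "10.501.5", not 12
    console.log(enemyX);

    // One possible fix: convert back to a number on receipt.
    enemyX = parseFloat(msg.x) + 1.5;  // 12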

    I am not sure whether the Closure Compiler would have caught this issue, but I did find in another project that it definitely helps with optimizations.

    Animation with Flash

    As with most games I wanted to use a good deal of animation. I was very familiar with creating animations in Flash and found that CreateJS supported several ways of consuming Flash animations and presenting them in HTML5. CreateJS is a set of libraries and tools used to create interactive HTML5 content. So by using CreateJS I could consume my animations as well as use the other libraries available for loop handling, resource management and, in the future, sound manipulation. For a quick introduction to CreateJS take a look at this video.

    CreateJS, which Mozilla now sponsors, offers great support for Flash animations.

    There are two ways of using Flash animations in HTML5 with CreateJS. The first option is to directly export the Flash animation in a way that you can access all the elements in their original form, including paths, transformations and tweens. The advantage to this approach is that it produces smaller files, and CreateJS allows you to transfer them into a sprite sheet on the client side, for faster rendering. Adobe Flash CS6 offers the CreateJS Toolkit plugin that allows the designer to export all the content of an animation to HTML5 files. This generally results in a JavaScript file with all the graphics and tweens, an HTML file, and a set of image files. You can open up the HTML document in your browser and see the animation.

    Another option is to export the animation into a sprite sheet: an image containing all the frames, with a JavaScript file describing the position and size of each frame. These files can be easily integrated into HTML-based games or applications via the SpriteSheet class in CreateJS. This is the approach I used for this game. To see the code where I use the SpriteSheet, have a look at this link. If you want some more detail on this approach, take a look at this video.

    I should also note that you can use a tool called Zoë to export directly to a sprite sheet or a JSON file from a Flash Animation as well.

    (Sprite sheet: Marisa)

    The above image is an example of a sprite sheet that I use in the game and was generated as described above. The original image came from the game Touhou Hisouten ~ Scarlet Weather Rhapsody, which is available at http://www.spriters-resource.com.

    Multiplayer with TogetherJS

    On my first iteration of the code the game was not multiplayer. Originally it was a single-player bullet hell game, with a boss foe randomly moving across the screen. I could not last more than 30 seconds before succumbing to withering fire. It was interesting enough that I thought multiplayer would be exciting.

    I had heard of TogetherJS not long after it was released. The jsFiddle project is powered by TogetherJS and offers an impressive collaboration mode. This led me to use TogetherJS in my game. It is also very nice that Mozilla offers a default hub server, simplifying the process of creating a multiplayer web-based game. To learn more about TogetherJS, be sure to check out this article.

    Integrating TogetherJS into my game was easy and comfortable, as it works like other event dispatcher/listener frameworks.

    With TogetherJS, I was able to implement random-match and invitation-only multiplayer modes in the game. I did face a few design challenges when designing the communication protocol.

    First off, I did not put in code to prevent cheating in the two-party communication, and assumed a certain level of trust between players. In the current design, all collision detection for a player is done locally, so in theory, by blocking the corresponding messages you can mask that you have taken damage.

    Another shortcut I took is that the bubbles of the enemy avatar are generated locally and randomly. This means that the bubbles seen coming at your avatar are not necessarily the same ones your opponent is seeing.

    In practice, neither of these shortcuts should ruin the fun of the game.

    I did encounter a couple of issues or caveats with TogetherJS:

    • I did not find a way to disable the cursor updating in TogetherJS. While this is useful in collaborative tools, I did not need it in my game.
    • I am using TogetherJS in an asymmetric way, where both players see themselves as the red-skirted avatar (Reimu). This allows for easier placement of the player at the bottom of the screen and the opponent at the top. It also means that in your opponent’s view of the game, your moves appear as the opponent’s moves, and vice versa.

    The Fun of Making Mistakes

    There are two visual effects in the game that came as unexpected surprises:

    • When a round finishes and the message ‘You Win’ or ‘You Lose’ appears, the time is frozen for a few seconds. This acts like a dramatic pause.
    • When a charge attack is released, the bullets are fixed and then gradually blown away toward the enemy.

    Neither of these effects was designed this way. I didn’t want the pause, and I wanted the bullets to continue rotating around the player upon release. However, I made mistakes, and the results seemed much better than what I had planned, so they made the final cut.

    Conclusion and Future Plans

    It is always fun learning new things. I like the fact that I could prototype and visualize pretty quickly. In the future I might add more patterns for the bullet curtains, and a few sound effects. In addition I will probably also draw more background images or possibly animate them.

    While developing the game I realized that achieving a natural and intuitive feel required more effort than I expected. This is something I have always taken for granted while playing games.

    The code is open source, so feel free to fork and play. Be sure to comment if you have any suggestions for improving the game or the existing code.

  6. Reconciling Mozilla’s Mission and W3C EME

    May 19 Update: We’ve added an FAQ below the text of the original post to address some of the questions and comments Mozilla has received regarding EME.

    With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy. Read on for some background on how we got here, and details of our implementation.

    Digital Rights Management (DRM) is a tricky issue. On the one hand content owners argue that they should have the technical ability to control how users share content in order to enforce copyright restrictions. On the other hand, the current generation of DRM is often overly burdensome for users and restricts users from lawful and reasonable use cases such as buying content on one device and trying to consume it on another.

    DRM and the Web are no strangers. Most desktop users have plugins such as Adobe Flash and Microsoft Silverlight installed. Both have contained DRM for many years, and websites traditionally use plugins to play restricted content.

    In 2013 Google and Microsoft partnered with a number of content providers including Netflix to propose a “built-in” DRM extension for the Web: the W3C Encrypted Media Extensions (EME).

    The W3C EME specification defines how to play back such content using the HTML5 <video> element, utilizing a Content Decryption Module (CDM) that implements DRM functionality directly in the Web stack. The W3C EME specification only describes the JavaScript APIs to access the CDM. The CDM itself is proprietary and is not specified in detail in the EME specification, which has been widely criticized by many, including Mozilla.

    Mozilla believes in an open Web that centers around the user and puts them in control of their online experience. Many traditional DRM schemes are challenging because they go against this principle and remove control from the user and yield it to the content industry. Instead of DRM schemes that limit how users can access content they purchased across devices we have long advocated for more modern approaches to managing content distribution such as watermarking. Watermarking works by tagging the media stream with the user’s identity. This discourages copyright infringement without interfering with lawful sharing of content, for example between different devices of the same user.

    Mozilla would have preferred to see the content industry move away from locking content to a specific device (so-called node-locking), and worked to provide alternatives.

    Instead, this approach has now been enshrined in the W3C EME specification. With Google and Microsoft shipping W3C EME, and content providers moving their content from plugins to W3C EME, Firefox users are at risk of not being able to access DRM-restricted content (e.g. Netflix, Amazon Video, Hulu), which can make up more than 30% of the downstream traffic in North America.

    We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.

    This makes it difficult for Mozilla to ignore the ongoing changes in the DRM landscape. Firefox should help users get access to the content they want to enjoy, even if Mozilla philosophically opposes the restrictions certain content owners attach to their content.

    As a result we have decided to implement the W3C EME specification in our products, starting with Firefox for Desktop. This is a difficult and uncomfortable step for us given our vision of a completely open Web, but it also gives us the opportunity to actually shape the DRM space and be an advocate for our users and their rights in this debate. The existing W3C EME systems Google and Microsoft are shipping are not open source and lack transparency for the user, two traits which we believe are essential to creating a trustworthy Web.

    The W3C EME specification uses a Content Decryption Module (CDM) to facilitate the playback of restricted content. Since the purpose of the CDM is to defy scrutiny and modification by the user, the CDM cannot be open source by design in the EME architecture. For security, privacy and transparency reasons this is deeply concerning.

    From the security perspective, for Mozilla it is essential that all code in the browser is open so that users and security researchers can see and audit the code. DRM systems explicitly rely on the source code not being available. In addition, DRM systems also often have unfavorable privacy properties. To lock content to the device DRM systems commonly use “fingerprinting” (collecting identifiable information about the user’s device) and with the poor transparency of proprietary native code it’s often hard to tell how much of this fingerprinting information is leaked to the server.

    We have designed an implementation of the W3C EME specification that satisfies the requirements of the content industry while attempting to give users as much control and transparency as possible. Due to the architecture of the W3C EME specification we are forced to utilize a proprietary closed-source CDM as well. Mozilla selected Adobe to supply this CDM for Firefox because Adobe has contracts with major content providers that will allow Firefox to play restricted content via the Adobe CDM.

    Firefox does not load this module directly. Instead, we wrap it into an open-source sandbox. In our implementation, the CDM will have no access to the user’s hard drive or the network. Instead, the sandbox will provide the CDM with only a communication mechanism to Firefox for receiving encrypted data and for displaying the results.

    Traditionally, to implement node-locking DRM systems collect identifiable information about the user’s device and will refuse to play back the content if the content or the CDM are moved to a different device.

    By contrast, in Firefox the sandbox prohibits the CDM from fingerprinting the user’s device. Instead, the CDM asks the sandbox to supply a per-device unique identifier. This sandbox-generated unique identifier allows the CDM to bind content to a single device as the content industry insists on, but it does so without revealing additional information about the user or the user’s device. In addition, we vary this unique identifier per site (each site is presented a different device identifier) to make it more difficult to track users across sites with this identifier.
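
    The post doesn’t spell out how such per-site identifiers are derived, but one plausible construction (purely illustrative, not Firefox’s actual scheme) is to hash a device-local random secret together with the requesting site’s origin, so each site sees a stable but distinct identifier:

    // Illustrative only; not Firefox's actual construction.
    async function perSiteId(deviceSecret, origin) {
      const data = new TextEncoder().encode(deviceSecret + "|" + origin);
      const digest = await crypto.subtle.digest("SHA-256", data);
      return Array.from(new Uint8Array(digest))
                  .map(b => b.toString(16).padStart(2, "0"))
                  .join("");
    }

    // Same device, different sites: unrelated-looking identifiers.
    perSiteId("device-random-secret", "https://video.example").then(console.log);
    perSiteId("device-random-secret", "https://other.example").then(console.log);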

    Adobe and the content industry can audit our sandbox (as it is open source) to assure themselves that we respect the restrictions they are imposing on us and users, which includes the handling of unique identifiers, limiting the output to streaming and preventing users from saving the content. Mozilla will distribute the sandbox alongside Firefox, and we are working on deterministic builds that will allow developers to use a sandbox compiled on their own machine with the CDM as an alternative. As with plugins today, the CDM itself will be distributed by Adobe and will not be included in Firefox. The browser will download the CDM from Adobe and activate it based on user consent.

    While we would much prefer a world and a Web without DRM, our users need it to access the content they want. Our integration with the Adobe CDM will let Firefox users access this content while trying to maximize transparency and user control within the limits of the restrictions imposed by the content industry.

    There is also a silver lining to the W3C EME specification becoming ubiquitous. With direct support for DRM we are eliminating a major use case of plugins on the Web, and in the near future this should allow us to retire plugins altogether. The Web has evolved to a comprehensive and performant technology platform and no longer depends on native code extensions through plugins.

    While the W3C EME-based DRM world is likely to stay with us for a while, we believe that eventually better systems such as watermarking will prevail, because they offer more convenience for the user, which is good for the user, but in the end also good for business. Mozilla will continue to advance technology and standards to help bring about this change.

    FAQ

    What did Mozilla announce?
    In a sentence: Mozilla is adding a new plug-in integration point to Firefox to allow an external DRM component from Adobe to supply the function of decrypting and decoding video data in a black box which is designed to make it difficult for the user to extract the decryption keys or the decrypted compressed data.

    A plug-in of this new type is called a Content Decryption Module (CDM) and is exposed to the Web via the Encrypted Media Extensions (EME) API proposed at the W3C by Google, Microsoft and Netflix (Here is a short technical explanation of EME). A CDM integrates with the HTML5 <video> and <audio> support provided by the Gecko engine instead of the <embed> or <object> elements that third parties have historically used to enable playback for video wrapped in DRM to Firefox, via software such as Adobe Flash Player and Microsoft Silverlight. We have formed a relationship with Adobe, who will distribute to end users a Firefox-compatible CDM implementing the Adobe Access DRM scheme, and Firefox will facilitate the download and installation of that CDM. Streaming services requiring DRM and implementing the EME-compatible version of Adobe Access should thereby, if they choose to, be able to stream media to Firefox Desktop users on Windows, Mac or Linux.

    Does this mean Mozilla is adding DRM to Firefox?
    No. Mozilla is providing a new integration point for third-party DRM that works with Firefox. Third-party DRM that works with Firefox is not new. Firefox (and every other browser) already provides another integration point for third parties to ship DRM: the Netscape Plugin API (NPAPI), which has been part of web browsers since 1995. What’s new is the ability of the third-party DRM to integrate with the HTML <video> element and its APIs when previously third-party DRM instead integrated with the <embed> and <object> elements. When integrating with <video>, the capabilities of the DRM component are more limited, and the browser has control over the style and accessibility of the playing video.

    Firefox, as shipped by Mozilla, will continue to be Free Software / Open Source Software.

    Why is Mozilla adding a new DRM integration point when the NPAPI already exists?
    NPAPI plug-ins come with much more than just DRM. In addition to the Adobe Access DRM component, Adobe Flash Player comes with an entire ActionScript runtime, a broad set of APIs, a graphics stack, a media stack and a networking stack. Likewise, in addition to the PlayReady DRM component, Microsoft Silverlight comes with a CLI virtual machine, a broad set of APIs, a graphics stack, a media stack and a networking stack. Driven in major part by Mozilla, the Open Web Platform is growing to match almost all the functionality that Adobe Flash Player or Microsoft Silverlight provide—with one big exception being DRM, which is necessarily non-open. The use of NPAPI plug-ins in most other situations is not as sustainable as it once was. As plugin owners move away from supporting their plugins (for example, Microsoft appears to be ending Silverlight support and Adobe has discontinued Flash for Android), Firefox cannot continue to rely on NPAPI plug-ins to provide video DRM (and thereby allow users to watch movies from major Hollywood studios).

    The new CDM integration point is a much more focused plug-in API than the NPAPI. It permits a third-party component to provide the one function that an Open Source implementation of the Open Web Platform cannot provide to Hollywood’s satisfaction: decrypting and decoding video while aiming to make it very difficult for the end-user to tamper with the process. The browser’s media stack and the associated HTML5 APIs can be used for everything else. Since a CDM has less functionality than NPAPI plug-ins, it is easier to sandbox a CDM and easier to port it to new platforms.

    Why isn’t DRM dying together with NPAPI plug-ins?
    Mozilla’s competitors don’t appear to be letting DRM die together with NPAPI (or ActiveX) plug-ins. In fact, the Encrypted Media Extensions API was developed by Microsoft, Google and Netflix, and Microsoft and Google have already implemented EME in their respective browsers.

    Netflix operates a massively popular (where available) online service that allows end-users to watch movies from major Hollywood studios and they are already serving content to Internet Explorer and Chrome OS using EME with Microsoft’s and Google’s own DRM schemes (PlayReady and Widevine).

    If Mozilla didn’t enable the possibility of installing the Adobe Access CDM for use with EME, we’d be in a situation similar to the one we were in when we did not support the H.264 codec in HTML5 video. Instead of moving away from H.264, Web sites still delivered H.264 video to Firefox users—but did it via the NPAPI using Adobe Flash Player or Microsoft Silverlight rather than via the <video> tag.

    Similarly, if Mozilla didn’t enable the use of a Hollywood-approved DRM scheme with HTML5 video using EME, Firefox users would need to continue using Flash, Silverlight or another NPAPI plugin to view Hollywood movies on Windows and Mac. As noted in the previous answer, the long-term future of that capability is in doubt, and the experience (both in terms of installation and in terms of performance) would be worse than the experience in Chrome and IE with their bundled EME CDMs. On other operating systems, Firefox users would be locked out of viewing Hollywood movies (as is the case today), but other browsers, for example Chrome on Linux and Android, would be in a position to support them.

    The ability to watch movies from major Hollywood studios is a feature users value. Netflix alone accounts for fully 1/3 of bandwidth usage in North America during the evening peak time. We expect that many users around the world would switch browsers in pursuit of this ability, or of a better experience, if Firefox provided either no experience or a worse experience (depending on operating system).

    How will Firefox facilitate the installation of the Adobe Access CDM?
    The user experience for EME in Firefox is still being considered. Users will have a choice about whether to enable use of the CDM.

    What does this mean for interoperability of the EME specification?
    The Adobe Access CDM as used with Firefox will support ISO Common Encryption (CENC). This is a way of encrypting individual tracks within an MP4 container using 128-bit AES-CTR such that the MP4 file declares the key identifiers for the AES keys needed for decryption but doesn’t contain the keys themselves. It is then up to the CDM to request the AES keys by ID from a key server that knows how to talk with the CDM. (The communication between the CDM and the key server is mediated through the EME API and a JavaScript program that can relay the EME messages to the key server over XMLHttpRequest.)
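
    In code, that relay looks roughly like the sketch below. It uses the shape of the final EME API (which evolved somewhat after this post was written), fetch in place of XMLHttpRequest, and a placeholder key system and license-server URL:

    const video = document.querySelector("video");

    video.addEventListener("encrypted", async (event) => {
      const access = await navigator.requestMediaKeySystemAccess(
        "com.example.drm",                       // placeholder key system
        [{ initDataTypes: ["cenc"],
           videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }] }]);
      const mediaKeys = await access.createMediaKeys();
      await video.setMediaKeys(mediaKeys);

      const session = mediaKeys.createSession();
      // The CDM emits opaque messages; the page just relays them to the key server.
      session.addEventListener("message", async (msg) => {
        const response = await fetch("https://keyserver.example/license", {
          method: "POST",
          body: msg.message                      // opaque bytes from the CDM
        });
        session.update(new Uint8Array(await response.arrayBuffer()));
      });
      session.generateRequest(event.initDataType, event.initData);
    });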

    It follows that a site can serve the same MP4/CENC files and the same JavaScript program to different browsers that have CDMs for different DRM schemes, as long as the site runs a distinct key server for each DRM scheme, since each DRM scheme has its own format for the EME-mediated messages between the CDM and the key server.

    So there is expected to be interoperability on the level of media files and on the level of JS code served to different browsers, but CDMs from different vendors are expected to acquire keys using mutually incompatible protocols. (The EME API sees byte buffers whose contents are opaque to EME.)

    Whether EME+CENC is an interoperability improvement depends on what you compare it to. When a content provider operates a full array of key servers for the various DRM schemes that different players may support, it will be an interoperability improvement compared to video delivered via Adobe Flash Player or Microsoft Silverlight, or via apps written for a small set of specific mobile platforms. However, if a content provider doesn’t operate a full array of key servers and caters only to a subset of the EME-relevant DRM schemes, interoperability may not be as good as that provided by current plug-ins. And no DRM scheme can provide the full interoperability benefits of DRM-less HTML5 video.

    Won’t having to support multiple key servers with mutually incompatible DRM protocols (in order to get cross-browser support) make Web publishing prohibitively expensive for independent publishers?
    DRM is a requirement imposed by the major studios onto services that license movies from them. Independent video publishers can avoid the cost of DRM by not imposing a DRM requirement on themselves.

    Which streaming services will be supported?
    This is a new agreement with Adobe and it’s too early to be certain exactly which streaming services will support it.

    Since the Adobe Access CDM contains an H.264 decoder, does this mean that the decoder can be used for non-DRM content?
    Yes. The CDM component could also be used to provide non-DRMed H.264 and/or AAC support in the <video> tag. It is not yet determined for certain where, when and if this capability will be used—that depends on the availability of other options (such as OpenH264).

    The market conditions regarding the need for H.264 support in the browser have not changed significantly since Mozilla made the decision in 2012 to provide support for it (via OS libraries or third party software). Mozilla continues to believe that patent un-encumbered codecs are best for the web, and encourages video producers to use open codecs (WebM for example) without the use of DRM.

    What does this mean for downstream users of the Firefox code base?
    The solution consists of three parts: the browser, the CDM host and the CDM.

    The CDM host is an executable distinct from the browser that communicates with the browser using an inter-process communication (IPC) mechanism. The CDM is a shared library loaded by the CDM host. The CDM host drops privileges, such as disk and network access, before calling into the CDM.

    Mozilla will develop the CDM host and is planning on making its code open source as is the norm for Mozilla-developed code. However, the CDM will refuse to work if it finds itself in a host that isn’t identical to the Mozilla-shipped CDM host executable. In other words, downstream recipients of the source code for the CDM host won’t be able to exercise the freedom to modify the CDM host without rendering it useless (unless they also make arrangements with Adobe).

    This leaves downstream users of the Firefox code base with the following options:

    1. Not supporting the Adobe Access CDM.
    2. Distributing their own browser build that retains Firefox’s IPC behavior and distributing a copy of Mozilla’s CDM host executable.
    3. Distributing their own browser build that retains Firefox’s IPC behavior and distributing a self-built CDM host executable that is bit-identical to Mozilla’s CDM host executable. (I.e. this requires doing the work to achieve deterministic builds for the CDM host.)
    4. Making arrangements directly with Adobe to get a non-Mozilla CDM host executable recognized by the CDM.

    Do I have to run proprietary software in order to use Firefox?
    No. The Adobe Access CDM is entirely optional. However, we expect Hollywood studios, via their video streaming partners, to deny you access to view their content using the <video> tag if you choose not to use it.

    Does this mean applying DRM to HTML?
    No, this is about enabling DRM to be applied to video and audio tracks when played using HTML facilities. The DRM doesn’t apply to the HTML document that contains the video or audio element, to page images, or anything else other than video and audio tracks. There are no plans to support DRM for captioning data, for example. Mozilla strongly opposes any future expansion in scope of the W3C EME specification.

    Why is DRM supported for the <audio> element?
    “Audio” is a subset of “video with audio,” so if we restricted DRM to the <video> element, those who wished to use DRM with audio would just use a “video-less video.”

    Also, even though record labels gave up on DRM for music files that are sold to users, they still require DRM for music subscription services (that is, services where the user loses the ability to play the music upon terminating the subscription). Support for EME in the <audio> element helps those services move off NPAPI plug-ins.

  7. It’s a wrap! “App Basics for FirefoxOS” is out and ready to get you started

    A week ago we announced a series of video tutorials around creating HTML5 apps for Firefox OS. Now we have released all the videos, and you can watch the series in one go.

    (Photo by Olliver Hallmann)

    The series is aimed at web developers who want to build their first HTML5 application. Specifically, it is meant to be distributed in emerging markets, where Firefox OS is the first option for getting an affordable smartphone and starting to sell apps to the audiences there.

    Over the last week, we released the videos of the series, one each day.

    Yesterday we announced the last video in the series. For all of you who asked for the whole series to watch in one go, you now have the chance to do so.

    There are various resources you can use:

    What’s next?

    There will be more videos on similar topics coming in the future and we are busy getting the videos dubbed in other languages. If you want to help us get the word out, check the embedded versions of the videos on Codefirefox.com, where we use Amara to allow for subtitles.

    Speaking of subtitles and transcripts, we are currently considering both, depending on demand. If you think this would be a very useful thing to have, please tell us in the comments.

    Thanks

    Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor, Brian Bondy from Khan Academy, and Paul Jarrat and Chris Heilmann of Mozilla for making all of this possible. Technologies used to make this happen were Screenflow, Amazon S3, Vid.ly by encoding.com and YouTube.

  8. Introducing the Canvas Debugger in Firefox Developer Tools

    The Canvas Debugger is a new tool we’ll be demoing at the Game Developers Conference in San Francisco. It’s a tool for debugging animation frames rendered on a Canvas element. Whether you’re creating a visualization, animation or debugging a game, this tool will help you understand and optimize your animation loop. It will let you debug either a WebGL or 2D Canvas context.

    (Screenshot: the Canvas Debugger)

    You can debug an animation using a traditional debugger, like our own JavaScript Debugger in Firefox’s Developer Tools. However, this can be difficult, as it becomes a manual search for all of the various canvas methods you may wish to step through. The Canvas Debugger is designed to let you view the rendering calls from the perspective of the animation loop itself, giving you a much better overview of what’s happening.

    How it works

    The Canvas Debugger works by creating a snapshot of everything that happens while rendering a frame. It records all canvas context method calls. Each frame snapshot contains a list of context method calls and the associated JavaScript stack. By inspecting this stack, a developer can trace the call back to the higher level function invoked by the app or engine that caused something to be drawn.

    Certain types of Canvas context functions are highlighted to make them easier to spot in the snapshot. Quickly scrolling through the list, a developer can easily spot draw calls or redundant operations.
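
    For instance, in a minimal loop like the one below, every context call becomes one entry in the frame snapshot, and the drawing calls are the ones highlighted:

    const canvas = document.querySelector("canvas");
    const ctx = canvas.getContext("2d");
    let x = 0;

    function frame() {
      ctx.clearRect(0, 0, canvas.width, canvas.height); // recorded
      ctx.save();                                       // recorded
      ctx.translate(x, 20);                             // recorded
      ctx.fillStyle = "tomato";
      ctx.fillRect(0, 0, 32, 32);                       // recorded and highlighted as a draw call
      ctx.restore();                                    // recorded
      x = (x + 1) % canvas.width;
      requestAnimationFrame(frame);                     // ends this frame's snapshot
    }
    requestAnimationFrame(frame);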

    (Screenshot: call highlighting detail)

    Each draw call has an associated screenshot arranged in a timeline at the bottom of the screen as a “film-strip” view. You can “scrub” through this film-strip using a slider to quickly locate a draw call associated with a particular bit of rendering. You can also click a thumbnail to be taken directly to the associated draw call in the animation frame snapshot.

    (Screenshot: the draw call timeline)

    The thumbnail film-strip gives you a quick overview of the drawing process. You can easily see how the scene is composed to arrive at the final rendering.

    Stepping Around

    You might notice a familiar row of buttons in the attached screenshot. They’ve been borrowed from the JavaScript Debugger and provide the developer a means to navigate through the animation snapshot. These buttons may change their icons at final release, but for now, we’ll describe them as they currently look.

    (Screenshot: the stepping buttons)

    • “Resume” – Jumps to the next draw call.
    • “Step Over” – Steps over the current context call.
    • “Step Out” – Jumps out of the animation frame (typically to the next requestAnimationFrame call).
    • “Step In” – Steps to the next non-context call in the JavaScript debugger.

    Jumping to the JavaScript debugger by “stepping in” on a snapshot function call, or via a function’s stack, allows you to add a breakpoint and instantly pause if the animation is still running. Much convenience!

    Future Work

    We’re not done. We have some enhancements to make this tool even better.

    • Add the ability to inspect the context’s state at each method call, and highlight the differences in state between calls.
    • Measure the time spent in each draw call. This will readily show expensive canvas operations.
    • Make it easier to know which programs and shaders are currently in use at each draw call, allowing you to jump to the Shader Editor and tinker with shaders in real time. Better linkage to the Shader Editor in general.
    • Inspect hit regions, by either drawing individual regions separately (colored differently by id) or showing the hit region id of a pixel when hovering over the preview panel with the mouse.

    And we’re just getting started. The Canvas Debugger should be landing in Firefox Nightly any day now. Watch this space for news of its landing and more updates.

  9. App basics for Firefox OS – a screencast series to get you started

    Over the next few days we’ll release a series of screencasts explaining how to start your first Open Web App and develop for Firefox OS.

    (Video: Firefox OS - Intro and hello)

    Each of the screencasts is terse enough to watch in a short break and the whole series should not take you more than an hour of your time. The series features Jan Jongboom (@janjongboom), Sergi Mansilla (@sergimansilla) of Telenor Digital and Chris Heilmann (@codepo8) of Mozilla and was shot in three days in Oslo, Norway at the offices of Telenor Digital in February 2014.

    Here are the three of us telling you about the series and what to expect:

    Firefox OS is an operating system that brings the web to mobile devices. Instead of being a new OS with new technologies and development environments, it builds on standardised web technologies that have been in use for years now. If you are a web developer and you want to build a mobile app, Firefox OS gives you the tools to do so without having to change your workflow or learn a totally new development environment. In this series of short videos, developers from Mozilla and Telenor met in Oslo, Norway to explain in a few steps how you can get started building applications for Firefox OS. You’ll learn:

    • how to build your first application for Firefox OS
    • how to debug and test your application both on the desktop and the real device
    • how to get it listed in the marketplace
    • how to use the APIs and special interfaces Firefox OS offers a JavaScript developer to take advantage of the hardware available in smartphones.

    In addition to the screencasts, you can download the accompanying code samples from GitHub. If you want to try the code examples out for yourself, you will need to set up a very simple development environment. All you need is:

    • A current version of Firefox (which comes out of the box with the developer tools you need) – we recommend getting Firefox Aurora or Nightly if you really want to play with the state-of-the-art technology.
    • A text editor – in the screencasts we used Sublime Text, but any will do. If you want to be really web native, you can try Adobe Brackets.
    • A local server or a server to push your demo files to. A few of the demo apps need HTTP connections instead of local ones.

    (Photo: Sergi and Chris recording)

    Over the next few days we’ll cover the following topics:

    In addition to the videos, you can also go to the Wiki page of the series to get extra information and links on the subjects covered.

    Come back here to see the links appear day by day or follow us on Twitter at @mozhacks to get information when the next video is out.

    (Photo: Jan recording his video)

    Once the series is out, there’ll be a Wiki resource to get them all in one place. Telenor are also working on getting these videos dubbed in different languages. For now, stay tuned.

    Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor for making all of this possible.

  10. Upcoming changes to the Firefox Developer tools node picker

    If you are a user of the Firefox Developer Tools, you’ll soon see a change to the node picker in the Page Inspector component.

    As documented on Bugzilla and reported by Patrick Brosset, these changes mean:

    • The node inspect button in the devtools has moved from the inspector-panel toolbar, on the left, to the toolbox toolbar, on the right:
      (Screenshot: new node highlighter position in the devtools)

    • The highlighter is shown as you hover over nodes in the markup-panel (instead of having to click on them).
    • The “lock” state is gone. Once a node is selected in the markup-panel, or by using the inspect button and clicking on the page, the highlighter no longer stays visible until you select another node. The old behavior was sometimes frustrating, as the lingering highlighter could hide things you wanted to see.

    You can see the new functionality in action on YouTube.

    This brings user interaction in line with other developer tools and makes it easier to move between nodes should you have picked the wrong one.

    Are there any other things you’d like to see in the Firefox Developer Tools? Tell us, and don’t be shy to get involved and file bugs.