Mobile Articles



  1. Pre-orders start today for Flame, the Firefox OS developer phone

    Update 2014-07-15: The pre-order period has ended and the Flame is now available as a standard order, with shipping in 7-10 days.

    The Firefox OS Flame reference device that we announced at the end of February is now available for pre-order for $170, including free shipping.

    Pre-order now.

    To standardize the design, development, and testing of Firefox OS, Mozilla has partnered with a company called Thundersoft to manufacture, distribute, and update a Firefox OS reference phone called the Flame. Until now, there has been no “reference device” and the options for getting something through retail were limited.

    Mid-tier phone hardware

    The Flame is representative of the mid-tier phone hardware Mozilla and its partners are targeting over the coming year. It was designed for our developer and contributor community, so we worked with the manufacturer to keep the price as low as possible. We’re excited that we are able to bring a high quality reference device to our developer community at an affordable price.

    The Flame will also be a great development and testing environment for third-party app developers targeting Firefox OS and HTML5. The phone offers a software-configurable RAM option that can be made to emulate many different devices that will be commercially available later this year.

    Our partner will provide the Flame with updates to each new major version of Firefox OS and a simple mechanism for switching between different release channels, including Nightly builds that will keep you at the forefront of Firefox OS development.

    If you’ve held off on getting a Firefox OS phone because they weren’t available in your region or the phones in market didn’t meet your development and testing needs, don’t miss out on the opportunity to pre-order one of these Flame reference phones today.

    Specifications & unlocked!

    The Flame is unlocked from any network and comes with the bootloader unlocked.

    • Qualcomm MSM8210 Snapdragon, 1.2GHz dual-core processor
    • 4.5” screen (FWVGA 854×480 pixels)
    • Cameras: Rear: 5MP with auto-focus and flash / Front: 2MP
    • Frequency: GSM 850/900/1800/1900MHz
      UMTS 850/900/1900/2100MHz
    • 8GB memory, MicroSD slot
    • 256MB – 1GB RAM (adjustable by developer)
    • A-GPS, NFC
    • Dual SIM Support
    • Battery capacity: 1,800 mAh
    • WiFi: 802.11 b/g/n, Bluetooth 3.0, Micro USB

    NOTE: Once pre-ordered, the Flame will take approximately four weeks before it ships. The Flame ships free to anywhere in the world except for Japan. If you want to pre-order a Flame device certified for use in Japan, please visit here for more information.

    For more information:
    Mozilla Developer Network guide to the Flame reference phone

  2. Offline Web Applications

    The network is a key component of any web application, whether it is used to download JavaScript, CSS, and HTML source files and accompanying resources (images, videos, …) or to reach web services (XMLHttpRequest and form submissions).

    Yet having offline support for web applications can be very useful to users. Imagine, for example, a webmail application that allows users to read emails already in their inbox and write new messages even when they are not connected.

    The mechanism used to support offline web applications can also be used to improve an application’s performance by storing data in the cache or to make data persistent between user sessions and when reloading and restoring pages.

    Demo: a To Do List Manager

    To see an offline web application in action, watch Vivien Nicolas’ demo (OGV, MP4), which shows a to do list manager working online and offline on an N900 running Firefox.

    You can also check out the live demo of the application.

    Creating your Own Offline Application

    For a web application to work offline, you need to consider three things: persistent data storage (DOM storage), a way to know whether the user is online or offline, and a list of the resources that must be available offline (the cache manifest).

    Let’s see how to use each of these components.

    Storage: Persistent Data

    DOM storage lets you store data between browser sessions, share data between tabs and prevent data loss (for example from page reloads or browser restarts). The data are stored as strings (for example a JSONified JavaScript object) in a Storage object.

    There are two kinds of storage global objects: sessionStorage and localStorage.

    • sessionStorage maintains a storage area that’s available for the duration of the page session. A page session lasts for as long as the browser is open and survives over page reloads and restores. Opening a page in a new tab or window causes a new session to be initiated.
    • localStorage maintains a storage area that can be used to hold data over a long period of time (e.g. over multiple pages and browser sessions). It’s not destroyed when the user closes the browser or switches off the computer.

    Both localStorage and sessionStorage use the following API:

    window.localStorage and window.sessionStorage {
      long length; // Number of items stored
      string key(long index); // Name of the key at index
      string getItem(string key); // Get value of the key
      void setItem(string key, string data); // Add a new key with value data
      void removeItem(string key); // Remove the item key
      void clear(); // Clear the storage
    }

    Here is an example showing how to store and how to read a string:

    // save the string
    function saveStatusLocally(txt) {
      window.localStorage.setItem("status", txt);
    }

    // read the string
    function readStatus() {
      return window.localStorage.getItem("status");
    }

    Note that the storage properties are limited to an HTML5 origin (scheme + hostname + non-standard port). This means that window.localStorage on one origin is a different instance from window.localStorage on another: for example, a page served over http can’t access the storage of the same site served over https, and one hostname can’t access the storage of another.
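    The origin scoping described above can be sketched as a small function; storageOrigin() is an illustrative helper, not a browser API, using the URL parser just to show which parts of a URL form the storage key:

```javascript
// Sketch: two pages share a localStorage instance only when their
// scheme + host + port tuples (their HTML5 "origin") match.
function storageOrigin(urlString) {
  var u = new URL(urlString);
  // u.port is "" when the scheme's default port is used
  return u.protocol + "//" + u.hostname + (u.port ? ":" + u.port : "");
}
```

    So two paths on the same host share storage, while changing the scheme or the port yields a separate instance.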

    Are We Offline?

    Before storing data, you may want to know if the user is online or not. This can be useful, for example, to decide whether to store a value locally (client side) or to send it to the server.

    Check if the user is online with the navigator.onLine property.
    In addition, you can be notified of any connectivity changes by listening for the online and offline events on the window object.

    Here is a very simple piece of JavaScript code which sends your status to a server (à la Twitter).

    • If you set your status and you’re online, it sends the status.
    • If you set your status and you’re offline, it stores your status.
    • If you go online and have a stored status, it sends the stored status.
    • If you load the page, are online, and have a stored status, it sends the stored status.
    // sendToServer() is assumed to be defined elsewhere
    function whatIsYourCurrentStatus() {
      var status = window.prompt("What is your current status?");
      if (!status) return;
      if (navigator.onLine) {
        sendToServer(status);
      } else {
        saveStatusLocally(status);
      }
    }

    function sendLocalStatus() {
      var status = readStatus();
      if (status) {
        sendToServer(status);
        window.localStorage.removeItem("status");
      }
    }

    window.addEventListener("load", function() {
      if (navigator.onLine) {
        sendLocalStatus();
      }
    }, true);

    window.addEventListener("online", function() {
      sendLocalStatus();
    }, true);

    window.addEventListener("offline", function() {
      alert("You're now offline. If you update your status, it will be sent when you go back online");
    }, true);

    Offline Resources: the Cache Manifest

    When offline, a user’s browser can’t reach the server to get any files that might be needed. You can’t always count on the browser’s cache to include the needed resources because the user may have cleared the cache, for example. This is why you need to define explicitly which files must be stored so that all needed files and resources are available when the user goes offline: HTML, CSS, JavaScript files, and other resources like images and video.

    The manifest file is specified in the HTML and contains the explicit list of files that should be cached for offline use by the application.

    <html manifest="offline.manifest">

    Here is an example of the contents of a manifest file (the file names are illustrative):

    CACHE MANIFEST
    # Files to cache for offline use
    index.html
    style.css
    app.js
    images/logo.png
    The MIME type of the manifest file must be text/cache-manifest.
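    How that type is declared depends on your server; as an illustration, for Apache it could be a one-line addition to an .htaccess file (the .manifest extension here is just a convention):

```
AddType text/cache-manifest .manifest
```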

    See the documentation for more details on the manifest file format and cache behavior.


    The key things to remember when making your application work offline are: store user input in localStorage, create a cache manifest file, and monitor connectivity changes.

    Visit the Mozilla Developer Center for the complete documentation.

  3. User-Agent detection, history and checklist


    User-Agent: <something> is a string of characters sent by HTTP clients (browsers, bots, calendar applications, etc.) with each individual HTTP request to a server. The HTTP protocol as defined in 1991 didn’t have this field, but the next version, defined in 1992, added User-Agent to the HTTP request headers. Its syntax was defined as “the software product name, with an optional slash and version designator”. The prose already invited people to use it for analytics and to identify the products with implementation issues.

    This line if present gives the software program used by the original client. This is for statistical purposes and the tracing of protocol violations. It should be included.

    Fast-forward to August 2013: the HTTP/1.1 specification is being revised and again defines User-Agent.

    A user agent SHOULD NOT generate a User-Agent field containing
    needlessly fine-grained detail and SHOULD limit the addition of
    subproducts by third parties. Overly long and detailed User-Agent
    field values increase request latency and the risk of a user being
    identified against their wishes (“fingerprinting”).

    Likewise, implementations are encouraged not to use the product
    tokens of other implementations in order to declare compatibility
    with them
    , as this circumvents the purpose of the field. If a user
    agent masquerades as a different user agent, recipients can assume
    that the user intentionally desires to see responses tailored for
    that identified user agent, even if they might not work as well for
    the actual user agent being used.

    Basically, the HTTP specification has discouraged, since its inception, using the User-Agent string to tailor the user experience. Yet user agent strings have become overly long, and they are abused in every possible way: they include detailed information, they lie about what they really are, and they are used for branding and advertising the devices they run on.

    User-Agent Detection

    User agent detection (or sniffing) is the mechanism of parsing the User-Agent string and inferring physical and software properties of the device and its browser. But let’s set the record straight: User-Agent sniffing is a future-fail strategy. By design, you detect only what is already known, not what is to come. The space of small devices (smartphones, feature phones, tablets, watches, Arduino boards, etc.) evolves very quickly, and its diversity in physical characteristics will only increase. Keeping databases and algorithms up to date so that they identify devices correctly is a very high-maintenance task that is doomed to fail at some point. Sites get abandoned, libraries stop being maintained, and Web sites break simply because they were not designed for devices that had not been invented yet. All of this has costs, in resources and in branding.

    New solutions are being developed to help people adjust the user experience to the capabilities of a product, not to its name. Responsive design helps create Web sites that adjust to different screen sizes. Each time you detect a product or a feature, it is important to understand thoroughly why you are trying to detect it; otherwise you may fall into the same traps as existing user agent detection algorithms.

    We have to deal on a daily basis with abusive user agent detection blocking Firefox OS and/or Firefox on Android. And it is not only Mozilla products: every product and brand eventually has to deal with being excluded because it didn’t carry the right token to pass an ill-coded algorithm. User agent detection leads to situations where a new player can hardly enter the market, even with the right set of technologies. Remember that there are huge benefits to creating a system that is resilient to many situations.

    Some companies use the User-Agent string as an identifier for bypassing a pay-wall or offering specific content to a group of users during a marketing campaign. It seems an easy solution at first, but it creates an environment that is easy to defeat by spoofing the user agent.

    Firefox and Mobile

    Firefox OS and Firefox on Android have very simple documented User-Agent strings.

    Firefox OS

    Mozilla/5.0 (Mobile; rv:18.0) Gecko/18.0 Firefox/18.0

    Firefox on Android

    Mozilla/5.0 (Android; Mobile; rv:18.0) Gecko/18.0 Firefox/18.0

    The most common case of user agent detection is checking whether the device is a mobile one, in order to redirect the browser to a dedicated Web site tailored with mobile content. We recommend limiting your detection to the simplest possible string by matching the substring mobi in lowercase.


    If you are detecting on the client side with JavaScript, one possibility among many would be to do:

    // Put the User-Agent string in lowercase
    var ua = navigator.userAgent.toLowerCase();
    // Better to test on mobi than mobile (Firefox, Opera, IE)
    if (/mobi/i.test(ua)) {
        // do something here
    } else {
        // if not identified, still do something useful
    }
    You might want to add more than one token in the if statement.
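    As a sketch of what a multi-token check might look like (the token list here is an illustrative guess, and any such list will inevitably age):

```javascript
// Sketch: match any of several "mobile-ish" tokens in a
// User-Agent string. The token list is illustrative only.
function looksMobile(ua) {
  return /mobi|android|touch|mini/i.test(ua || "");
}
```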


    Remember that however many tokens you put there, you will fail at some point in the future. Some devices will not have JavaScript, or will not have the right token, or the pattern or length of the token will not be what you had initially planned. The stones on the path are many; choose the way of simplicity.

    Summary: UA detection Checklist Zen

    1. Do not detect user agent strings
    2. Use responsive design for your new mobile sites (media queries)
    3. If you are using a specific feature, use feature detections to enhance, not block
    4. If you must use UA detection, go with the simplest and most generic strings possible.
    5. Always provide a working fallback, whatever solution you choose.

    Practice. Learn. Imagine. Modify. And start again. There will be many road blocks on the way depending on the context, the business requirements, the social infrastructure of your own company. Keep this checklist close to you and give the Web to more people.

  4. H.264 video in Firefox for Android

    Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video in Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders. The challenges posed by H.264 patents and royalties have been documented elsewhere.

    Supported devices

    Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and on any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release (Bug 787228).

    To test whether Firefox supports H.264 on your device, try playing this “Big Buck Bunny” video.
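    Beyond trying a video by hand, a page can also ask the browser directly; a sketch using the standard canPlayType method (the helper takes the video element as a parameter, and its name is illustrative):

```javascript
// Sketch: feature-detect H.264 support via HTMLMediaElement.canPlayType.
// Pass any object with a canPlayType method; in a page that would be
// document.createElement('video').
function canPlayH264(video) {
  if (!video || typeof video.canPlayType !== 'function') return false;
  // canPlayType returns "", "maybe" or "probably"
  return video.canPlayType('video/mp4; codecs="avc1.42E01E"') !== "";
}
```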

    Testing H.264

    If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.

    If Firefox does not recognize your hardware decoder, it will use a safer (but slower) software decoder. Daring users can manually enable hardware decoding. Enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems (Bug 797225). The most likely problems you will encounter are videos with green lines or crashes.

    Giving feedback/reporting bugs

    If you find any video bugs, please file a bug report here so we can fix them! Please include your device model, Android OS version, the URL of the video, and any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.

  5. Detecting touch: it's the 'why', not the 'how'

    One common aspect of making a website or application “mobile friendly” is the inclusion of tweaks, additional functionality or interface elements that are particularly aimed at touchscreens. A very common question from developers is now “How can I detect a touch-capable device?”

    Feature detection for touch

    Although there used to be a few incompatibilities and proprietary solutions in the past (such as Mozilla’s experimental, vendor-prefixed event model), almost all browsers now implement the same Touch Events model (based on a solution first introduced by Apple for iOS Safari, which subsequently was adopted by other browsers and retrospectively turned into a W3C draft specification).

    As a result, being able to programmatically detect whether or not a particular browser supports touch interactions involves a very simple feature detection:

    if ('ontouchstart' in window) {
      /* browser with Touch Events
         running on touch-capable device */
    }
    This snippet works reliably in modern browsers, but older versions notoriously had a few quirks and inconsistencies which required jumping through various detection-strategy hoops. If your application targets these older browsers, I’d recommend having a look at Modernizr – and in particular its various touch test approaches – which smooths over most of these issues.

    I noted above that “almost all browsers” support this touch event model. The big exception here is Internet Explorer. While up to IE9 there was no support for any low-level touch interaction, IE10 introduced support for Microsoft’s own Pointer Events. This event model – which has since been submitted for W3C standardisation – unifies “pointer” devices (mouse, stylus, touch, etc) under a single new class of events. As this model does not, by design, include any separate ‘touch’, the feature detection for ontouchstart will naturally not work. The suggested method of detecting if a browser using Pointer Events is running on a touch-enabled device instead involves checking for the existence and return value of navigator.maxTouchPoints (note that Microsoft’s Pointer Events are currently still vendor-prefixed, so in practice we’ll be looking for navigator.msMaxTouchPoints). If the property exists and returns a value greater than 0, we have touch support.

    if (navigator.msMaxTouchPoints > 0) {
      /* IE with pointer events running
         on touch-capable device */
    }

    Adding this to our previous feature detect – and also including the non-vendor-prefixed version of the Pointer Events one for future compatibility – we get a still reasonably compact code snippet:

    if (('ontouchstart' in window) ||
         (navigator.maxTouchPoints > 0) ||
         (navigator.msMaxTouchPoints > 0)) {
          /* browser with either Touch Events or Pointer Events
             running on touch-capable device */
    }
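    For reuse, the combined detect can be wrapped in a small helper; a sketch, with window and navigator passed in as parameters purely so the logic can be exercised outside a browser:

```javascript
// Sketch: combined Touch Events / Pointer Events touch detection.
// In a page you would call hasTouchSupport(window, navigator).
function hasTouchSupport(win, nav) {
  return ('ontouchstart' in win) ||
         ((nav.maxTouchPoints || 0) > 0) ||
         ((nav.msMaxTouchPoints || 0) > 0);
}
```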

    How touch detection is used

    Quite a few commonly-used techniques for “touch optimisation” already take advantage of these sorts of feature detects. The most common use case for detecting touch is to increase the responsiveness of an interface for touch users.

    When using a touchscreen interface, browsers introduce an artificial delay (in the range of about 300ms) between a touch action – such as tapping a link or a button – and the time the actual click event is being fired.

    More specifically, in browsers that support Touch Events the delay happens between touchend and the simulated mouse events that these browsers also fire for compatibility with mouse-centric scripts:

    touchstart > [touchmove]+ > touchend > delay > mousemove > mousedown > mouseup > click

    See the event listener test page to see the order in which events are being fired, code available on GitHub.

    This delay has been introduced to allow users to double-tap (for instance, to zoom in/out of a page) without accidentally activating any page elements.

    It’s interesting to note that Firefox and Chrome on Android have removed this delay for pages with a fixed, non-zoomable viewport.

    <meta name="viewport" content="... user-scalable = no ...">

    See the event listener with user-scalable=no test page, code available on GitHub.

    There is some discussion of tweaking Chrome’s behavior further for other situations – see issue 169642 in the Chromium bug tracker.

    Although this affordance is clearly necessary, it can make a web app feel slightly laggy and unresponsive. One common trick has been to check for touch support and, if present, react directly to a touch event (either touchstart – as soon as the user touches the screen – or touchend – after the user has lifted their finger) instead of the traditional click:

    /* if touch supported, listen to 'touchend', otherwise 'click' */
    var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');
    blah.addEventListener(clickEvent, function() { ... });

    Although this type of optimisation is now widely used, it is based on a logical fallacy which is now starting to become more apparent.

    The artificial delay is also present in browsers that use Pointer Events.

    pointerover > mouseover > pointerdown > mousedown > pointermove > mousemove > pointerup > mouseup > pointerout > mouseout > delay > click

    Although it’s possible to extend the above optimisation approach to check navigator.maxTouchPoints and to then hook up our listener to pointerup rather than click, there is a much simpler way: setting the touch-action CSS property of our element to none eliminates the delay.

    /* suppress default touch action like double-tap zoom */
    a, button {
      -ms-touch-action: none;
          touch-action: none;
    }

    See the event listener with touch-action:none test page, code available on GitHub.

    False assumptions

    It’s important to note that these types of optimisations based on the availability of touch have a fundamental flaw: they make assumptions about user behavior based on device capabilities. More explicitly, the example above assumes that because a device is capable of touch input, a user will in fact use touch as the only way to interact with it.

    This assumption probably held some truth a few years back, when the only devices that featured touch input were the classic “mobile” and “tablet”. Here, touchscreens were the only input method available. In recent months, though, we’ve seen a whole new class of devices which feature both a traditional laptop/desktop form factor (including a mouse, trackpad, keyboard) and a touchscreen, such as the various Windows 8 machines or Google’s Chromebook Pixel.

    As an aside, even in the case of mobile phones or tablets, it was already possible – on some platforms – for users to add further input devices. While iOS only caters for pairing an additional bluetooth keyboard to an iPhone/iPad purely for text input, Android and Blackberry OS also let users add a mouse.

    On Android, this mouse will act exactly like a “touch”, even firing the same sequence of touch events and simulated mouse events, including the dreaded delay in between – so optimisations like our example above will still work fine. Blackberry OS, however, purely fires mouse events, leading to the same sort of problem outlined below.

    The implications of this change are slowly beginning to dawn on developers: that touch support does not necessarily mean “mobile” anymore, and more importantly that even if touch is available, it may not be the primary or exclusive input method that a user chooses. In fact, a user may even transition between any of their available input methods in the course of their interaction.

    The innocent code snippets above can have quite annoying consequences on this new class of devices. In browsers that use Touch Events:

    var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');

    is basically saying “if the device supports touch, only listen to touchend and not click” – which, on a multi-input device, immediately shuts out any interaction via mouse, trackpad or keyboard.

    Touch or mouse?

    So what’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities. Specifically, instead of making the decision about whether to react to click or touchend/touchstart mutually exclusive, these should all be taken into consideration as complementary.

    Certainly, this may involve a bit more code, but the end result will be that our application will work for the largest number of users. One approach, already familiar to developers who’ve strived to make their mouse-specific interfaces also work for keyboard users, would be to simply “double up” your event listeners (while taking care to prevent the functionality from firing twice by stopping the simulated mouse events that are fired following the touch events):

    blah.addEventListener('touchend', function(e) {
      /* prevent delay and simulated mouse events */
      e.preventDefault();
      someFunction();
    });
    blah.addEventListener('click', someFunction);

    If this isn’t DRY enough for you, there are of course fancier approaches, such as only defining your functions for click and then bypassing the dreaded delay by explicitly firing that handler:

    blah.addEventListener('touchend', function(e) {
      /* prevent delay and simulated mouse events */
      e.preventDefault();
      /* trigger the actual behavior we bound to the 'click' event */
      e.target.click();
    });
    blah.addEventListener('click', function() {
      /* actual functionality */
    });

    That last snippet does not cover all possible scenarios though. For a more robust implementation of the same principle, see the FastClick script from FT labs.

    Being input-agnostic

    Of course, battling with delay on touch devices is not the only reason why developers want to check for touch capabilities. Current discussions – such as this issue in Modernizr about detecting a mouse user – now revolve around offering completely different interfaces to touch users, compared to mouse or keyboard, and whether or not a particular browser/device supports things like hovering. And even beyond JavaScript, similar concepts (pointer and hover media features) are being proposed for Media Queries Level 4. But the principle is still the same: as there are now common multi-input devices, it’s not straightforward (and in many cases, impossible) anymore to determine if a user is on a device that exclusively supports touch.

    The more generic approach taken in Microsoft’s Pointer Events specification – which is already being scheduled for implementation in other browsers such as Chrome – is a step in the right direction (though it still requires extra handling for keyboard users). In the meantime, developers should be careful not to draw the wrong conclusions from touch support detection and avoid unwittingly locking out a growing number of potential multi-input users.


  6. More details about the WebAPI effort

    As we’d hoped, there has been a lot of interest in the newly announced WebAPI effort. So I figured I should explain in more detail some of my thinking around what we’re hoping to do and the challenges that are ahead of us.


    The goal of this effort is to create APIs to expand what the Web can do. We don’t want people to end up choosing to develop for a proprietary platform just because the Web is lacking some capability.

    The main effort, at least initially, is to enable access to hardware connected to the device, and data which is stored or available to the device. As for hardware, we want to make the full range of hardware that people use available to the web platform. From common hardware like cameras, to more rarely used (but no less awesome) hardware like USB-driven Nerf cannons. We also want to enable communication hardware like Bluetooth and NFC.

    For data stored on the device, the most commonly discussed data today is contacts and calendar. This includes the ability to both read and write data. That is, we both want the Web platform to be able to enumerate contacts stored on the device, and read their details, as well as add and remove contacts. In short, we want it to be possible to create a Web page or Web app which lets the user manage his contact list. Same thing for calendar events and other types of data stored on devices.

    Security and Privacy

    One big reason these types of APIs haven’t been developed for the Web platform yet is because of their security and privacy implications. I would obviously not want every single Web page out there to be able to mess around with my contact list or my calendar. And being able to issue any commands to any USB device that I happen to have plugged in would likely result in everyone’s computer immediately being zombified.

    So as we are developing these APIs, we always have to develop a security model to go along with them. In some cases simply asking the user, which is how we currently do Geolocation, might work. In others, where security implications are scarier or where describing the risk to the user is harder, we’ll have to come up with better solutions.

    I really want to emphasize that we don’t yet know what the security story is going to be, but that we’re absolutely planning on having a solid security solution before we ship an API to millions of users.

    Robert O’Callahan has a really great post about permissions for Web applications.


    Mozilla has always had a strong commitment to Web standards. This is of course not something we’re changing! All of the APIs that we are developing will be developed with the goal of being standardized and implemented across both browsers and devices.

    But it’s important to ensure that standards are good standards. This takes experimenting. Especially in areas which are as new to the Web, and as security sensitive, as these are.

    Standards organizations aren’t a good place to do research. This is why we want to experiment and do research outside the standards organizations first. But always in the open, and always listening to feedback. We’re also going to clearly prefix any APIs as to indicate that they are experiments and might change once they get standardized.

    Once we have a better understanding of what we think makes a good API, we will create a proposal and bring it to working groups like the Device APIs group at the W3C, WAC, and the WHATWG.

    Throughout this process we will of course be in contact with other interested parties, such as other browser vendors and web developers. This is part of the normal research process of making sure that an API is a good one.

    Mozilla always has and always will be a good steward of the open Web. We are not interested in creating a Mozilla-specific Web platform. We are interested in moving the open Web platform forward.

    High Level vs. Low Level

    One thing that often comes up with API design is whether we should do high level or low level APIs. For example, do we provide a low-level USB API, or a high-level API for cameras?

    There are pros and cons with both. High level means that we can create more developer-friendly APIs. We can also provide a better security model since we can ensure that the page won’t issue any unexpected USB commands, and we can ensure that no privacy-sensitive access is made without user approval. But high level also means that developers can’t access a type of device until we’ve added support for it. So until we’ve added an API for Nerf cannons, there will be no way to talk to them.

    Exposing a low-level USB API on the other hand, means that web pages can talk to any USB device in existence, with no need for us to add an explicit API for them. However it also requires developers to get their hands dirty with the details of the USB protocol and differences between devices.

    The approach we’re planning on taking is to do both high-level and low-level APIs, as well as give people the proper incentives to use the one that is best for the user. But a very important point is to provide low-level APIs early to ensure that Mozilla isn’t on the critical path for innovation. Over time, we can add high-level APIs where that makes sense.
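The trade-off can be sketched in code: a hypothetical low-level interface that moves raw bytes, and a high-level wrapper layered on top of it. All of the names and the "capture" command below are invented for illustration:

```javascript
// Hypothetical low-level device interface: powerful, but raw.
function UsbDevice() {
  this.log = [];
}
UsbDevice.prototype.transfer = function (bytes) {
  // A page using this directly must know the device's exact protocol.
  this.log.push(bytes);
};

// Hypothetical high-level camera API layered on top: friendlier and easier
// to secure, but it only works for devices we have explicitly modeled.
function Camera(device) {
  this.device = device;
}
Camera.prototype.takePicture = function () {
  this.device.transfer([0x01, 0x00]); // "capture" command, made up
  return 'image-data';
};

var cam = new Camera(new UsbDevice());
console.log(cam.takePicture()); // "image-data"
```

The wrapper is easier to use and to secure, but it only exists for devices someone has modeled; the raw interface underneath works for anything, at the cost of protocol knowledge.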

    How you can join

    As with all things Mozilla, we’re planning on doing all our work in the open. In fact, we’ll be relying on your help to make this successful! To keep discussions focused, we’re going to use a new discussion forum for all communication. This means that you can participate through email, newsgroups, or the web-based Google Group UI.

    You can subscribe to the mailing list at

    For other methods go to:

    We also use the #webapi IRC channel on

    We’ll also be tracking progress on the wiki page

    Looking forward to hearing from you to help build the next stage for the web platform!

    Hiring developers

    Edit: Forgot to mention, we are hiring several full-time engineers to work on the WebAPI team! Read the job description and apply.

  7. Remote Debugging Firefox OS with Weinre

    NOTE: since this article was published, the Mozilla developer tools team has released the App Manager, a much more effective way to remotely debug Firefox OS apps. To find out more, read Using the App Manager on MDN.

    If you’ve wanted to contribute to Gaia, or have been writing a webapp for Firefox OS, one of the pain points you have probably run into when using either B2G desktop or testing on a device is the lack of developer tools to inspect and debug your HTML, CSS and JavaScript.

    Currently we have two tracking bugs (go ahead and vote on them to bump their priority) following the work going into a native remote inspector and style editor for Firefox OS. But I have some pretty exciting news: you can have access to a remote debugger today.

    And how is this possible, I hear you ask? Well, one word: Weinre. Weinre is a project of the Apache Foundation; its name stands for WEb INspector REmote, and it is exactly what the name suggests: a tool in the same vein as Firebug or Web Inspector, but able to run and debug web pages remotely. So, if you have used tools such as the Firefox Developer Tools or the Chrome Dev Tools, using Weinre will be second nature. But enough talk, let’s get this up and running.

    Setting Up Weinre

    As Weinre runs on top of Node.js, your first port of call is to install Node.js. Node.js comes with npm (the Node Package Manager) bundled nowadays, and this is what we are going to use to install Weinre. From a terminal, run the following:

    npm -g install weinre

    NOTE: The -g flag installs Weinre as a global Node.js module for command line goodness but, on Linux and Mac, this means you will most likely need to prepend sudo to the above command.

    Once the installation process is complete, we are ready to use Weinre to debug. But first, let’s make absolutely sure that Weinre was indeed installed successfully. In your terminal, run the following:

    $ weinre --boundHost --httpPort 9090
    2013-01-28T10:42:40.498Z weinre: starting server at

    If you see a line similar to the last line above, your installation was a success and the Weinre server is up and running. With that, fire up a browser (NOTE: the UI for Weinre is built specifically for WebKit-based browsers, so while it might work to some degree in other browsers, I would suggest you use Chrome) and point it to

    Above then is the landing page for the Weinre server, giving you access to the documentation and some other trinkets, as well as the Weinre client, the page we really want to head to. So, go ahead and click on the debug client link.

    From the above you can see that we have one connected client (this is the current instance of the web inspector) and some general properties of our server, but no targets. Let’s get our target set up.

    NOTE: If the UI of Weinre looks very familiar, that’s because Weinre uses the same UI code as the web inspector in Chrome and Safari.

    Setting Up A Weinre Target

    In Weinre, targets are the web pages or apps that you want to debug, and in order for a target to be able to connect, we need to add a one-liner to the relevant file of our app. For this post, let’s inspect the Calendar app. Go ahead and open up gaia -> apps -> calendar -> index.html and scroll right to the bottom. Just before the closing body tag, insert the following line:

    <script src=""></script>

    Before we can launch B2G Desktop and try this out, however, there is one more step. Gaia uses a Content Security Policy, and as part of it, scripts are only allowed to load if they come from the same origin as the application. So, if we were to try and load the Calendar now, the script above would be blocked, as it is not being loaded from the permitted origin.
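Concretely, a same-origin restriction of this kind corresponds to a Content Security Policy directive of roughly the following shape (illustrative only, not Gaia’s exact policy):

```
Content-Security-Policy: script-src 'self'
```

With a policy like this in effect, a script tag pointing at the Weinre server’s origin is refused, which is why the workaround below is needed.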

    To overcome this, we need to temporarily disable CSP. To do this, open up gaia -> build -> preferences.js and add the following line, around line 24:

    prefs.push(["security.csp.enable", false]);

    Debugging Using Weinre and B2G Desktop

    Now we are ready to test Weinre. If you are not already inside the Gaia root directory, change into it now and execute:

    DEBUG=1 make

    Once the profile is built, launch B2G desktop:

    /Applications/ -profile /Users/username/mozilla/projects/gaia/profile

    Once B2G launches, unlock the screen, swipe two screens to the right and click on the Calendar icon to launch the Calendar app. Once the app launches, you will see a new item appear on Weinre’s client dashboard:

    As you can see, the Calendar has successfully connected and is now listed as one of our targets. Go ahead and click on the ‘Elements’ tab.

    Listed here is the HTML of our app on the left and our CSS on the right! You can go right ahead and edit either the HTML or the CSS as you normally would and see the changes reflected live. Note that even though the CSS looks grayed out and disabled, it is fully editable. You can also add completely new styles to the current element using the empty block, or amend existing rules. You will also notice you have access to the computed styles as well as the metrics of the current element.

    Working With The Console

    The next tab of interest to us is the Console tab. Here you can code away and run any JavaScript you want directly against the current app or execute code exposed by the app. To see how this works, let’s interact with the call log portion of the Dialer.

    The first step, then, is to move our script import from the Calendar to the Dialer. Grab the code from the Calendar, then open up gaia -> apps -> communication -> dialer -> index.html and paste the code. Next, rebuild your profile using ‘make’ and finally relaunch B2G desktop.

    Once it has launched again, click on the Dialer icon at the bottom left of the home screen. Once the app has loaded, confirm that the communication channels to Weinre are open by opening the Weinre client dashboard and confirming that the target now looks as follows: [channel: t-7 id: anonymous] - app://

    With the Dialer open, click on the call log icon at the bottom left. Currently the call log is populated with some dummy data, but let’s create our own. Click over to the Console tab in Weinre, type the following and press enter.


    If you look at the view on the desktop now, it would seem that nothing has happened, but wait, there is more. Type in the following and press enter:


    Aha! As you can see, our call log is empty. The next step, then, is to add an entry back. To do this, we will create a dummy call entry object and then pass it to the add function of the RecentsDBManager to store it:

    // Dummy entry
    var recentCall = {
        type: 'incoming-refused',
        number: '555-6677',
        date: new Date()
    };

    // Pass the entry to the add function to store it
    RecentsDBManager.add(recentCall);

    And as you can see now, the entry we just created has been added to storage (IndexedDB, to be exact) and is visible in the call log view. As you might very well have noticed, another of the great features that comes with the console is auto-complete, which will further speed up development.

    The combination of features that this exposes already opens new doors and will make working on Firefox OS, or writing apps for the OS, much easier with less time spent between building profiles, tearing down and relaunching B2G. Which all makes for happier developers.

    But hey, wait a second, what about debugging on the device? This works exactly the same as the above, with one small difference: the IP. When you want to debug on the device, you first need to know the IP address of your host computer. Then you need to start up Weinre using this IP as the boundHost, and also use it as the IP when including the script in your target documents.

    On Mac and Linux you can get this address using ifconfig, and on Windows using ipconfig. Once you have the new IP, just stop the current instance of Weinre and then do the following:

    weinre --boundHost --httpPort 9090

    Then, inside your target document, add:

    <script src=""></script>

    Make and push your Gaia profile to the device using:

    make install-gaia

    Launch your target app and you are in business!


    While this solution is not perfect (you need to remember to undo your changes before committing anything to source control, having to manually add the script every time is not ideal, and some things do not work 100%, such as highlighting DOM elements as you hover over the HTML source, or debugging JavaScript with breakpoints), it does go a long way towards improving the lives of developers, both those working directly on Gaia and those writing exciting new apps for Firefox OS.

    But there is already some light at the end of the tunnel with regards to managing the injection of the script manually, disabling CSP and ensuring things are cleaned up before pushing to source control. Jan Jongboom has opened a pull request against the Gaia repo that looks extremely promising and will alleviate a lot of this, so go on, give him a hand and let’s get this merged into Gaia. Happy hacking!

    An important note: None of the above would have happened if it was not for Kevin Grandon who remembered using Weinre and sent out the email that set the ball rolling. Thanks Kevin!

  8. HTML out of the Browser

    Amongst my friends, I’m known as something of a Star Wars nerd. My longtime nick has been cfjedimaster (a combination of two passions, the other being ColdFusion), I work in a room packed to the gills with Star Wars toys, and I’ve actually gotten inked up twice now with Star Wars tats. That being said, it was another movie that had the most influence on my current career – Tron.


    I had already discovered an interest in computers before then, but it was Tron that really crystallized the idea for me. All of a sudden I imagined myself as the programmer, creating intelligent programs and dominating the grid. Yeah, I know, I was a nerd back then too.

    My dreams, though, ran into reality rather quickly during my time as a Comp Sci major in college. First – I discovered that “Hey, I’m good at math!” means crap all when you hit Calculus 3, and secondly, I discovered that I really wasn’t that interested in the performance of one sort versus another. I switched to English as a major (always a good choice) but kept messing around with computers. It was during this time that I was exposed to Mosaic and the early web.

    I quickly jumped into the web as – well – I’ll put it bluntly – easier programming than what I had been exposed to before. I can remember LiveScript. I can remember my first Perl CGI scripts. This wasn’t exactly light cycle coding, but it was simpler, fun, and actually cutting edge. I’ve spent a good chunk of my adult life now as a web developer, and while I certainly have no delusions of the web being a pristine environment, it has been good to me (and millions of others) and I love seeing how much it evolves over time.

    One of the most fascinating ways that web technologies have grown is outside of the web itself. In this article I’m going to look at all the various ways we can reuse our web-based technologies (HTML, JavaScript, and CSS) in non-web based environments. While it would be ludicrous to say that one shouldn’t learn other technologies, or that web standards work everywhere and in every situation, I feel strongly that the skills behind the web are ones open to a large variety of people in different disciplines – whether or not you got that PhD in computer science!


    This is typically the point where I’d discuss how important mobile is, but it’s 2014 and I think we’re past that now. Mobile development has typically involved either Java (Android) or Objective-C (iOS). Developers can also use web standards to build native applications. One solution is Apache Cordova (AKA PhoneGap).

    Cordova uses a web view wrapped in a native application to allow web developers to build what are typically referred to as hybrid applications. Along with providing an easy way to get your HTML into an app, Cordova provides a series of different plugins that let you do more than what a typical web page can do on a device. So, for example, you have easy access to the camera:

    navigator.camera.getPicture(onSuccess, onFail, {
        quality: 50,
        destinationType: Camera.DestinationType.DATA_URL
    });

    function onSuccess(imageData) {
        var image = document.getElementById('myImage');
        image.src = "data:image/jpeg;base64," + imageData;
    }

    function onFail(message) {
        alert('Failed because: ' + message);
    }

    You can also work with the accelerometer, GPS, contacts, the file system, and more. Cordova provides a JavaScript API that handles the communication to native code in the back end. Best of all, the same code can be used to build native applications for multiple different platforms (including Firefox OS, now supported in Cordova & PhoneGap).

    To be clear, this isn’t a case of being able to take an existing web site and just package it up. Mobile applications are – by definition – completely different from a simple web site. But the fact that you can use your existing knowledge gives the developer a huge head start.

    Another example of this (and one that is hopefully well known to readers of this site) is Firefox OS. Unlike Cordova, developers don’t have to wrap their HTML inside a web view wrapper. The entire operating system is web standards based. What makes Firefox OS even more intriguing is support for hosted applications. Firefox OS is a new platform, and the chance that your visitors are using a device with it is probably pretty slim. But with a hosted application I can easily provide support for installation on the device while still running my application off a traditional URL.

    Consider a simple demo I built called INeedIt. If you visit this application in Chrome, it just plain works. If you visit it in a mobile browser on Android or iOS, it just works. But visit it with Firefox OS, and code will kick in asking if you want to install the application. Here is the block that handles this.

    if(!$rootScope.checkedInstall && ("mozApps" in window.navigator)) {
        // Build the manifest URL from the current host
        var appUrl = 'http://' + document.location.host + '/manifest.webapp';
        console.log('havent checked and can check');
        var request = window.navigator.mozApps.checkInstalled(appUrl);
        request.onerror = function(e) {
            //silently ignore
            console.log('Error checking install: ' + e);
        };
        request.onsuccess = function(e) {
            if (request.result) {
                console.log("App is installed!");
            } else {
                console.log("App is not installed!");
                if(confirm('Would you like to install this as an app?')) {
                    console.log('ok, lets try to install');
                    var installRequest = window.navigator.mozApps.install(appUrl);
                    installRequest.onerror = function(e) {
                        console.log('install failure: ' + e);
                        alert('Sorry, install failed.');
                    };
                    installRequest.onsuccess = function() {
                        console.log('did it');
                        alert('Thanks, app installed!');
                    };
                }
            }
        };
    } else {
        console.log('either checked or non compat');
    }

    Pretty simple, right? What I love about this is that the code is 100% ignored outside of Firefox OS but automatically enhanced for folks using that operating system. I risk nothing – but I get the benefit of providing them a way to download and have my application on their device screen.

    FF OS prompting you to install the app


    Of course, there are still a few people who sit in front of a gray box (or laptop) for their day-to-day work. Many desktop applications have been replaced by web pages, but there are still things that are outside the scope of web apps. There are still times when a desktop app makes sense. And fortunately, there are multiple ways of building them with web standards as well.

    So you know the code example I just showed you? The one where Firefox OS users would be given a chance to install the application from the web page? That exact same code works on the desktop as well. While this is still in development (in fact, the application I built doesn’t work due to a bug with Geolocation), it will eventually allow you to push your web-based application both to mobile Firefox OS users and to the other billion or so desktop users. Here’s the application installed in my own Applications folder.

    INeedIt as a desktop app

    As I said though, this is still relatively new and needs to bake a bit longer before you can make active use of it. Something you can use right now is Node Webkit. This open source project allows you to wrap Node.js applications in a desktop shell. Executables can then be created for Windows, Mac, and Linux. You get all the power of a “real” desktop application with the ease of use of web standards as your platform. There’s already a growing list of real applications out there making use of the framework, some of which I had used before realizing they used Node Webkit behind the scenes.

    As an example, check out A Wizard’s Lizard, an RPG with random dungeons and great gameplay.

    Screen shot - Wizard's Lizard

    Native App Extensions

    In the previous section we covered applications built with web standards. There are also multiple applications out there today, built natively, that can be extended with web standards. As a web developer you are probably already familiar with Firefox Add-Ons and Chrome extensions. There is an incredibly rich ecosystem of browser extensions for just about any need you can think of. What interests me however is the move to use web standards to open other products as well.

    Did you know Photoshop, yes, Photoshop, now has the ability to be extended with Node.js? Dubbed “Adobe Generator”, this extensibility layer provides a script-based interface to the product. One example of this is the ability to generate web assets from layers, based on a simple naming scheme. If you’ve ever had to manually create web assets from a PSD, and keep them updated, you will appreciate this. The entire feature is driven by JavaScript and makes use of a public API you can build upon. The code and samples are all available via GitHub.

    Generator running within Photoshop
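The naming scheme is simple enough to sketch: Generator treats a layer whose name carries an image extension as a request to export that layer as an asset. The toy parser below is my own approximation of the idea, not Adobe’s actual implementation:

```javascript
// Toy parser for Generator-style layer names such as "logo.png" or "icon.jpg".
// This approximates the idea only; Adobe's real naming rules are richer.
function parseLayerName(name) {
  var match = /^(.+)\.(png|jpg|gif|svg)$/i.exec(name.trim());
  if (!match) {
    return null; // not an asset request: the layer is ignored
  }
  return { file: match[0], format: match[2].toLowerCase() };
}

console.log(parseLayerName('logo.png'));   // { file: 'logo.png', format: 'png' }
console.log(parseLayerName('Background')); // null
```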

    What Next?

    Coming from the perspective of someone who has been in this industry for way too long, I can say that I feel incredibly lucky that web standards have become such a driving force of innovation and creativity. But it is not luck that has driven this improvement. It is the hard work of many people, companies, and organizations (like Mozilla) that have created the fertile landscape we have before us today. To continue this drive requires all of us to become involved, evangelize to others, and become proponents of a web created by everyone – for everyone.