Featured Articles

  1. Join Us for Firefox OS App Days

    If you’re a developer interested in web technologies, I’d like to invite you to participate in Firefox OS App Days, a worldwide set of 20+ hack days organized by Mozilla to help you get started developing apps for Firefox OS.

    At each App Day event, you’ll have the opportunity to learn, hack and celebrate Firefox OS, Mozilla’s open source operating system for the mobile web. Technologists and developers from Mozilla will present tools and technology built to extend and support the Web platform, including mobile Web APIs to access device hardware features such as the accelerometer. We’ll also show you how to use the browser-based Firefox OS Simulator to view and test mobile apps on the desktop.

    Firefox OS App Days are a chance to kick start creation of apps for the Firefox Marketplace, and represent a great opportunity to build new apps or optimize existing HTML5 apps for Firefox OS, as well as demo your projects to an audience of peers, tech leaders and innovators.

    We’re excited to be working with our Mozilla Reps, who are helping organize these events, and with our partners Deutsche Telekom and Telefónica, who are supporting a number of them across the world.

    We look forward to seeing you there!

    Event Details

    The agenda for these all-day events will be customized for each locale and venue, but a typical schedule might include:

    • 08:30 – 09:30 Registration. Light breakfast.
    • 09:30 – 11:30 Firefox OS, Firefox Marketplace & Mozilla Apps Ecosystem.
    • 11:30 – 12:00 Video. Q&A.
    • 12:00 – 13:00 Lunch
    • 13:00 – 17:00 App hacking.
    • 17:30 – 19:00 Demos & party.

    Signing Up

    Firefox OS App Days launch on 19 January and continue through 2 February, with the majority of the events taking place on 26 January. This wiki page has a master list of all the events and their registration forms, from Sao Paulo to Warsaw to Nairobi to Wellington — and many more. Find the App Day nearest you and register. (N.B. Venue capacities vary, but most are limited to 100 attendees so don’t delay.)

    Getting Ready

    Plan on bringing your Linux, Mac or Windows development machine and an idea for an app you’d like to develop for the Firefox Marketplace. If you have an Android device, bring it along, too. You can see the Firefox Marketplace in action on the Aurora version of Firefox for Android.

    If you want to get started before the event, review the material below and bring an HTML5 app that you’ve begun and want to continue, get feedback on or recruit co-developers for.

  2. NORAD Tracks Santa

    This year, Open Web standards like WebGL, Web Workers, Typed Arrays, Fullscreen, and more will have a prominent role in NORAD’s annual mission to track Santa Claus as he makes his journey around the world. That’s because Analytical Graphics, Inc. used Cesium as the basis for the 3D Track Santa application.

    Cesium is an open source library that uses JavaScript, WebGL, and other web technologies to render a detailed, dynamic, and interactive virtual globe in a web browser, without the need for a plugin. Terrain and imagery datasets measured in gigabytes or terabytes are streamed to the browser on demand, and overlaid with lines, polygons, placemarks, labels, models, and other features. These features are accurately positioned within the 3D world and can efficiently move and change over time. In short, Cesium brings to the Open Web the kind of responsive, geospatial experience that was uncommon even in bulky desktop applications just a few years ago.

    The NORAD Tracks Santa web application goes live on December 24. Cesium, however, is freely available today for commercial and non-commercial use under the Apache 2.0 license.

    In this article, I’ll present how Cesium uses cutting edge web APIs to bring an exciting in-browser experience to millions of people on December 24.

    The locations used in the screenshots of the NORAD Tracks Santa application are based on test data. We, of course, won’t know Santa’s route until NORAD starts tracking him on Christmas Eve. Also, the code samples in this article are for illustrative purposes and do not necessarily reflect the exact code used in Cesium. If you want to see the official code, check out our GitHub repo.


    WebGL

    Cesium could not exist without WebGL, the technology that brings hardware-accelerated 3D graphics to the web.

    It’s hard to overstate the potential of this technology to bring a whole new class of scientific and entertainment applications to the web; Cesium is just one realization of that potential. With WebGL, we can render scenes like the above, consisting of hundreds of thousands of triangles, at well over 60 frames per second.

    Yeah, you could say I’m excited.

    If you’re familiar with OpenGL, WebGL will seem very natural to you. To oversimplify a bit, WebGL enables applications to draw shaded triangles really fast. For example, from JavaScript, we execute code like this:

    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.drawElements(gl.TRIANGLES, numberOfIndices, gl.UNSIGNED_SHORT, 0);

    vertexBuffer is a previously-configured data structure holding vertices, or corners of triangles. A simple vertex just specifies the position of the vertex as X, Y, Z coordinates in 3D space. A vertex can have additional attributes, however, such as colors and the vertex’s coordinates within a 2D image for texture mapping.

    The indexBuffer links the vertices together into triangles. It is a list of integers where each integer specifies the index of a vertex in the vertexBuffer. Each triplet of indices specifies one triangle. For example, if the first three indices in the list are [0, 2, 1], the first triangle is defined by linking up vertices 0, 2, and 1.
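    To make that triplet grouping concrete, here is a small standalone sketch — the helper name is hypothetical, not part of WebGL or Cesium — that groups a flat index list into triangles the way gl.TRIANGLES does:

```javascript
// Group a flat index list into triangles, mirroring gl.TRIANGLES semantics.
// Illustrative helper only; WebGL does this on the GPU, not in JavaScript.
function trianglesFromIndices(indices) {
  const triangles = [];
  for (let i = 0; i + 2 < indices.length; i += 3) {
    triangles.push([indices[i], indices[i + 1], indices[i + 2]]);
  }
  return triangles;
}

// Two triangles sharing an edge: [0, 2, 1] and [1, 2, 3].
const quad = trianglesFromIndices([0, 2, 1, 1, 2, 3]);
```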

    The drawElements call instructs WebGL to draw the triangles defined by the vertex and index buffers. The really cool thing is what happens next.

    For every vertex in vertexBuffer, WebGL executes a program, called a vertex shader, that is supplied by the JavaScript code. Then, WebGL figures out which pixels on the screen are “lit up” by each triangle – a process called rasterization. For each of these pixels, called fragments, another program, a fragment shader, is invoked. These programs are written in a C-like language called GLSL that executes on the system’s Graphics Processing Unit (GPU). Thanks to this low-level access and the impressive parallel computation capability of GPUs, these programs can do sophisticated computations very quickly, creating impressive visual effects. This feat is especially impressive when you consider that they are executed hundreds of thousands or millions of times per render frame.

    Cesium’s fragment shaders approximate atmospheric scattering, simulate ocean waves, model the reflection of the sun off the ocean surface, and more.

    WebGL is well supported in modern browsers on Windows, Linux and Mac OS X. Even Firefox for Android supports WebGL!

    While I’ve shown direct WebGL calls in the code above, Cesium is actually built on a renderer that raises the level of abstraction beyond WebGL itself. We never issue drawElements calls directly, but instead create command objects that represent the vertex buffers, index buffers, and other data with which to draw. This allows the renderer to automatically and elegantly solve esoteric rendering problems like the insufficient depth buffer precision for a world the size of Earth. If you’re interested, you can read more about Cesium’s data-driven renderer.

    For more information about some of the neat rendering effects used in the NORAD Tracks Santa application, take a look at our blog post on the subject.

    Typed Arrays and Cross-Origin Resource Sharing

    Virtual globes like Cesium provide a compelling, interactive 3D view of real-world situations by rendering a virtual Earth combined with georeferenced data such as roads, points of interest, weather, satellite orbits, or even the current location of Santa Claus. At the core of a virtual globe is the rendering of the Earth itself, with realistic terrain and satellite imagery.

    Terrain describes the shape of the surface: the mountain peaks, the hidden valleys, the wide open plains, and everything in between. Satellite or aerial imagery is then overlaid on this otherwise colorless surface and brings it to life.

    The global terrain data used in the NORAD Tracks Santa application is derived from the Shuttle Radar Topography Mission (SRTM), which has a 90-meter spacing between -60 and 60 degrees latitude, and the Global 30 Arc Second Elevation Data Set (GTOPO30), which has 1-kilometer spacing for the entire globe. The total size of the dataset is over 10 gigabytes.

    For imagery, we use Bing Maps, which is also part of the NORAD Tracks Santa team. The total size of this dataset is even bigger – easily in the terabytes.

    With such enormous datasets, it is clearly impractical to transfer all of the terrain and imagery to the browser before rendering a scene. For that reason, both datasets are broken up into millions of individual files, called tiles. As Santa flies around the world, Cesium downloads new terrain and imagery tiles as they are needed.

    Terrain tiles describing the shape of the Earth’s surface are binary data encoded in a straightforward format. When Cesium determines that it needs a terrain tile, we download it using XMLHttpRequest and access the binary data using typed arrays:

    var tile = ...
    var xhr = new XMLHttpRequest();
    xhr.open('GET', terrainTileUrl, true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(e) {
        if (xhr.status === 200) {
            var tileData = xhr.response;
            tile.heights = new Uint16Array(tileData, 0, heightmapWidth * heightmapHeight);
            var heightsBytes = tile.heights.byteLength;
            tile.childTileBits = new Uint8Array(tileData, heightsBytes, 1)[0];
            tile.waterMask = new Uint8Array(tileData, heightsBytes + 1, tileData.byteLength - heightsBytes - 1);
            tile.state = TileState.RECEIVED;
        } else {
            // ...
        }
    };
    xhr.send();

    Prior to the availability of typed arrays, this process would have been much more difficult. The usual course was to encode the data as text in JSON or XML format. Not only would such data be larger when sent over the wire(less), it would also be significantly slower to process it once it was received.
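    A quick back-of-the-envelope comparison shows why. This sketch is illustrative only, using Node’s Buffer to count bytes; the exact ratio depends on the data:

```javascript
// Four 16-bit height samples as binary versus as JSON text.
const heights = new Uint16Array([120, 340, 560, 780]);

// Binary representation: exactly two bytes per sample.
const binaryBytes = heights.byteLength; // 8 bytes

// Text representation: several characters per sample, plus separators,
// and the whole string must be parsed after download.
const jsonBytes = Buffer.byteLength(JSON.stringify(Array.from(heights)));
```

    For real tiles with thousands of samples, the gap widens further, and the typed-array view costs no parsing at all.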

    While it is generally very straightforward to work with terrain data using typed arrays, two issues make it a bit trickier.

    The first is cross-origin restrictions. It is very common for terrain and imagery to be hosted on different servers than are used to host the web application itself, and this is certainly the case in NORAD Tracks Santa. XMLHttpRequest, however, does not usually allow requests to non-origin hosts. The common workaround of using script tags instead of XMLHttpRequest won’t work well here because we are downloading binary data – we can’t use typed arrays with JSONP.

    Fortunately, modern browsers offer a solution to this problem by honoring Cross-Origin Resource Sharing (CORS) headers, included in the response by the server, indicating that the response is safe for use across hosts. Enabling CORS is easy to do if you have control over the web server, and Bing Maps already includes the necessary headers on their tile files. Other terrain and imagery sources that we’d like to use in Cesium are not always so forward-thinking, however, so we’ve sometimes been forced to route cross-origin requests through a same-origin proxy.

    The other tricky aspect is that modern browsers only allow up to six simultaneous connections to a given host. If we simply created a new XMLHttpRequest for each tile requested by Cesium, the number of queued requests would grow large very quickly. By the time a tile was finally downloaded, the viewer’s position in the 3D world may have changed so that the tile is no longer even needed.

    Instead, we manually limit ourselves to six outstanding requests per host. If all six slots are taken, we won’t start a new request. Instead, we’ll wait until next render frame and try again. By then, the highest priority tile may be different than it was last frame, and we’ll be glad we didn’t queue up the request then. One nice feature of Bing Maps is that it serves the same tiles from multiple hostnames, which allows us to have more outstanding requests at once and to get the imagery into the application faster.

    Web Workers

    The terrain data served to the browser is, primarily, just an array of terrain heights. In order to render it, we need to turn the terrain tile into a triangle mesh with a vertex and index buffer. This process involves converting longitude, latitude, and height to X, Y, and Z coordinates mapped to the surface of the WGS84 ellipsoid. Doing this once is pretty fast, but doing it for each height sample, of which each tile has thousands, starts to take some measurable time. If we did this conversion for several tiles in a single render frame, we’d definitely start to see some stuttering in the rendering.
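    The longitude/latitude/height conversion mentioned above is standard geodesy. Here is a sketch of the geodetic-to-Cartesian formula for the WGS84 ellipsoid; Cesium’s own implementation differs in detail:

```javascript
// Convert geodetic coordinates (longitude/latitude in radians, height in
// meters) to Cartesian X, Y, Z on the WGS84 ellipsoid.
const WGS84_A = 6378137.0;          // semi-major axis, meters
const WGS84_E2 = 0.00669437999014;  // first eccentricity squared

function geodeticToEcef(lon, lat, height) {
  const sinLat = Math.sin(lat);
  const cosLat = Math.cos(lat);
  // Prime vertical radius of curvature at this latitude.
  const n = WGS84_A / Math.sqrt(1 - WGS84_E2 * sinLat * sinLat);
  return {
    x: (n + height) * cosLat * Math.cos(lon),
    y: (n + height) * cosLat * Math.sin(lon),
    z: (n * (1 - WGS84_E2) + height) * sinLat
  };
}
```

    The trigonometry is cheap for one sample; the cost comes from doing it thousands of times per tile.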

    One solution is to throttle tile conversion, doing at most N per render frame. While this would help with the stuttering, it doesn’t avoid the fact that tile conversion competes with rendering for CPU time while other CPU cores sit idle.

    Fortunately, another great new web API comes to the rescue: Web Workers.

    We pass the terrain ArrayBuffer downloaded from the remote server via XMLHttpRequest to a Web Worker as a transferable object. When the worker receives the message, it builds a new typed array with the vertex data in a form ready to be passed straight to WebGL. Unfortunately, Web Workers are not yet allowed to invoke WebGL, so we can’t create vertex and index buffers in the Web Worker; instead, we post the typed array back to the main thread, again as a transferable object.

    The beauty of this approach is that terrain data conversion happens asynchronously with rendering, and that it can take advantage of the client system’s multiple cores, if available. This leads to a smoother, more interactive Santa tracking experience.

    Web Workers are simple and elegant, but that simplicity presents some challenges for an engine like Cesium, which is designed to be useful in various different types of applications.

    During development, we like to keep each class in a separate .js file, for ease of navigation and to avoid the need for a time-consuming combine step after every change. Each class is actually a separate module, and we use the Asynchronous Module Definition (AMD) API and RequireJS to manage dependencies between modules at runtime.

    For use in production environments, it is a big performance win to combine the hundreds of individual files that make up a Cesium application into a single file. This may be a single file for all of Cesium or a user-selected subset. It may also be beneficial to combine parts of Cesium into a larger file containing application-specific code, as we’ve done in the NORAD Tracks Santa application. Cesium supports all of these use-cases, but the interaction with Web Workers gets tricky.

    When an application creates a Web Worker, it provides to the Web Worker API the URL of the .js file to invoke. The problem is, in Cesium’s case, that URL varies depending on which of the above use-cases is currently in play. Worse, the worker code itself needs to work a little differently depending on how Cesium is being used. That’s a big problem, because workers can’t access any information in the main thread unless that information is explicitly posted to it.

    Our solution is the cesiumWorkerBootstrapper. Regardless of what the WebWorker will eventually do, it is always constructed with cesiumWorkerBootstrapper.js as its entry point. The URL of the bootstrapper is deduced by the main thread where possible, and can be overridden by user code when necessary. Then, we post a message to the worker with details about how to actually dispatch work.

    var worker = new Worker(getBootstrapperUrl());
    var bootstrapMessage = {
        loaderConfig : {},
        workerModule : 'Workers/' + processor._workerName
    };
    if (typeof require.toUrl !== 'undefined') {
        bootstrapMessage.loaderConfig.baseUrl = '..';
    } else {
        bootstrapMessage.loaderConfig.paths = {
            'Workers' : '.'
        };
    }
    worker.postMessage(bootstrapMessage);
    The worker bootstrapper contains a simple onmessage handler:

    self.onmessage = function(event) {
        var data =;
        require(data.loaderConfig, [data.workerModule], function(workerModule) {
            //replace onmessage with the required-in workerModule
            self.onmessage = workerModule;
        });
    };
    When the bootstrapper receives the bootstrapMessage, it uses the RequireJS implementation of require, which is also included in cesiumWorkerBootstrapper.js, to load the worker module specified in the message. It then “becomes” the new worker by replacing its onmessage handler with the required-in one.

    In use-cases where Cesium itself is combined into a single .js file, we also combine each worker into its own .js file, complete with all of its dependencies. This ensures that each worker needs to load only two .js files: the bootstrapper plus the combined module.

    Mobile Devices

    One of the most exciting aspects of building an application like NORAD Tracks Santa on web technologies is the possibility of achieving portability across operating systems and devices with a single code base. All of the technologies used by Cesium are already well supported on Windows, Linux, and Mac OS X on desktops and laptops. Increasingly, however, these technologies are becoming available on mobile devices.

    The most stable implementation of WebGL on phones and tablets is currently found in Firefox for Android. We tried out Cesium on several devices, including a Nexus 4 phone and a Nexus 7 tablet, both running Android 4.2.1 and Firefox 17.0. With a few tweaks, we were able to get Cesium running, and the performance was surprisingly good.

    We did encounter a few problems, however, presumably a result of driver bugs. One problem was that normalizing vectors in fragment shaders sometimes simply does not work. For example, GLSL code like this:

    vec3 normalized = normalize(someVector);

    sometimes results in a normalized vector that still has a length greater than 1. Fortunately, this is easy to work around by adding another call to normalize:

    vec3 normalized = normalize(normalize(someVector));

    We hope that as WebGL gains more widespread adoption on mobile, bugs like this will be detected by the WebGL conformance tests before devices and drivers are released.

    The Finished Application

    As long-time C++ developers, we were initially skeptical of building a virtual globe application on the Open Web. Would we be able to do all the things expected of such an application? Would the performance be good?

    I’m pleased to say that we’ve been converted. Modern web APIs like WebGL, Web Workers, and Typed Arrays, along with the continual and impressive gains in JavaScript performance, have made the web a convenient, high-performance platform for sophisticated 3D applications. We’re looking forward to continuing to use Cesium to push the limits of what is possible in a browser, and to take advantage of new APIs and capabilities as they become available.

    We’re also looking forward to using this technology to bring a fun, 3D Santa tracking experience to millions of kids worldwide this Christmas as part of the NORAD Tracks Santa team. Check it out on December 24 at

  3. Fantastic front-end performance Part 1 – Concatenate, Compress & Cache – A Node.JS Holiday Season, part 4

    This is episode 4, out of a total of 12, in the A Node.JS Holiday Season series from Mozilla’s Identity team. It’s the first post about how to achieve better front-end performance.

    In this part of our “A Node.JS Holiday Season” series we’ll talk about front-end performance and introduce you to tools we’ve built and use in Mozilla to make the Persona front-end be as fast as possible.

    We’ll talk about connect-cachify, a tool to automate some of the most important parts of front-end performance.

    Before we do that, though, let’s quickly recap what we can do as developers to make our solutions run as smoothly as possible on our users’ machines. If you already know all about performance optimisations, feel free to proceed to the end and see how connect-cachify helps automate some of the things you might do by hand right now.

    Three Cs of client side performance

    The web is full of information related to performance best practices. While many advanced techniques exist to tweak every last millisecond from your site, three basic tools should form the foundation – concatenate, compress and cache.


    Concatenate

    The goal of concatenation is to minimize the number of requests made to the server. Server requests are costly. The amount of time needed to establish an HTTP connection is sometimes more expensive than the amount of time necessary to transfer the data itself. Every request adds to the overhead that it takes to view your site and can be especially problematic on mobile devices where there is significant connection latency. Have you ever browsed to a shopping site on your mobile phone while connected to the Edge network and grimaced as each image loaded one by one? That is connection latency rearing its head.

    SPDY is a new protocol built on top of HTTP that aims to reduce page load time by combining resource requests into a single HTTP connection. Unfortunately, at the present time only recent versions of Firefox, Chrome and Opera support this new protocol.

    Combining external resources wherever possible, though old fashioned, works across all browsers and does not degrade with the advent of SPDY. Tools exist to combine the three most common types of external resources – JavaScript, CSS and images.

    JavaScript & CSS

    A site with more than one external JavaScript inclusion should consider combining the scripts into a single file for production. Browsers have traditionally blocked all other rendering while JavaScript is downloaded and processed. Since each requested JavaScript resource carries with it a certain amount of latency, the following is slower than it needs to be:

      <script src="jquery.min.js"></script>
      <script src="main.js"></script>
      <script src="image-carousel.js"></script>
      <script src="widget.js"></script>

    By combining four requests into one, the total amount of time the browser is blocked due to latency will be significantly reduced.

      <script src="main.production.js"></script>

    Working with combined JavaScript while still in development can be very difficult, so concatenation is usually only done for a production site.

    Like JavaScript, individual CSS files should be combined into a single resource for production. The process is the same.


    Images

    Data URIs and image sprites are the two primary methods that exist to reduce the number of requested images.

    data: URI

    A data URI is a special form of a URL used to embed images directly into HTML or CSS. Data URIs can be used in either the src attribute of an img tag or as the url value of a background-image in CSS. Because embedded images are base64 encoded, they require more bytes but one less HTTP request than the original external binary image. If the included image is small, the increased byte size is usually more than offset by the reduction in the number of HTTP requests. Neither IE6 nor IE7 supports data URIs, so know your target audience before using them.
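    For illustration, constructing a data URI is straightforward. This sketch uses Node’s Buffer for the base64 step; in the browser you’d typically rely on a build tool or canvas.toDataURL instead:

```javascript
// Build a data: URI from raw image bytes.
function toDataUri(mimeType, bytes) {
  return 'data:' + mimeType + ';base64,' + Buffer.from(bytes).toString('base64');
}

// The first three bytes of a GIF file ("GIF"), just to show the shape:
const uri = toDataUri('image/gif', [0x47, 0x49, 0x46]);
// uri === 'data:image/gif;base64,R0lG'
```

    Note the base64 expansion: three bytes of image data become four characters of text, which is why this only pays off for small images.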

    Image sprites

    Image sprites are a great alternative whenever a data URI cannot be used. An image sprite is a collection of images combined into a single larger image. Once the images are combined, CSS is used to show only the relevant portion of the sprite. Many tools exist to create a sprite out of a collection of images.

    A drawback to sprites comes in the form of maintenance. The addition, removal or modification of an image within the sprite requires a congruent change to the CSS.

    Sprite Cow helps you get the background-position, width and height of sprites within a spritesheet as a nice bit of copyable CSS.

    Removing extra bytes – minification, optimization & compression

    Combining resources to minimize the number of HTTP requests goes a long way to speeding up a site, but we can still do more. After combining resources, the number of bytes that are transferred to the user should be minimized. Minimizing bytes usually takes the form of minification, optimization and compression.

    JavaScript & CSS

    JavaScript and CSS are text resources that can be effectively minified. Minification is a process that transforms the original text by eliminating anything that is irrelevant to the browser. Transformations to both JavaScript and CSS start with the removal of comments and extra whitespace. JavaScript minification is much more complex. Some minifiers perform transforms that replace multi-character variable names with a single character, remove language constructs that are not strictly necessary and even go so far as to replace entire statements with shorter equivalent statements.

    UglifyJS, YUICompressor and Google Closure Compiler are three popular tools to minify JavaScript.

    Two CSS minifiers include YUICompressor and UglifyCSS.


    Images

    Images frequently contain data that can be removed without affecting their visual quality. Removing these extra bytes is not difficult, but does require specialized image handling tools. Our own Francois Marier has written two blog posts on working with PNGs and with GIFs. Smush.it from Yahoo! is an online optimization tool. ImageOptim is an equivalent offline tool for OS X – simply drag and drop your images into the tool and it will reduce their size automatically. You don’t need to do anything – ImageOptim simply replaces the original files with the much smaller ones.

    If a loss of visual quality is acceptable, re-compressing an image at a higher compression level is an option.

    The Server Can Help Too!

    Even after combining and minifying resources, there is more. Almost all servers and browsers support HTTP compression. The two most popular compression schemes are deflate and gzip. Both of these make use of efficient compression algorithms to reduce the number of bytes before they ever leave the server.


    Cache

    Concatenation and compression help first time visitors to our sites. The third C, caching, helps visitors that return. A user who returns to our site should not have to re-download all of the resources again. HTTP provides two widely adopted mechanisms to make this happen, cache headers and ETags.

    Cache headers come in two forms and are suitable for static resources that change infrequently, if ever. The two header options are Expires and Cache-Control: max-age. The Expires header specifies the date after which the resource must be re-requested. max-age specifies how many seconds the resource is valid for. If a resource has a cache header, the browser will only re-request that resource once the cache expiration date has passed.

    An ETag is essentially a resource version tag that is used to validate whether the local version of a resource is the same as the server’s version. An ETag is suitable for dynamic content or content that can change at any time. When a resource has an ETag, it says to the browser “Check the server to see if the version is the same; if it is, use the version you already have.” Because an ETag requires interaction with the server, it is not as efficient as a fully cached resource.


    Cache-busting

    The advantage of using time/date based cache-control headers instead of ETags is that resources are only re-requested once the cache has expired. This is also its biggest drawback. What happens if a resource changes? The cache has to somehow be busted.

    Cache-busting is usually done by adding a version number to the resource URL. Any change to a resource’s URL causes a cache-miss, which in turn causes the resource to be re-downloaded.

    For example, if has a cache header set to expire in one year but the logo changes, users who have already downloaded the logo will only see the update a year from now. This can be fixed by adding some sort of version identifier to the URL.


    When the logo is updated, a new version is used, meaning the logo will be re-requested.


    Connect-cachify – A NodeJS library to serve concatenated and cached resources

    Connect-cachify is a NodeJS middleware developed by Mozilla that makes it easy to serve up properly concatenated and cached resources.

    While in production mode, connect-cachify serves up pre-generated production resources with a cache expiration of one year. If not in production mode, individual development resources are served instead, making debugging easy. Connect-cachify does not perform concatenation and minification itself but instead relies on you to do this in your project’s build script.

    Configuration of connect-cachify is done through the setup function. Setup takes two parameters, assets and options. assets is a dictionary of production to development resources. Each production resource maps to a list of its individual development resources.

    options is an optional dictionary that can take the following values:

    • prefix – String to prepend to the hash in links. (Default: none)
    • production – Boolean indicating whether to serve development or production resources. (Defaults to true)
    • root – The fully qualified path from which static resources are served. This is the same value that you’d send to the static middleware. (Default: ‘.’)

    Example of connect-cachify in action

    First, let’s assume we have a simple HTML file we wish to use with connect-cachify. Our HTML file includes three CSS resources as well as three JavaScript resources.

      <title>Dashboard: Hamsters of North America</title>
      <link rel="stylesheet" type="text/css" href="/css/reset.css" />
      <link rel="stylesheet" type="text/css" href="/css/common.css" />
      <link rel="stylesheet" type="text/css" href="/css/dashboard.css" />
      <script type="text/javascript" src="/js/lib/jquery.js"></script>
      <script type="text/javascript" src="/js/magick.js"></script>
      <script type="text/javascript" src="/js/laughter.js"></script>

    Set up the middleware

    Next, include the connect-cachify library in your NodeJS server. Create your production to development resource map and configure the middleware.

    // Include connect-cachify
    const cachify = require('connect-cachify');

    // Create a map of production to development resources
    var assets = {
      "/js/main.min.js": [
        "/js/lib/jquery.js",
        "/js/magick.js",
        "/js/laughter.js"
      ],
      "/css/dashboard.min.css": [
        "/css/reset.css",
        "/css/common.css",
        "/css/dashboard.css"
      ]
    };

    // Hook up the connect-cachify middleware
    app.use(cachify.setup(assets, {
      root: __dirname,
      production: your_config['use_minified_assets']
    }));

    To keep code DRY, the asset map can be externalized into its own file and used as configuration to both connect-cachify and your build script.

    Update your templates to use cachify

    Finally, your templates must be updated to indicate where production JavaScript and CSS should be included. JavaScript is included using the “cachify_js” helper whereas CSS uses the “cachify_css” helper.

      <title>Dashboard: Hamsters of North America</title>
      <%- cachify_css('/css/dashboard.min.css') %>
      <%- cachify_js('/js/main.min.js') %>

    Connect-cachified output

    If the production flag is set to false in the configuration options, connect-cachify will generate three link tags and three script tags, exactly as in the original. However, if the production flag is set to true, only one of each tag will be generated. The URL in each tag will have the MD5 hash of the production resource prepended onto its URL. This is used for cache-busting. When the contents of the production resource change, its hash also changes, effectively breaking the cache.

      <title>Dashboard: Hamsters of North America</title>
      <link rel="stylesheet" type="text/css" href="/v/2abdd020a6/css/dashboard.min.css" />
      <script type="text/javascript" src="/v/acdd2ab372/js/main.min.js"></script>

    That’s all there is to setting up connect-cachify.


    There are a lot of easy wins when looking to speed up a site. By going back to basics and using the three Cs – concatenation, compression and caching – you will go a long way towards improving the load time of your site and the experience for your users. Connect-cachify helps with concatenation and caching in your NodeJS apps but there is still more we can do. In the next installment of A NodeJS Holiday Season, we will look at how to generate dynamic content and make use of ETagify to still serve maximally cacheable resources.

    Previous articles in the series

    This was part four in a series with a total of 12 posts about Node.js. The previous ones are:

  4. Firefox OS Simulator 1.0 is here!

    Three weeks back, we introduced the Firefox OS Simulator, a tool that allows web developers to try out their apps in Firefox OS from the comfort of their current Windows/Mac/Linux computers. We’ve seen a number of comments from people who used the Simulator as an easy way to get a peek at Firefox OS today, which is fine, too.

    Since that last blog post, the team has been squashing bugs to get the Simulator ready for more use. Linux users in particular will be happy to know that the current release will run on many more systems. With today’s 1.0 release, we hope to see many more users!


    We’re leaving the “Preview” tag on the Simulator for now, both because the Simulator is new and because Firefox OS itself is still in development. This is a perfect time to create apps that work on Firefox OS (and Android and the web at large!) because your apps can be ready for the big launch.

    Install the Firefox OS Simulator today!

    Screencast showing Firefox OS Simulator in action

    (If you’ve opted in to HTML5 video on YouTube you will get that; otherwise it will fall back to Flash)

    Getting help

    If you spot any bugs, please file them on GitHub. Got a question? You can ask us on the dev-webapps mailing list or on #openwebapps on

  5. Firebug 1.11 New Features

    Firebug 1.11 has been released and so, let’s take a look at some of the new features introduced in this version.


    First of all, check out the following compatibility table:

    • Firebug 1.10 with Firefox 13.0 – 17.0
    • Firebug 1.11 with Firefox 17.0 – 20.0

    Firebug 1.11 is an open source project surrounded by contributors and volunteers, so let me also introduce all the developers who contributed to Firebug 1.11:

    • Jan Odvarko
    • Sebastian Zartner
    • Simon Lindholm
    • Florent Fayolle
    • Steven Roussey
    • Farshid Beheshti
    • Harutyun Amirjanyan
    • Bharath Thiruveedula
    • Nikhil Verma
    • Antanas Arvasevicius
    • Chris Coulson

    New Features

    SPDY Support

    Are you optimizing your page and using the SPDY protocol? Cool: the Net panel now indicates whether the protocol is in action.

    Performance Timing Visualization

    Another feature related to page load performance. If you are analyzing performance.timing, you can simply log the timing data into the Console panel and check out a nice interactive graph presenting all the information graphically.

    Just execute the following expression on Firebug’s Command Line:


    Read detailed description of this feature.

    CSS Query Selector Tool

    Firebug offers a new side panel (available in the CSS panel) that allows quick execution of CSS selectors. You can either insert your own CSS selector or get a list of matching elements for an existing selector.

    In order to see the list of matching elements for an existing CSS rule, just right-click on the rule and pick the Get Matching Elements menu item. The list of elements (together with the CSS selector) will be displayed in the side panel.

    New include() command

    Firebug supports a new command called include(). This command can be executed on the Command Line and used to include a JavaScript file into the current page.

    The simplest usage looks as follows:

    If you are often including the same script (e.g. jqueryfying your page), you can create an alias.
    include("", "jquery");

    And use the alias as follows:

    include("jquery");

    To see a list of all defined aliases, type include() with no arguments. Note that aliases are persistent across Firefox restarts.

    Read detailed description of this command on Firebug wiki.

    Observing window.postMessage()

    Firebug improves the way message events (those generated by the window.postMessage() method) are displayed in the Console panel.

    The log now displays:

    • origin window/iframe URL
    • data associated with the message
    • target window/iframe object

    See detailed explanation of the feature.

    Copy & Paste HTML

    It’s now possible to quickly clone entire parts of HTML markup by Copy & Paste. The Copy HTML action has been available for some time, but Paste HTML is new. Note that Copy & Paste for XML and SVG is also supported!

    See detailed explanation of the feature.

    Styled Logging

    The way console logs can be styled using custom CSS (the %c formatting variable) has been enhanced. You can now use several style formatters in one log.

    console.log("%cred-text %cgreen-text", "color:red", "color:green");

    See detailed explanation of this feature.

    Log Function Calls

    The Log Function Calls feature has been improved: it now also shows the stack trace of the place where the monitored function is executed.

    See detailed explanation of this feature.

    Improved $() and $$() commands

    Firebug also improves existing commands for querying DOM elements.

    • The $() command uses querySelector()
    • The $$() command uses querySelectorAll()

    It also means that the argument passed in must be a CSS selector.

    So, for example, if you need to get an element with an ID equal to “content”, you need to use the # character:

      $("#content")
    If you forget, Firebug will nicely warn you :-)

    Autocompletion for built-in properties

    The Command Line supports auto-completion even for built-in members of String.prototype or Object.prototype and other objects.

    There are many other improvements and you can see the entire list in our release notes. You can also see the official announcement on

    Follow us on Twitter to be updated!

    Jan ‘Honza’ Odvarko

  6. Performance with JavaScript String Objects

    This article takes a look at the performance of JavaScript engines when handling primitive value Strings and Object Strings. It is a showcase of benchmarks related to the excellent article by Kiro Risk, The Wrapper Object. Before proceeding, I would suggest visiting Kiro’s page first as an introduction to this topic.

    The ECMAScript 5.1 Language Specification (PDF link) states at paragraph 4.3.18 about the String object:

    String object: member of the Object type that is an instance of the standard built-in String constructor

    NOTE A String object is created by using the String constructor in a new expression, supplying a String value as an argument.
    The resulting object has an internal property whose value is the String value. A String object can be coerced to a String value
    by calling the String constructor as a function (15.5.1).

    and David Flanagan’s great book “JavaScript: The Definitive Guide”, very meticulously describes the Wrapper Objects at section 3.6:

    Strings are not objects, though, so why do they have properties? Whenever you try to refer to a property of a string s, JavaScript converts the string value to an object as if by calling new String(s). [...] Once the property has been resolved, the newly created object is discarded. (Implementations are not required to actually create and discard this transient object: they must behave as if they do, however.)

    It is important to note the text in bold above. Basically, the different ways a new String object is created are implementation specific. As such, an obvious question one could ask is: “since a primitive value String must be coerced to a String Object when trying to access a property, for example str.length, would it be faster if we had instead declared the variable as a String Object?”. In other words, could declaring a variable as a String Object, i.e. var str = new String("hello"), rather than as a primitive value String, i.e. var str = "hello", potentially save the JS engine from having to create a new String Object on the fly in order to access its properties?

    Those who deal with implementing the ECMAScript standards in JS engines already know the answer, but it’s worth taking a deeper look at the common suggestion “Do not create numbers or strings using the ‘new’ operator”.
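As a quick refresher on why that suggestion exists, the two declarations already behave differently before any benchmarking (a sketch you can verify in any console; not from the article itself):

```javascript
var strprimitive = "hello";
var strobject = new String("hello");

// the two declarations produce different types...
console.log(typeof strprimitive); // "string"
console.log(typeof strobject);    // "object"

// ...and behave differently under comparison:
console.log(strprimitive == strobject);  // true  (the object is coerced)
console.log(strprimitive === strobject); // false (different types)
```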

    Our showcase and objective

    For our showcase, we will use mainly Firefox and Chrome; the results, though, would be similar if we chose any other web browser, as we are focusing not on a speed comparison between two different browser engines, but on a speed comparison between two different versions of the source code in each browser (one version using a primitive value string, and the other a String Object). In addition, we are interested in how the same cases compare in speed across subsequent versions of the same browser. The first sample of benchmarks was collected on the same machine, and then other machines with different OS/hardware specs were added in order to validate the speed numbers.

    The scenario

    For the benchmarks, the case is rather simple; we declare two string variables, one as a primitive value string and the other as an Object String, both of which have the same value:

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

    and then we perform the same kind of tasks on them. (notice that in the jsperf pages strprimitive = str1, and strobject = str2)

    1. length property

      var i = strprimitive.length;
      var k = strobject.length;

    If we assume that during runtime the wrapper object created from the primitive value string strprimitive is treated equally with the object string strobject by the JavaScript engine in terms of performance, then we should expect to see the same latency when accessing each variable’s length property. Yet, as we can see in the following bar chart, accessing the length property is a lot faster on the primitive value string strprimitive than on the object string strobject.

    (Primitive value string vs Wrapper Object String – length, on jsPerf)

    Actually, on Chrome 24.0.1285 calling strprimitive.length is 2.5x faster than calling strobject.length, and on Firefox 17 it is about 2x faster (while also performing more operations per second). Consequently, we realize that the corresponding browser JavaScript engines apply some “short paths” to access the length property when dealing with primitive string values, with special code blocks for each case.

    In the SpiderMonkey JS engine, for example, the pseudo-code that deals with the “get property” operation looks something like the following:

      // direct check for the "length" property
      if (typeof(value) == "string" && property == "length") {
        return StringLength(value);
      }

      // generalized code form for properties
      object = ToObject(value);
      return InternalGetProperty(object, property);

    Thus, when you request a property of a string primitive and the property name is “length”, the engine immediately returns its length, avoiding the full property lookup as well as the temporary wrapper object creation. Unless we add a property/method to String.prototype that requests |this|, like so:

      String.prototype.getThis = function () { return this; }

    no wrapper object will be created when accessing the String.prototype methods, as for example String.prototype.valueOf(). Each JS engine has similar optimizations embedded in order to produce faster results.

    2. charAt() method

      var i = strprimitive.charAt(0);
      var k = strobject["0"];

    (Primitive value string vs Wrapper Object String – charAt(), on jsPerf)

    This benchmark clearly verifies the previous statement, as we can see that getting the value of the first string character in Firefox 20 is substantially faster for strprimitive than for strobject, about 70x faster. Similar results apply to other browsers as well, though at different speeds. Also, notice the differences between incremental Firefox versions; this is just another indicator of how small code variations can affect the JS engine’s speed for certain runtime calls.

    3. indexOf() method

      var i = strprimitive.indexOf("e");
      var k = strobject.indexOf("e");

    (Primitive value string vs Wrapper Object String – IndexOf(), on jsPerf)

    Similarly in this case, we can see that the primitive value string strprimitive can perform more operations per second than strobject. In addition, the JS engine differences between sequential browser versions produce a variety of measurements.

    4. match() method

    Since there are similar results here too, to save some space, you can click the source link to view the benchmark.

    (Primitive value string vs Wrapper Object String – match(), on jsPerf)

    5. replace() method

    (Primitive value string vs Wrapper Object String – replace(), on jsPerf)

    6. toUpperCase() method

    (Primitive value string vs Wrapper Object String – toUpperCase(), on jsPerf)
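The snippets for benchmarks 4–6 are elided above; presumably they mirror the earlier cases, along these lines (my assumption, not the exact jsPerf code):

```javascript
var strprimitive = "Hello";
var strobject    = new String("Hello");

// 4. match()
var m1 = strprimitive.match(/e/);
var m2 = strobject.match(/e/);

// 5. replace()
var r1 = strprimitive.replace("e", "a");
var r2 = strobject.replace("e", "a");

// 6. toUpperCase()
var u1 = strprimitive.toUpperCase();
var u2 = strobject.toUpperCase();
```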

    7. valueOf() method

      var i = strprimitive.valueOf();
      var k = strobject.valueOf();

    At this point it starts to get more interesting. So, what happens when we try to call the most common method of a string, its valueOf()? It seems like most browsers have a mechanism to determine whether it’s a primitive value string or an Object String, and thus use a much faster way to get its value; surprisingly enough, Firefox versions up to v20 seem to favour the Object String method call of strobject, with a 7x increased speed.

    (Primitive value string vs Wrapper Object String – valueOf(), on jsPerf)

    It’s also worth mentioning that Chrome 22.0.1229 also seems to have favoured the Object String, while in version 23.0.1271 a new way to get the content of primitive value strings has been implemented.

    A simpler way to run this benchmark in your browser’s console is described in the comment of the jsperf page.

    8. Adding two strings

      var i = strprimitive + " there";
      var k = strobject + " there";

    (Primitive string vs Wrapper Object String – get str value, on jsPerf)

    Let’s now try concatenating the two strings with a primitive string value. As the chart shows, Firefox and Chrome present a 2.8x and 2x speed increase respectively in favour of strprimitive, compared with adding the Object string strobject to another string value.

    9. Adding two strings with valueOf()

      var i = strprimitive.valueOf() + " there";
      var k = strobject.valueOf() + " there";

    (Primitive string vs Wrapper Object String – str valueOf, on jsPerf)

    Here we can see again that Firefox favours strobject.valueOf(), since for strprimitive.valueOf() it moves up the inheritance tree and consequently creates a new wrapper object for strprimitive. The effect this chain of events has on performance can also be seen in the next case.

    10. for-in wrapper object

      var i = "";
      for (var temp in strprimitive) { i += strprimitive[temp]; }
      var k = "";
      for (var temp in strobject) { k += strobject[temp]; }

    This benchmark incrementally constructs the string’s value through a loop into another variable. In the for-in loop, the expression to be evaluated is normally an object; but if the expression is a primitive value, this value gets coerced to its equivalent wrapper object. Of course, this is not a recommended way to get the value of a string, but it is one of the many ways a wrapper object can be created, and thus it is worth mentioning.

    (Primitive string vs Wrapper Object String – Properties, on jsPerf)

    As expected, Chrome seems to favour the primitive value string strprimitive, while Firefox and Safari seem to favour the object string strobject. In case this seems fairly typical, let’s move on to the last benchmark.

    11. Adding two strings with an Object String

      var str3 = new String(" there");
      var i = strprimitive + str3;
      var k = strobject + str3;

    (Primitive string vs Wrapper Object String – 2 str values, on jsPerf)

    In the previous examples, we have seen that Firefox versions offer better performance if our initial string is an Object String, like strobject, and thus it would seem normal to expect the same when adding strobject to another object string, which is basically the same thing. It is worth noticing, though, that when adding a string to an Object String, it’s actually quite a bit faster in Firefox if we use strprimitive instead of strobject. This proves once more how source code variations, like a patch for a bug, lead to different benchmark numbers.


    Based on the benchmarks described above, we have seen how subtle differences in our string declarations can produce a series of different performance results. It is recommended that you continue to declare your string variables as you normally do, unless there is a very specific reason for you to create instances of the String Object. Also, note that a browser’s overall performance, particularly when dealing with the DOM, is not based only on the page’s JS performance; there is a lot more in a browser than its JS engine.

    Feedback comments are much appreciated. Thanks :-)

  7. Using secure client-side sessions to build simple and scalable Node.JS applications – A Node.JS Holiday Season, part 3

    This is episode 3, out of a total 12, in the A Node.JS Holiday Season series from Mozilla’s Identity team. It covers using sessions for scalable Node.js applications.

    Static websites are easy to scale. You can cache the heck out of them and you don’t have state to propagate between the various servers that deliver this content to end-users.

    Unfortunately, most web applications need to carry some state in order to offer a personalized experience to users. If users can log into your site, then you need to keep sessions for them. The typical way that this is done is by setting a cookie with a random session identifier and storing session details on the server under this identifier.

    Scaling a stateful service

    Now, if you want to scale that service, you essentially have three options:

    1. replicate that session data across all of the web servers,
    2. use a central store that each web server connects to, or
    3. ensure that a given user always hits the same web server

    These all have downsides:

    • Replication has a performance cost and increases complexity.
    • A central store will limit scaling and increase latency.
    • Confining users to a specific server leads to problems when that
      server needs to come down.

    However, if you flip the problem around, you can find a fourth option: storing the session data on the client.

    Client-side sessions

    Pushing the session data to the browser has some obvious advantages:

    1. the data is always available, regardless of which machine is serving a user
    2. there is no state to manage on servers
    3. nothing needs to be replicated between the web servers
    4. new web servers can be added instantly

    There is one key problem though: you cannot trust the client not to tamper with the session data.

    For example, if you store the user ID for the user’s account in a cookie, it would be easy for that user to change that ID and then gain access to someone else’s account.

    While this sounds like a deal breaker, there is a clever solution to work around this trust problem: store the session data in a tamper-proof package. That way, there is no need to trust that the user hasn’t modified the session data. It can be verified by the server.

    What that means in practice is that you encrypt and sign the cookie using a server key to keep users from reading or modifying the session data. This is what client-sessions does.


    If you use Node.JS, there’s a library available that makes getting started with client-side sessions trivial: node-client-sessions. It replaces Connect‘s built-in session and cookieParser middlewares.

    This is how you can add it to a simple Express application:

    const clientSessions = require("client-sessions");

    app.use(clientSessions({
      secret: '0GBlJZ9EKBt2Zbi2flRPvztczCewBxXK' // set this to a long random string!
    }));

    Then, you can set properties on the req.session object like this:

    app.get('/login', function (req, res){
      req.session.username = 'JohnDoe';
    });

    and read them back:

    app.get('/', function (req, res){
      res.send('Welcome ' + req.session.username);
    });

    To terminate the session, use the reset function:

    app.get('/logout', function (req, res) {
      req.session.reset();
    });

    Immediate revocation of Persona sessions

    One of the main downsides of client-side sessions as compared to server-side ones is that the server no longer has the ability to destroy sessions.

    Using a server-side scheme, it’s enough to delete the session data that’s stored on the server because any cookies that remain on clients will now point to a non-existent session. With a client-side scheme though, the session data is not on the server, so the server cannot be sure that it has been deleted on every client. In other words, we can’t easily synchronize the server state (user logged out) with the state that’s stored on the client (user logged in).

    To compensate for this limitation, client-sessions adds an expiry to the cookies. Before unpacking the session data stored in the encrypted cookie, the server will check that it hasn’t expired. If it has, it will simply refuse to honour it and consider the user as logged out.

    While the expiry scheme works fine in most applications (especially when it’s set to a relatively low value), in the case of Persona we needed a way for users to immediately revoke their sessions as soon as they learn that their password has been compromised.

    This meant keeping a little bit of state on the backend. The way we made this instant revocation possible was by adding a new token in the user table as well as in the session cookie.

    Every API call that looks at the cookie now also reads the current token value from the database and compares it with the token from the cookie. Unless they are the same, an error is returned and the user is logged out.

    The downside of this solution, of course, is the extra database read for each API call, but fortunately we already read from the user table in most of these calls, so the new token can be pulled in at the same time.

    Learn more

    If you want to give client-sessions a go, have a look at this simple demo application. Then if you find any bugs, let us know via our bug tracker.

    Previous articles in the series

    This was part three in a series with a total of 12 posts about Node.js. The previous ones are:

  8. Hacking Firefox OS

    This blog post is written by Luca Greco, a Mozilla community member who loves to hack, especially on JavaScript and other web-related technologies.

    A lot of developers are already creating mobile applications using Web technologies (powered by containers like PhoneGap/Cordova, for example), usually to develop cross-platform applications or to leverage their existing code and/or expertise.

    As a consequence Firefox OS is a very intriguing project for quite a few reasons:

    • Web apps have first-class access to the platform
    • Web apps are native (fewer abstraction levels and better performance)
    • The platform itself is web-based (and customizable using web technologies)

    Mobile platforms based on web technologies may well be the future. We can now touch that future and, even more important, help define it and push it further, thanks to a platform (Firefox OS) developed completely in the open. I could not resist the temptation, so I started to dive into the Firefox OS code, scrape it using MXR and study the documentation on the wiki.

    Hack On Firefox OS

    In a couple of weeks I was ready to put together an example app and an unofficial presentation on the topic (presented at the local LinuxDay 2012):

    Slides: “Hack On Firefox OS”

    Example App: Chrono for Gaia (user interface of Firefox OS):

    During this presentation I tried to highlight an important strength I believe Firefox OS and Firefox have in common:

    “It’s not a static build, it’s alive! a truly interactive environment. Like the Web”

    I can telnet into a B2G instance and look around for interesting stuff, or run JavaScript snippets to interactively experiment with the new WebAPIs. There’s more than one option for getting a remote JavaScript shell inside B2G:

    • Marionette, mainly used for automated tests
    • B2G Remote JS Shell, a minimal JavaScript shell optionally exposed on a tcp port (which I think will be deprecated in future releases)

    Unfortunately these tools currently lack integrated inspection utilities (e.g. console.log/console.dir, or MozRepl’s repl.inspect), so during the presentation I opted to install MozRepl as an extension in the B2G simulator; but in the last few weeks the Remote Web Console has landed in Firefox Nightly.

    Obviously the Remote Web Console isn’t mature yet, so we need to be prepared for bugs and silent errors (e.g. we need to be sure B2G has the “remote debugger” enabled or it will fail without errors), but it features object inspection, network progress logging, and JavaScript and CSS errors (like our local Firefox Web Console).

    Developing for Firefox OS

    In my experience, developing an Open Web App for Firefox OS isn’t really different from developing hybrid applications based on PhoneGap-like technologies:

    We’ll try to code and test the major parts of our application in desktop browsers and their development tools, using mockups/shims in place of native functionality. During my two weeks of studying, I’ve collected an interesting amount of working notes that may be useful to share, so in the next sections I’m going to:

    • Review development workflow and tools
    • Put together a couple of useful tips and tricks
    • Release to the public (independently and in the Mozilla Marketplace)

    As an example app, I’ve chosen to create a simple chronometer:

    Workflow and Tools

    During this experience my toolset was composed of:

    • VoloJS – development server and automated production build (minify js/css, generate manifest.appcache)
    • Firefox Nightly Web Developer Tools – Markup View, Tilt, Responsive Design View and Web Console
    • R2D2B2G – integrate/interoperate with B2G from a Firefox Addon

    Using Volo, I can test my app from volo’s integrated HTTP server, split JavaScript code into modules using Require.js, and finally generate a production version, minified and optionally equipped with an auto-generated manifest.appcache.

    During my development cycle I iteratively:

    • Make a change
    • Reload and inspect changes on desktop browser
    • Try changes on b2g-simulator using R2D2B2G
    • Debug on desktop browser or remotely on b2g-simulator
    • Restart the loop :-)

    Using my favorite desktop browser (Firefox, obviously :-P), I have the chance to use very powerful inspection/debugging tools, not usually available on mobile web-runtimes:

    • Markup Viewer to inspect and change DOM tree state
    • Styles Editor to inspect and change CSS properties
    • Tilt to check where offscreen dom elements are located
    • Web Console to inspect and change the JavaScript environment

    Thanks to new Mozilla projects like “Firefox OS” and “Firefox for Android”, more and more of these tools are now available as “Remote Web Tools” and can be connected to a remote instance.

    Tips and Tricks

    Gaia UI Building Blocks

    Gaia isn’t only a B2G UI implementation; it’s a design style guide and a collection of pre-built CSS styles which implement these guidelines:

    We can import components’ styles from the repo above and apply them to our app to achieve a really nice native look & feel on Firefox OS. Some components are not stable, which means they can interact badly with other components’ styles or don’t work perfectly on all platforms (e.g. Firefox Desktop or Firefox for Android), but usually it’s nothing we can’t fix by using some custom, more specific CSS rules.

    It doesn’t feel like a mature and complete CSS framework (e.g. Bootstrap) but it’s promising and I’m sure it will get better.

    Using the Responsive Design View we can test different resolutions (and orientations), which helps us reach a good and consistent result even without testing our app on Firefox OS or Firefox for Android devices. But we should keep an eye on dpi-related tweaks, because we currently can’t fully tell how the app will look using our desktop browser.

    App Panels

    A lot of apps need more than one panel, so I first looked inside the official Gaia applications to understand how native apps implement this almost mandatory feature. This is how the Gaia Clock application appears through a “Tilt” eye:

    Panels are simple DOM elements (e.g. a section or a div tag) initially positioned offscreen and moved on screen using a CSS transition:

    In the “Chrono” app, you will find this strategy in the Drawer (an unstable Gaia UI Building Block):

    and in the Laps and About panels (combined with the :target pseudo-class):
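The snippets themselves are not reproduced here, but the offscreen-panel technique can be sketched like this (a hypothetical example; these are not Gaia’s actual class names):

```css
/* panel starts offscreen, to the right of the viewport */
section.panel {
  position: absolute;
  top: 0;
  left: 100%;
  width: 100%;
  height: 100%;
  transition: transform 0.3s ease;
}

/* sliding it in is just a transform toggle */
section.panel.active {
  transform: translateX(-100%);
}
```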

    The Fascinating -moz-element trick

    This is a very fascinating trick, used in the time selector component on Firefox OS:

    and in the Markup View on Firefox Nightly Desktop (currently disabled):

    Thanks to this non-standard CSS feature, we can use a DOM element as the background image of another one, e.g. integrating a complex offscreen visual component into the visible space as a single DOM element.
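In CSS the trick boils down to a single declaration (the element id here is hypothetical):

```css
/* paint the offscreen #picker-strip element as this element's background */
.picker-preview {
  background: -moz-element(#picker-strip);
}
```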

    Fragments from index.html

    Fragments from chrono.css

    Fragments from chrono.js

    Using -moz-element and -moz-calc (which computes component sizes inside CSS rules, and is already included in CSS3 as calc) is really simple, but you can learn more on the subject on MDN:

    Release to the public

    WebApp Manifest

    During our development cycle we install our application into the B2G simulator using an R2D2B2G menu option, so we don’t need a real manifest.webapp; but we have to create one when we are ready for a public release, or to release the app to test users.

    Creating a manifest.webapp is not difficult; it’s only a simple and well-documented JSON file format: App/Manifest on MDN.
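For reference, a minimal manifest.webapp along the lines of the documented format might look like this (the values are illustrative, not the actual Chrono manifest):

```json
{
  "name": "Chrono",
  "description": "Gaia Chronometer Example App",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icon.png"
  },
  "developer": {
    "name": "Luca Greco"
  }
}
```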

    Debugging problems related to this manifest file is still an unknown territory, and some tips can be useful:

    • If the manifest file contains a syntax error or cannot be downloaded, an error will be silently reported in the old Error Console (no, it will not be reported inside the new Web Console)
    • If your application is accessible as a subdirectory of its domain, you have to include this path in the resource paths specified inside the manifest (e.g. launch_path, appcache_path, icons); more on this later
    • You can add an uninstall button to your app to help you as a developer (and your test users) uninstall the app in a platform-independent way (because “how to uninstall” an installed webapp differs depending on whether you are on Desktop, Android or Firefox OS)

    Using the OpenWebApps APIs, I added to “Chrono” some code to give users the ability to install it:

    Install an app from your browser to your desktop system

    Check if it’s already installed (as a webapp runtime app or a self-service installer in a browser tab):
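The two snippets are elided above; against the legacy navigator.mozApps API they can be sketched roughly like this (the function names are mine, and the API object is passed in so the sketch can be exercised outside a browser):

```javascript
// Install the app described by its manifest.webapp, and check whether the
// page is already running as an installed app.
function installApp(mozApps, manifestUrl) {
  // in a browser this would be navigator.mozApps.install(manifestUrl)
  return mozApps.install(manifestUrl);
}

function checkInstalled(mozApps, callback) {
  // getSelf() resolves to the app record, or null when running in a plain tab
  var request = mozApps.getSelf();
  request.onsuccess = function () {
    callback(request.result !== null);
  };
  return request;
}
```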

    On a Linux desktop, when you install an OpenWebApp from Firefox, it will create a new launcher (a “.desktop” file) in your “.local/share/applications” hidden directory:

    $ cat ~/.local/share/applications/owa-http\; 
    [Desktop Entry]
    Comment=Gaia Chronometer Example App
    [Desktop Action Uninstall]
    Name=Uninstall App
    Exec=/home/rpl/.http; -remove

    As you will notice, current conventions (and the implementation) support only one application per domain; if you take a look inside the hidden directory of our installed webapp, you will find a single webapp.json config file:

    $ ls /home/rpl/.http\;
    Crash Reports  icon.png  profiles.ini  webapp.ini  webapp.json  webapprt-stub  

    The reasons for this limitation are documented on MDN: FAQs about app manifests

    To help you debug problems when your app is running inside the webapp runtime, you can run it from the command line and enable the old (but still useful) Error Console:

    $ ~/.http\; -jsconsole

    Uninstalling an OpenWebApp is really simple: you can manually remove it using the “webapprt-stub” executable in the “OpenWebApp hidden directory” (a platform-dependent method):

    $ ~/.http\; -remove

    Or from JavaScript code, as I’ve done in “Chrono” to give users the ability to uninstall the app from a Firefox browser tab:
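
    A sketch of that self-uninstall path, assuming the early API of the era, in which the App object returned by getSelf() still exposed uninstall() directly (later versions restricted uninstallation to navigator.mozApps.mgmt):

    ```javascript
    // Sketch: let the app uninstall itself from a browser tab.
    // `mozApps` is navigator.mozApps in a real page.
    function uninstallSelf(mozApps, cb) {
      var request = mozApps.getSelf();
      request.onsuccess = function () {
        var app = this.result;
        if (!app) { return cb('NOT_INSTALLED'); }
        // early API: uninstall() lived on the App object itself
        var unrequest = app.uninstall();
        unrequest.onsuccess = function () { cb(null, true); };
        unrequest.onerror = function () { cb(this.error.name); };
      };
      request.onerror = function () { cb(this.error.name); };
    }
    ```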

    AppCache Manifest

    This feature was integrated into major browsers a long time ago, but now, thanks to OpenWebApps, it’s becoming truly mandatory: without a manifest.appcache, and proper JavaScript code to handle upgrades, our WebApp will not work offline correctly and will not feel like a real installed application.

    Currently AppCache is a kind of black magic, and it deserves a “Facts Page” like Chuck Norris: AppCacheFacts.

    Thanks to the volo-appcache command, generating a manifest.appcache is as simple as a single command:

      $ volo appcache
      $ ls -l www-build/manifest.appcache
      $ tar cjvf release.tar.bz2 www-build

    Unfortunately when you need to debug/test your manifest.appcache, you’re on your own, because currently there isn’t any friendly debugging tool integrated into Firefox:

    • appcache downloading progress (and errors) are not currently reported in the Web Console
    • appcache errors don’t contain an error message/description
    • Firefox for Android and Firefox OS don’t have a UI to clear your applicationCache

    Debugging appcache problems can be very tricky, so here are a couple of tricks I learned during this experiment:

    • Subscribe to every window.applicationCache event (‘error’, ‘checking’, ‘noupdate’, ‘progress’, ‘downloading’, ‘cached’, ‘updateready’ etc.) and log all received events / error messages during development and debugging
    • Add upgrade-handling code in your first public release (or prepare yourself to go house by house helping your users upgrade :-D)
    • On Firefox for Desktop you can clear the applicationCache from the Preferences dialog
    • Analyze server-side logs to understand where the “appcache updating” is stuck
    • Activate logging on applicationCache internals when you run Firefox or B2G to understand why it’s stuck

      export NSPR_LOG_MODULES=nsOfflineCacheUpdate:5
      export NSPR_LOG_FILE=offlineupdate.log
      firefox -no-remote -ProfileManager &
      tail -f offlineupdate.log
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Init [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::AddObserver [7fc56a9fcc08] to update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::AddObserver [7fc55c3264d8] to update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Schedule [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::Schedule [7fc57428dac0, update=7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::ProcessNextUpdate [7fc57428dac0, num=1]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Begin [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 2]
      -1614710976[7fc59e91f590]: 7fc559d0df00: Opening channel for http://html5dev:8888/gaia-chrono-app/manifest.appcache
      -1614710976[7fc59e91f590]: loaded 3981 bytes into offline cache [offset=0]
      -1614710976[7fc59e91f590]: Update not needed, downloaded manifest content is byte-for-byte identical
      -1614710976[7fc59e91f590]: done fetching offline item [status=0]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::LoadCompleted [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 3]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Finish [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::UpdateFinished [7fc57428dac0, update=7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::ProcessNextUpdate [7fc57428dac0, num=0]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 10]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::RemoveObserver [7fc56a9fcc08] from update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::RemoveObserver [7fc55c3264d8] from update [7fc55959ce50]
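
    The first tip above can be as simple as the following sketch (the event-name list follows the applicationCache interface; the logging function is up to you):

    ```javascript
    // Sketch: subscribe to every applicationCache event and log it, so a
    // stuck update at least leaves a trace. `appCache` is
    // window.applicationCache in a real page.
    var APPCACHE_EVENTS = ['checking', 'error', 'noupdate', 'downloading',
                           'progress', 'updateready', 'cached', 'obsolete'];

    function logAppCacheEvents(appCache, log) {
      APPCACHE_EVENTS.forEach(function (name) {
        appCache.addEventListener(name, function (evt) {
          log('applicationCache event: ' + name);
        }, false);
      });
    }

    // logAppCacheEvents(window.applicationCache, console.log);
    ```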

    Adding applicationCache support to “Gaia Chrono App”, I used all these tricks to finally discover that Firefox didn’t send me an “updateready” event, so I was not able to tell the user to reload the page to start using the new (and already cached) version. With a better understanding of the problem, and after searching the code on MXR and the tickets on Bugzilla, I finally found an existing ticket in the bugtracker: Bug 683794: onupdateready event not fired when an html5 app cache app is updated.

    Working around this bug is really simple (simpler than tracking it down): add a dummy “updateready” listener on the applicationCache object in a script tag to be sure the event will be fired:
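
    A sketch of the workaround (the dummy listener body really can be empty; registering it early, e.g. in an inline script tag, is what matters):

    ```javascript
    // Sketch of the Bug 683794 workaround: a dummy "updateready" listener
    // registered early makes Firefox dispatch the event, so the real
    // handler can prompt the user to reload. `appCache` is
    // window.applicationCache in a real page.
    function armUpdateReady(appCache, onReady) {
      appCache.addEventListener('updateready', function () {}, false); // dummy
      appCache.addEventListener('updateready', onReady, false);        // real one
    }

    // armUpdateReady(window.applicationCache, function () {
    //   // tell the user a new cached version is ready, then reload
    // });
    ```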

    If you are going to start using this feature (and sooner or later you will), be prepared to:

    • Implement it as the standard suggests
    • Debug the reason why it doesn’t behave as it should
    • Search for an existing bug, or file one if the problem isn’t reported yet (NOTE: this is really important!!! :-D)
    • Find a workaround

    This is definitely a feature that needs more support in web developer tools, because a regular web developer doesn’t want to debug their webapp from a “browser internals” point of view.

    Port to Firefox for Android

    An interesting feature of OpenWebApps is that “they can be installed on any supported platform with (almost) no changes”. As an example, we can install our “Chrono” app on our desktop using Firefox Nightly, and on Android using Firefox for Android Nightly.

    In my opinion, Firefox for Android can be a strategic platform for the future of OpenWebApps and even Firefox OS: Android is an already popular mobile platform, and giving developers the option to release their application on Firefox OS and Android from a single codebase is a big plus.

    The only problem I faced porting the “Chrono” app to Android was related to the different rendering behaviour of Firefox for Android (and, as a consequence, of the WebAppRT that contains our application):

    A GeckoScreenshot service forces a repaint only when and where it detects changes. This feature interacts badly with the -moz-element trick, and it needs some help to understand what really needs to be repainted:
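
    One illustrative way to give it that help (an assumption on my part, not necessarily the exact fix used in Chrono) is to keep invalidating the element that hosts the -moz-element background:

    ```javascript
    // Sketch: nudge Gecko's repaint detection by touching a cheap style
    // property on every animation frame. `requestFrame` is injected
    // (window.requestAnimationFrame, or the moz-prefixed variant of the era).
    function keepRepainting(el, requestFrame) {
      var tick = 0;
      function frame() {
        // alternate between two imperceptible values to mark the area dirty
        el.style.opacity = (tick++ % 2) ? '0.999' : '1';
        requestFrame(frame);
      }
      requestFrame(frame);
    }

    // keepRepainting(document.querySelector('.chrono-display'),
    //                window.mozRequestAnimationFrame);
    ```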

    Release to the public

    GitHub Pages are a rapid and simple option for releasing our app to the public, made even simpler by the volo-ghdeploy command:

      $ volo appcache && volo ghdeploy

    When you deploy your OpenWebApp in a subdirectory of a given domain (e.g. using GitHub Pages), you should take into account that the paths in manifest.webapp need to be relative to your origin (protocol+host+port) and not to your current URL:
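
    For example, with the app served from the gaia-chrono-app subdirectory of its origin, the manifest paths would look something like this (the field values are illustrative):

    ```json
    {
      "name": "Gaia Chrono App",
      "launch_path": "/gaia-chrono-app/index.html",
      "appcache_path": "/gaia-chrono-app/manifest.appcache",
      "icons": {
        "128": "/gaia-chrono-app/images/icon-128.png"
      }
    }
    ```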

    We can install only a single OpenWebApp per origin, so if you want to deploy more than one app from your GitHub Pages, you need to configure GitHub Pages to be exposed on a custom domain: GitHub Help – Setting up a custom domain with pages.

    When our app is finally online and publicly accessible, we can submit it to the Mozilla Marketplace to gain more visibility.

    During the app submission procedure, your manifest.webapp will be validated, and you will be warned if (and how) you need to tweak it to be able to complete your submission:

    • Errors related to missing info (e.g. name or icons)
    • Errors related to invalid values (e.g. orientation)

    As in other mobile marketplaces, you should collect and fill in for your submission:

    • The manifest.webapp URL (NOTE: it will be read-only in the developer panel and you can’t change it)
    • A longer description and feature list
    • Short release description
    • One or more screenshots

    Mini Market

    The Mozilla Marketplace’s goal is to help OpenWebApps gain more visibility, as other mobile stores currently do for their ecosystems, but Mozilla named this project OpenWebApps for a reason:

    Mozilla isn’t the only one that can create a Marketplace for OpenWebApps! Mozilla designed it to give us the same freedom we have on the Web, no more, no less.

    This is a very powerful feature because it opens up a lot of interesting use cases for developers:

    • Carrier Apps Market on Firefox OS devices
    • Non-Public Apps Installer/Manager
    • Intranet Apps Installer/Manager
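
    The last two use cases can be sketched as a catalog page that calls the same install API the Marketplace itself uses (the catalog entries and URLs below are invented placeholders):

    ```javascript
    // Minimal "mini market" sketch: an intranet catalog page that installs
    // OpenWebApps via navigator.mozApps.install (passed in as `mozApps`).
    var CATALOG = [
      { name: 'Internal Dashboard',
        manifest: 'http://apps.example.intra/dashboard/manifest.webapp' },
      { name: 'Timesheets',
        manifest: 'http://apps.example.intra/timesheets/manifest.webapp' }
    ];

    function installFromCatalog(mozApps, entry, done) {
      var request = mozApps.install(entry.manifest);
      request.onsuccess = function () { done(null, entry.name); };
      request.onerror = function () { done(this.error.name, entry.name); };
      return request;
    }

    // In the catalog page, one install button per CATALOG entry:
    // installFromCatalog(navigator.mozApps, CATALOG[0], console.log);
    ```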


    Obviously, Firefox OS and OpenWebApps aren’t fully complete right now (though they improve at an impressive speed), and Firefox OS doesn’t have an officially released SDK; but the web doesn’t have an official SDK either, and we already use it every day to do awesome stuff.

    So if you are interested in mobile platforms and want to learn how a mobile platform is born and grows, or you are a web developer and want more and more web technologies in mobile ecosystems…

    You should seriously take a look at Firefox OS:

    We deserve a more open mobile ecosystem; let’s start pushing for it now, and let’s help OpenWebApps and Firefox OS become our new powerful tools!

    Happy Hacking!

  9. Firefox OS – video presentations and slides on the OS, WebAPIs, hacking and writing apps

    In August, Mozilla’s Director of Research Andreas Gal, and one of the lead engineers for Firefox OS, Philipp von Weitershausen, gave a couple of presentations in Brazil about Firefox OS. We’re now happy to share both the videos and the slides, in various formats, for you to watch or to use when giving your own presentations!


    The videos are available on YouTube in the Mozilla Hacks channel, and they are split up into:

    Firefox OS – Introduction & Components

    Firefox OS – Developer Environment, Apps, Marketplace

    Firefox OS – WebAPIs & UI hacking

    Note: if you’ve opted in to HTML5 on YouTube you will get HTML5 video.


    Four different slide decks were used:

    They are also available in Portuguese:

    We’ve also made these slide decks available in other formats if you want to reuse them and give your own presentations about Firefox OS (and if you love talking about the Open Web, have you considered becoming an Evangelism Rep?).

    Keynote format

    PowerPoint format

  10. Tracking Down Memory Leaks in Node.js – A Node.JS Holiday Season

    This post is the first in the A Node.JS Holiday Season series from the Identity team at Mozilla, who last month delivered the first beta release of Persona. To make Persona, we built a collection of tools addressing areas ranging from debugging, to localization, to dependency management, and more. This series of posts will share our lessons and tools with the community, tools which are relevant to anyone building a high-availability service with Node.JS. We hope you enjoy the series, and we look forward to your thoughts and contributions.

    We’ll start off with a topic about a nitty-gritty Node.js problem: memory leaks. We present node-memwatch – a library to help discover and isolate memory leaks in Node.

    Why Bother?

    A fair question to ask about tracking down memory leaks is “Why bother?”. Aren’t there always more pressing problems that need to be tackled first? Why not just restart your service from time to time, or throw more RAM at it? In answer to these questions, we would suggest three things:

    1. You may not be worried about your increasing memory footprint, but V8 is. (V8 is the engine that Node runs on.) As leaks grow, V8 becomes increasingly aggressive about garbage collection, slowing your app down. So in Node, memory leaks hurt performance.
    2. Leaks can trigger other types of failure. Leaky code can hang on to references to limited resources. You may run out of file descriptors; you may suddenly be unable to open new database connections. Problems of this sort may emerge long before your app runs out of memory and still leave you dead in the water.
    3. Finally, sooner or later, your app will crash. And you can bet it will happen right at the moment when you’re getting popular. And then everybody will laugh and say mean things about you on Hacker News and you’ll be sad.

    Where’s That Dripping Sound Coming From?

    In the plumbing of a complex app, there are various places where leaks can occur. Closures are probably the most well-known and notorious. Because closures maintain references to things in their scope, they are common sources for leaks.

    Closure leaks will probably be spotted eventually if somebody’s looking for them, but in Node’s asynchronous world we generate closures all the time in the form of callbacks. If these callbacks are not handled as fast as they are created, memory allocations will build up and code that doesn’t look leaky will act leaky. That’s harder to spot.

    Your application could also leak due to a bug in upstream code. You may be able to track down the location in your code from where the leak is emanating, but you might just stare in bewilderment at your perfectly-written code wondering how in the world it can be leaking!

    It’s these hard-to-spot leaks that make us want a tool like node-memwatch. Legend has it that months ago, our Lloyd Hilaiel locked himself in a closet for two days, trying to track down a memory leak that became noticeable under heavy load testing. (BTW, look forward to Lloyd’s forthcoming post on load testing.)

    After two days of bisecting, he discovered that the culprit was in the Node core: Event listeners in http.ClientRequest were not getting cleaned up. (When this was eventually fixed in Node, the patch consisted of a subtle but crucial two characters.) It was this miserable experience that made Lloyd want to write a tool to help find leaks.

    Tools for Finding Leaks

    There is already a continually growing collection of good tools for finding leaks in Node.js applications. Here are some of them:

    • Jimb Esser’s node-mtrace, which uses the
      GCC mtrace utility to profile heap usage.
    • Dave Pacheco’s node-heap-dump takes a snapshot of the V8 heap and serializes the whole thing out in a huge JSON file. It includes tools to traverse and investigate
      the resulting snapshot in JavaScript.
    • Danny Coates’s v8-profiler and node-inspector provide Node bindings for the V8 profiler and a Node debugging interface using the WebKit Web Inspector.
    • Felix Gnass’s fork of the same that un-disables the retainers graph
    • Felix Geisendörfer’s Node Memory Leak Tutorial is a short and sweet explanation of how to use the v8-profiler and node-debugger, and is presently the state-of-the-art for most Node.js memory leak debugging.
    • Joyent’s SmartOS platform, which furnishes an arsenal of tools for debugging Node.js memory leaks

    We like all of these tools, but none was a perfect fit for our environment. The Web Inspector approach is fantastic for applications in development, but is difficult to use on a live deployment, especially when multiple servers and subprocesses are involved in the mix. As such, it may be difficult to reproduce memory leaks that bite in long-running and heavily loaded production environments. Tools like dtrace and libumem are truly awe-inspiring, but they don’t work on all operating systems.

    Enter node-memwatch

    We wanted a platform-independent debugging library requiring no instrumentation to tell us when our programs might be leaking memory, and help us find where they are leaking. So we wrote node-memwatch.

    It gives you three things:

    • A 'leak' event emitter

      memwatch.on('leak', function(info) {
        // look at info to find out what might be leaking
      });
    • A 'stats' event emitter

      var memwatch = require('memwatch');
      memwatch.on('stats', function(stats) {
        // do something with post-gc memory usage stats
      });
    • A heap diff class

      var hd = new memwatch.HeapDiff();
      // your code here ...
      var diff = hd.end();
    • And there is also a function to trigger garbage collection which can be
      useful in testing. Ok, four things.

      var stats = memwatch.gc();

    memwatch.on('stats', ...): Post-GC Heap Statistics

    node-memwatch can emit a sample of memory usage directly after a full garbage collection and memory compaction, before any new JS objects have been allocated. (It uses V8’s post-gc hook, V8::AddGCEpilogueCallback, to gather heap usage statistics every time GC occurs.)

    The stats data includes:

    • usage_trend
    • current_base
    • estimated_base
    • num_full_gc
    • num_inc_gc
    • heap_compactions
    • min
    • max

    Here’s an example that shows how this data looks over time with a leaky application. The graph below is tracking memory usage over time. The jagged green line shows what process.memoryUsage() reports, and the red line shows the current_base reported by node-memwatch. The box on the lower-left shows additional statistics.


    Note that the number of incremental GCs is very high. This is a warning sign that V8 is working overtime to try to clean up allocations.

    memwatch.on('leak', ...): Heap Allocation Trends

    We have a simple heuristic to warn you that your app may be leaky. If, over five consecutive GCs, you continue to allocate memory without releasing it, node-memwatch will emit a leak event. The message tells you in nice, human-readable form what’s going on:

    { start: Fri, 29 Jun 2012 14:12:13 GMT,
      end: Fri, 29 Jun 2012 14:12:33 GMT,
      growth: 67984,
      reason: 'heap growth over 5 consecutive GCs (20s) - 11.67 mb/hr' }

    memwatch.HeapDiff(): Finding Leaks

    Finally, node-memwatch can compare snapshots of object names and allocation counts on the heap. The resulting diff can help isolate offenders.

    var hd = new memwatch.HeapDiff();
    // Your code here ...
    var diff = hd.end();

    The contents of diff will look something like this:

      "before": {
        "nodes": 11625,
        "size_bytes": 1869904,
        "size": "1.78 mb"
      "after": {
        "nodes": 21435,
        "size_bytes": 2119136,
        "size": "2.02 mb"
      "change": {
        "size_bytes": 249232,
        "size": "243.39 kb",
        "freed_nodes": 197,
        "allocated_nodes": 10007,
        "details": [
            "what": "Array",
            "size_bytes": 66688,
            "size": "65.13 kb",
            "+": 4,
            "-": 78
            "what": "Code",
            "size_bytes": -55296,
            "size": "-54 kb",
            "+": 1,
            "-": 57
            "what": "LeakingClass",
            "size_bytes": 239952,
            "size": "234.33 kb",
            "+": 9998,
            "-": 0
            "what": "String",
            "size_bytes": -2120,
            "size": "-2.07 kb",
            "+": 3,
            "-": 62

    HeapDiff triggers a full GC before taking its samples, so the data won’t be full of junk. memwatch’s event emitters will not notify of HeapDiff GC events, so you can safely put HeapDiff calls in your 'stats' handler.
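
    Putting the two together, here is a sketch of a 'stats' handler that diffs the heap between consecutive GCs; the memwatch module is injected as a parameter so the helper is self-contained (in a real app you would pass require('memwatch')):

    ```javascript
    // Sketch: diff the heap between consecutive post-GC 'stats' events to
    // see which object types keep growing. `mw` is the memwatch module.
    function watchHeapGrowth(mw, report) {
      var hd = null;
      mw.on('stats', function () {
        if (hd) {
          var diff = hd.end();
          // sort by bytes grown to surface the biggest offender first
          var details = diff.change.details.slice().sort(function (a, b) {
            return b.size_bytes - a.size_bytes;
          });
          report(details[0]);
        }
        hd = new mw.HeapDiff(); // fresh baseline for the next interval
      });
    }

    // watchHeapGrowth(require('memwatch'), console.log);
    ```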

    In the graph below, we’ve added the objects with the most heap allocations:


    Where to Go From Here

    node-memwatch provides:

    • Accurate memory usage tracking
    • Notifications about probable leaks
    • A means to produce heap diffs
    • Cross-platform support
    • No extra instrumentation required

    We want it to do more. In particular, we want node-memwatch to be able to provide some examples of a leaked object (e.g., names of variables, array indices, or closure code).

    We hope you’ll find node-memwatch useful in debugging leaks in your Node app, and that you’ll fork the code and help us make it better.