Featured Articles

  1. Writing Web Apps Quickly With Mortar

    With the introduction of Firefox OS, development of an open apps marketplace, and a push to implement powerful web APIs for closer hardware integration, Mozilla is serious about web apps. We believe that the web can deliver an experience similar to native apps, even on mobile.

    We can’t forget about the most important piece, however: a good developer ecosystem. We need to provide various tools to make it easy for both beginners and experts to write web apps and deploy them.

    This is why we’ve created mortar, a collection of app templates that provide starting points for different kinds of apps and tools to manage and deploy them.

    A few weeks ago, Pierre Richard wrote a post here about his experiences getting started writing apps for Firefox OS. There’s some good stuff in his post, including his critiques of mortar. He felt mortar was too complex and introduced a different starting point for writing web apps for Firefox OS.

    This post will loosely respond to some of his points, introduce mortar, and explain why I think it is a valuable starting point for writing web apps for both beginners and experts. A detailed walkthrough of mortar is provided at the end in a screencast.

    Why Are Templates Useful?

    Mortar is more than just a bunch of HTML, CSS, and JavaScript. As described later in this post, it comes with various other tools for automating tasks, such as deployment and minifying and concatenating JavaScript and CSS. These tools are necessary when you want to release your app and maintain it.

    If you’re someone who really enjoys simplicity, you might initially find mortar too complex. I’m definitely someone who wants to boil things down to their simplest form, rejecting unnecessary complexity. We’ve worked hard to include only things in mortar that prove genuinely useful when building and deploying a web app.

    So I encourage you to learn about the tools mortar comes with, especially if you think you don’t need them. Mortar is our expression of good web development practices, so it’s also a chance to learn about modern tools and good ways to build web apps. Just as Django or Ruby on Rails may seem complex at first, the initial investment pays off later on.

    The Technology Within Mortar

    Mortar templates are pre-built to help you write web apps quickly. These aren’t specifically Firefox OS apps. We want you to build web apps that can run anywhere the web is. Firefox OS is a good place for a web app, but the mortar templates aren’t specifically targeting Firefox OS (even if some of the more advanced templates borrow a few styles from it).

    All of the mortar templates come with the following:

    • A project structure (folders for CSS, JS, etc.)
    • Some initial well-formed HTML, JavaScript, and files like manifest.webapp
    • require.js to manage JavaScript modules
    • volo for a development server, CSS/JS optimization, deployment, and other tasks
    • Some initial JS libraries: zepto, Mozilla’s receipt verifier library, and a cross-platform library to install web apps

    In addition, an installation button is provided (but commented out by default) in case you want to test your app as an installed app (or let users install it outside of an app store). All you have to do is uncomment one line of HTML. This, in combination with the pre-built manifest.webapp file, makes it easy to build an app.
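    For reference, a minimal manifest.webapp looks something like the sketch below. The field names are from the Open Web App manifest format; the values are placeholders, and the template’s actual manifest may include additional fields:

```json
{
  "name": "My App",
  "description": "A short description of what the app does",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "http://example.com"
  }
}
```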

    Here’s what your app looks like when you start it from a template:

    Mortar-app-stub repo on Github

    All of the static files, like HTML, CSS, and JavaScript, are under the www directory. We live one level up from this so that we can use tools like volo and grunt, and if you need to add a server-side component it’s easier. The node_modules and tools directories are only for volo, and you can ignore them.

    We chose require.js, a JavaScript module system, because it’s good practice to write JavaScript in modules. It’s not an obscure library; it implements the Asynchronous Module Definition (AMD) standard for writing modules, which is extremely well-known in the JavaScript community. The module you should start writing JavaScript in is at www/js/app.js.

    Besides managing your JavaScript dependencies, modules let us easily concatenate and minify all your JavaScript into one file, with zero extra work on your part. In fact, it’s so easy that it’s wrapped up in a volo command for you. Just type volo build in your project and all your JavaScript is concatenated into one file and minified (the same happens with your CSS).

    Volo is a tool much like grunt which simply automates tasks at the command line. We use volo because it integrates nicely with require.js. An example command is volo serve, which fires up a development server for your app. See the documentation for the other available volo commands.

    Volo lets you easily deploy your site to github. After customizing www/manifest.webapp for your app, simply do:

    1. volo build
    2. volo ghdeploy

    Your optimized app is now running live on github, and is ready to submit to the marketplace!

    Games and Layouts

    Everything described above is applicable to all apps, but we also want to help people who want to write specific kinds of apps. This is why we’ve created multiple templates, each catering to a specific need:

    • mortar-app-stub: a minimal template that only has a little pre-built HTML (a blank canvas for any type of app)
    • mortar-game-stub: a minimal template that comes with some basic game components
    • mortar-list-detail: a template like app-stub but also includes a layouts library and a basic list-detail type interface
    • mortar-tab-view: a template like list-detail but the default interface is a tabbed view

    We have a blank canvas (app-stub), a game template (game-stub), and two other templates that come with pre-built layouts.

    This is what the game template looks like. It looks pretty boring, but it’s rendered using requestAnimationFrame and you can control the box with the arrow keys or WASD. In the future, we might include image loading and basic collision detection.

    mortar-game-stub template

    The list-detail template is below. It comes with a pre-built layout that implements a list-detail interaction where you can touch a list item and drill down into the details. You can also add new items. This data-driven functionality is powered by backbone.js. We are still improving the design. (The tab-view template looks similar to the list-view shown here.)

    List detail in Mortar (screenshot)

    The layouts library is mortar-layouts, and it’s something we came up with to make it really easy to build application UIs. To me, this might be the most exciting part of mortar. I won’t go into too much detail here, but it uses x-tags and other new HTML5/CSS3 features to bring a native-feeling application UI to the web. View documentation and a live demo (and the source for the demo).

    The layouts library and the two templates that come with layouts are in beta, so keep in mind there are minor bugs. The biggest problem is the lack of design, and we will be working on integrating better styles that reflect Firefox OS (but are easily customizable with CSS!).

    Future Work for Mortar

    Mortar is relatively new, and it needs time to be fully documented and fleshed out. We are working on this actively.

    We are also fine-tuning our templates to figure out what the best starting points are for each one. Admittedly, having the big blue install button be the default screen for the app-stub was a mistake, which has been fixed. Now you get a short helpful message with links of what else you can do.

    Most of the work we have left is with the list-detail and tab-view templates, and figuring out how to deliver useful layouts to developers. We don’t want to force a big framework on them, but feel strongly that we need to help them develop application UIs.

    We love feedback, so feel free to file an issue to get a conversation started!

    Screencast

    I made a screencast which walks through mortar in detail and shows you how to use require.js, volo, and other things that it comes with. Watch this if you are interested in more details!

  2. Join Us for Firefox OS App Days

    If you’re a developer interested in web technologies, I’d like to invite you to participate in Firefox OS App Days, a worldwide set of 20+ hack days organized by Mozilla to help you get started developing apps for Firefox OS.

    At each App Day event, you’ll have the opportunity to learn, hack and celebrate Firefox OS, Mozilla’s open source operating system for the mobile web. Technologists and developers from Mozilla will present tools and technology built to extend and support the Web platform, including mobile Web APIs to access device hardware features such as the accelerometer. We’ll also show you how to use the browser-based Firefox OS Simulator to view and test mobile apps on the desktop.

    Firefox OS App Days are a chance to kick start creation of apps for the Firefox Marketplace, and represent a great opportunity to build new apps or optimize existing HTML5 apps for Firefox OS, as well as demo your projects to an audience of peers, tech leaders and innovators.

    We’re excited to be working with our Mozilla Reps, who are helping organize these events, and with our partners Deutsche Telekom and Telefónica, who are supporting a number of them across the world.

    We look forward to seeing you there!

    Event Details

    The agenda for these all-day events will be customized for each locale and venue, but a typical schedule might include:

    • 08:30 – 09:30 Registration. Light breakfast.
    • 09:30 – 11:30 Firefox OS, Firefox Marketplace & Mozilla Apps Ecosystem.
    • 11:30 – 12:00 Video. Q&A.
    • 12:00 – 13:00 Lunch
    • 13:00 – 17:00 App hacking.
    • 17:30 – 19:00 Demos & party.

    Signing Up

    Firefox OS App Days launch on 19 January and continue through 2 February, with the majority of the events taking place on 26 January. This wiki page has a master list of all the events and their registration forms, from Sao Paulo to Warsaw to Nairobi to Wellington — and many more. Find the App Day nearest you and register. (N.B. Venue capacities vary, but most are limited to 100 attendees so don’t delay.)

    Getting Ready

    Plan on bringing your Linux, Mac or Windows development machine and an idea for an app you’d like to develop for the Firefox Marketplace. If you have an Android device, bring it along, too. You can see the Firefox Marketplace in action on the Aurora version of Firefox for Android.

    If you want to get started before the event, review the material below and bring an HTML5 app that you’ve begun and want to continue, get feedback on or recruit co-developers for.

  3. NORAD Tracks Santa

    This year, Open Web standards like WebGL, Web Workers, Typed Arrays, Fullscreen, and more will have a prominent role in NORAD’s annual mission to track Santa Claus as he makes his journey around the world. That’s because Analytical Graphics, Inc. used Cesium as the basis for the 3D Track Santa application.

    Cesium is an open source library that uses JavaScript, WebGL, and other web technologies to render a detailed, dynamic, and interactive virtual globe in a web browser, without the need for a plugin. Terrain and imagery datasets measured in gigabytes or terabytes are streamed to the browser on demand, and overlaid with lines, polygons, placemarks, labels, models, and other features. These features are accurately positioned within the 3D world and can efficiently move and change over time. In short, Cesium brings to the Open Web the kind of responsive, geospatial experience that was uncommon even in bulky desktop applications just a few years ago.

    The NORAD Tracks Santa web application goes live on December 24. Cesium, however, is freely available today for commercial and non-commercial use under the Apache 2.0 license.

    In this article, I’ll present how Cesium uses cutting edge web APIs to bring an exciting in-browser experience to millions of people on December 24.

    The locations used in the screenshots of the NORAD Tracks Santa application are based on test data. We, of course, won’t know Santa’s route until NORAD starts tracking him on Christmas Eve. Also, the code samples in this article are for illustrative purposes and do not necessarily reflect the exact code used in Cesium. If you want to see the official code, check out our GitHub repo.

    WebGL

    Cesium could not exist without WebGL, the technology that brings hardware-accelerated 3D graphics to the web.

    It’s hard to overstate the potential of this technology to bring a whole new class of scientific and entertainment applications to the web; Cesium is just one realization of that potential. With WebGL, we can render scenes like the above, consisting of hundreds of thousands of triangles, at well over 60 frames per second.

    Yeah, you could say I’m excited.

    If you’re familiar with OpenGL, WebGL will seem very natural to you. To oversimplify a bit, WebGL enables applications to draw shaded triangles really fast. For example, from JavaScript, we execute code like this:

    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.drawElements(gl.TRIANGLES, numberOfIndices, gl.UNSIGNED_SHORT, 0);

    vertexBuffer is a previously-configured data structure holding vertices, or corners of triangles. A simple vertex just specifies the position of the vertex as X, Y, Z coordinates in 3D space. A vertex can have additional attributes, however, such as colors and the vertex’s coordinates within a 2D image for texture mapping.

    The indexBuffer links the vertices together into triangles. It is a list of integers where each integer specifies the index of a vertex in the vertexBuffer. Each triplet of indices specifies one triangle. For example, if the first three indices in the list are [0, 2, 1], the first triangle is defined by linking up vertices 0, 2, and 1.
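    To make the vertex/index relationship concrete, here is a small standalone sketch (plain JavaScript with made-up data, not Cesium code) that describes a quad as two triangles sharing two of their vertices:

```javascript
// Four corners of a unit quad: X, Y, Z per vertex.
var vertices = new Float32Array([
    0, 0, 0,   // vertex 0
    1, 0, 0,   // vertex 1
    1, 1, 0,   // vertex 2
    0, 1, 0    // vertex 3
]);

// Two triangles that share vertices 0 and 2, so only four
// vertices need to be stored instead of six.
var indices = new Uint16Array([
    0, 1, 2,   // first triangle
    0, 2, 3    // second triangle
]);

console.log(vertices.length / 3); // 4 vertices
console.log(indices.length / 3);  // 2 triangles
```

    Sharing vertices through the index buffer is what keeps large meshes, like terrain tiles, compact.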

    The drawElements call instructs WebGL to draw the triangles defined by the vertex and index buffers. The really cool thing is what happens next.

    For every vertex in vertexBuffer, WebGL executes a program, called a vertex shader, that is supplied by the JavaScript code. Then, WebGL figures out which pixels on the screen are “lit up” by each triangle – a process called rasterization. For each of these pixels, called fragments, another program, a fragment shader, is invoked. These programs are written in a C-like language called GLSL that executes on the system’s Graphics Processing Unit (GPU). Thanks to this low-level access and the massive parallel computation capability of GPUs, these programs can do sophisticated work very quickly, creating impressive visual effects. This is all the more remarkable when you consider that they are executed hundreds of thousands or millions of times per render frame.
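    For illustration, here is about the simplest possible shader pair – generic GLSL, not Cesium’s actual shaders. The vertex shader transforms each vertex by a matrix supplied from JavaScript:

```glsl
attribute vec3 position;
uniform mat4 modelViewProjection;

void main() {
    gl_Position = modelViewProjection * vec4(position, 1.0);
}
```

    And the fragment shader colors every covered pixel solid red:

```glsl
precision mediump float;

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```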

    Cesium’s fragment shaders approximate atmospheric scattering, simulate ocean waves, model the reflection of the sun off the ocean surface, and more.

    WebGL is well supported in modern browsers on Windows, Linux and Mac OS X. Even Firefox for Android supports WebGL!

    While I’ve shown direct WebGL calls in the code above, Cesium is actually built on a renderer that raises the level of abstraction beyond WebGL itself. We never issue drawElements calls directly, but instead create command objects that represent the vertex buffers, index buffers, and other data with which to draw. This allows the renderer to automatically and elegantly solve esoteric rendering problems like the insufficient depth buffer precision for a world the size of Earth. If you’re interested, you can read more about Cesium’s data-driven renderer.

    For more information about some of the neat rendering effects used in the NORAD Tracks Santa application, take a look at our blog post on the subject.

    Typed Arrays and Cross-Origin Resource Sharing

    Virtual globes like Cesium provide a compelling, interactive 3D view of real-world situations by rendering a virtual Earth combined with georeferenced data such as roads, points of interest, weather, satellite orbits, or even the current location of Santa Claus. At the core of a virtual globe is the rendering of the Earth itself, with realistic terrain and satellite imagery.

    Terrain describes the shape of the surface: the mountain peaks, the hidden valleys, the wide open plains, and everything in between. Satellite or aerial imagery is then overlaid on this otherwise colorless surface and brings it to life.

    The global terrain data used in the NORAD Tracks Santa application is derived from the Shuttle Radar Topography Mission (SRTM), which has a 90-meter spacing between -60 and 60 degrees latitude, and the Global 30 Arc Second Elevation Data Set (GTOPO30), which has 1-kilometer spacing for the entire globe. The total size of the dataset is over 10 gigabytes.

    For imagery, we use Bing Maps, which is also part of the NORAD Tracks Santa team. The total size of this dataset is even bigger – easily in the terabytes.

    With such enormous datasets, it is clearly impractical to transfer all of the terrain and imagery to the browser before rendering a scene. For that reason, both datasets are broken up into millions of individual files, called tiles. As Santa flies around the world, Cesium downloads new terrain and imagery tiles as they are needed.

    Terrain tiles describing the shape of the Earth’s surface are binary data encoded in a straightforward format. When Cesium determines that it needs a terrain tile, we download it using XMLHttpRequest and access the binary data using typed arrays:

    var tile = ...
    var xhr = new XMLHttpRequest();
    xhr.open('GET', terrainTileUrl, true);
    xhr.responseType = 'arraybuffer';

    xhr.onload = function(e) {
        if (xhr.status === 200) {
            var tileData = xhr.response;
            tile.heights = new Uint16Array(tileData, 0, heightmapWidth * heightmapHeight);
            var heightsBytes = tile.heights.byteLength;
            tile.childTileBits = new Uint8Array(tileData, heightsBytes, 1)[0];
            tile.waterMask = new Uint8Array(tileData, heightsBytes + 1, tileData.byteLength - heightsBytes - 1);
            tile.state = TileState.RECEIVED;
        } else {
            // ...
        }
    };

    xhr.send();

    Prior to the availability of typed arrays, this process would have been much more difficult. The usual course was to encode the data as text in JSON or XML format. Not only would such data be larger when sent over the wire(less), it would also be significantly slower to process once received.

    While it is generally very straightforward to work with terrain data using typed arrays, two issues make it a bit trickier.

    The first is cross-origin restrictions. It is very common for terrain and imagery to be hosted on different servers than are used to host the web application itself, and this is certainly the case in NORAD Tracks Santa. XMLHttpRequest, however, does not usually allow requests to non-origin hosts. The common workaround of using script tags instead of XMLHttpRequest won’t work well here because we are downloading binary data – we can’t use typed arrays with JSONP.

    Fortunately, modern browsers offer a solution to this problem by honoring Cross-Origin Resource Sharing (CORS) headers, included in the response by the server, indicating that the response is safe for use across hosts. Enabling CORS is easy to do if you have control over the web server, and Bing Maps already includes the necessary headers on their tile files. Other terrain and imagery sources that we’d like to use in Cesium are not always so forward-thinking, however, so we’ve sometimes been forced to route cross-origin requests through a same-origin proxy.
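    In the simplest case, a server that wants its tiles usable from any origin just includes a response header like:

```
Access-Control-Allow-Origin: *
```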

    The other tricky aspect is that modern browsers only allow up to six simultaneous connections to a given host. If we simply created a new XMLHttpRequest for each tile requested by Cesium, the number of queued requests would grow large very quickly. By the time a tile was finally downloaded, the viewer’s position in the 3D world may have changed so that the tile is no longer even needed.

    Instead, we manually limit ourselves to six outstanding requests per host. If all six slots are taken, we won’t start a new request. Instead, we’ll wait until next render frame and try again. By then, the highest priority tile may be different than it was last frame, and we’ll be glad we didn’t queue up the request then. One nice feature of Bing Maps is that it serves the same tiles from multiple hostnames, which allows us to have more outstanding requests at once and to get the imagery into the application faster.
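    A simplified sketch of this per-host throttling is shown below. This is illustrative only, not Cesium’s actual code; the names (tryStartRequest, activeRequests) are made up:

```javascript
var MAX_REQUESTS_PER_HOST = 6;
var activeRequests = {}; // hostname -> number of requests in flight

// Claim a request slot for the host if one is free. Returns false
// if all slots are taken, meaning the caller should retry on the
// next render frame rather than queue the request.
function tryStartRequest(hostname) {
    var active = activeRequests[hostname] || 0;
    if (active >= MAX_REQUESTS_PER_HOST) {
        return false;
    }
    activeRequests[hostname] = active + 1;
    return true;
}

// Free the slot when the request completes or fails.
function finishRequest(hostname) {
    activeRequests[hostname] -= 1;
}
```

    Because nothing is queued, the decision about which tile to fetch is always made with the camera’s current position, not a stale one.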

    Web Workers

    The terrain data served to the browser is, primarily, just an array of terrain heights. In order to render it, we need to turn the terrain tile into a triangle mesh with a vertex and index buffer. This process involves converting longitude, latitude, and height to X, Y, and Z coordinates mapped to the surface of the WGS84 ellipsoid. Doing this once is pretty fast, but doing it for each height sample, of which each tile has thousands, starts to take some measurable time. If we did this conversion for several tiles in a single render frame, we’d definitely start to see some stuttering in the rendering.
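    The conversion itself is the standard geodetic-to-Cartesian formula. Here is a sketch of the kind of computation the worker performs for each height sample (not Cesium’s actual code, which is more general and more optimized):

```javascript
// WGS84 ellipsoid constants.
var a = 6378137.0;             // semi-major axis, in meters
var e2 = 6.69437999014e-3;     // first eccentricity squared

// Convert geodetic longitude/latitude (radians) and height (meters)
// to Earth-fixed X, Y, Z coordinates relative to the ellipsoid.
function geodeticToCartesian(longitude, latitude, height) {
    var cosLat = Math.cos(latitude);
    var sinLat = Math.sin(latitude);
    // Radius of curvature in the prime vertical.
    var N = a / Math.sqrt(1.0 - e2 * sinLat * sinLat);
    return {
        x: (N + height) * cosLat * Math.cos(longitude),
        y: (N + height) * cosLat * Math.sin(longitude),
        z: (N * (1.0 - e2) + height) * sinLat
    };
}
```

    A handful of trig calls per sample is cheap in isolation; it is only the thousands of samples per tile that make batching this work off the main thread worthwhile.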

    One solution is to throttle tile conversion, doing at most N per render frame. While this would help with the stuttering, it doesn’t avoid the fact that tile conversion competes with rendering for CPU time while other CPU cores sit idle.

    Fortunately, another great new web API comes to the rescue: Web Workers.

    We pass the terrain ArrayBuffer downloaded from the remote server via XMLHttpRequest to a Web Worker as a transferable object. When the worker receives the message, it builds a new typed array with the vertex data in a form ready to be passed straight to WebGL. Unfortunately, Web Workers are not yet allowed to invoke WebGL, so we can’t create vertex and index buffers in the Web Worker; instead, we post the typed array back to the main thread, again as a transferable object.

    The beauty of this approach is that terrain data conversion happens asynchronously with rendering, and that it can take advantage of the client system’s multiple cores, if available. This leads to a smoother, more interactive Santa tracking experience.

    Web Workers are simple and elegant, but that simplicity presents some challenges for an engine like Cesium, which is designed to be useful in various different types of applications.

    During development, we like to keep each class in a separate .js file, for ease of navigation and to avoid the need for a time-consuming combine step after every change. Each class is actually a separate module, and we use the Asynchronous Module Definition (AMD) API and RequireJS to manage dependencies between modules at runtime.

    For use in production environments, it is a big performance win to combine the hundreds of individual files that make up a Cesium application into a single file. This may be a single file for all of Cesium or a user-selected subset. It may also be beneficial to combine parts of Cesium into a larger file containing application-specific code, as we’ve done in the NORAD Tracks Santa application. Cesium supports all of these use-cases, but the interaction with Web Workers gets tricky.

    When an application creates a Web Worker, it provides to the Web Worker API the URL of the .js file to invoke. The problem is, in Cesium’s case, that URL varies depending on which of the above use-cases is currently in play. Worse, the worker code itself needs to work a little differently depending on how Cesium is being used. That’s a big problem, because workers can’t access any information in the main thread unless that information is explicitly posted to it.

    Our solution is the cesiumWorkerBootstrapper. Regardless of what the WebWorker will eventually do, it is always constructed with cesiumWorkerBootstrapper.js as its entry point. The URL of the bootstrapper is deduced by the main thread where possible, and can be overridden by user code when necessary. Then, we post a message to the worker with details about how to actually dispatch work.

    var worker = new Worker(getBootstrapperUrl());

    // bootstrap
    var bootstrapMessage = {
        loaderConfig : {},
        workerModule : 'Workers/' + processor._workerName
    };

    if (typeof require.toUrl !== 'undefined') {
        bootstrapMessage.loaderConfig.baseUrl = '..';
    } else {
        bootstrapMessage.loaderConfig.paths = {
            'Workers' : '.'
        };
    }
    worker.postMessage(bootstrapMessage);

    The worker bootstrapper contains a simple onmessage handler:

    self.onmessage = function(event) {
        var data = event.data;
        require(data.loaderConfig, [data.workerModule], function(workerModule) {
            // replace onmessage with the required-in workerModule
            self.onmessage = workerModule;
        });
    };

    When the bootstrapper receives the bootstrapMessage, it uses the RequireJS implementation of require, which is also included in cesiumWorkerBootstrapper.js, to load the worker module specified in the message. It then “becomes” the new worker by replacing its onmessage handler with the required-in one.

    In use-cases where Cesium itself is combined into a single .js file, we also combine each worker into its own .js file, complete with all of its dependencies. This ensures that each worker needs to load only two .js files: the bootstrapper plus the combined module.

    Mobile Devices

    One of the most exciting aspects of building an application like NORAD Tracks Santa on web technologies is the possibility of achieving portability across operating systems and devices with a single code base. All of the technologies used by Cesium are already well supported on Windows, Linux, and Mac OS X on desktops and laptops. Increasingly, however, these technologies are becoming available on mobile devices.

    The most stable implementation of WebGL on phones and tablets is currently found in Firefox for Android. We tried out Cesium on several devices, including a Nexus 4 phone and a Nexus 7 tablet, both running Android 4.2.1 and Firefox 17.0. With a few tweaks, we were able to get Cesium running, and the performance was surprisingly good.

    We did encounter a few problems, however, presumably a result of driver bugs. One problem was that normalizing vectors in fragment shaders sometimes simply does not work. For example, GLSL code like this:

    vec3 normalized = normalize(someVector);

    sometimes results in a normalized vector that still has a length greater than 1. Fortunately, this is easy to work around by adding another call to normalize:

    vec3 normalized = normalize(normalize(someVector));

    We hope that as WebGL gains more widespread adoption on mobile, bugs like this will be detected by the WebGL conformance tests before devices and drivers are released.

    The Finished Application

    As long-time C++ developers, we were initially skeptical of building a virtual globe application on the Open Web. Would we be able to do all the things expected of such an application? Would the performance be good?

    I’m pleased to say that we’ve been converted. Modern web APIs like WebGL, Web Workers, and Typed Arrays, along with the continual and impressive gains in JavaScript performance, have made the web a convenient, high-performance platform for sophisticated 3D applications. We’re looking forward to continuing to use Cesium to push the limits of what is possible in a browser, and to take advantage of new APIs and capabilities as they become available.

    We’re also looking forward to using this technology to bring a fun, 3D Santa tracking experience to millions of kids worldwide this Christmas as part of the NORAD Tracks Santa team. Check it out on December 24 at www.noradsanta.org.

  4. Fantastic front-end performance Part 1 – Concatenate, Compress & Cache – A Node.JS Holiday Season, part 4

    This is episode 4, out of a total 12, in the A Node.JS Holiday Season series from Mozilla’s Identity team. It’s the first post about how to achieve better front-end performance.

    In this part of our “A Node.JS Holiday Season” series we’ll talk about front-end performance and introduce you to tools we’ve built and use in Mozilla to make the Persona front-end be as fast as possible.

    We’ll talk about connect-cachify, a tool to automate some of the most important parts of front-end performance.

    Before we do that, though, let’s quickly recap what we can do as developers to make our solutions run as smoothly as possible on our users’ machines. If you already know all about performance optimisations, feel free to proceed to the end and see how connect-cachify helps automate some of the things you might do by hand right now.

    Three Cs of client side performance

    The web is full of information related to performance best practices. While many advanced techniques exist to tweak every last millisecond from your site, three basic tools should form the foundation – concatenate, compress and cache.

    Concatenation

    The goal of concatenation is to minimize the number of requests made to the server. Server requests are costly. The amount of time needed to establish an HTTP connection is sometimes more expensive than the amount of time necessary to transfer the data itself. Every request adds to the overhead that it takes to view your site and can be especially problematic on mobile devices where there is significant connection latency. Have you ever browsed to a shopping site on your mobile phone while connected to the Edge network and grimaced as each image loaded one by one? That is connection latency rearing its head.

    SPDY is a new protocol built on top of HTTP that aims to reduce page load time by combining resource requests into a single HTTP connection. Unfortunately, at the present time only recent versions of Firefox, Chrome and Opera support this new protocol.

    Combining external resources wherever possible, though old fashioned, works across all browsers and does not degrade with the advent of SPDY. Tools exist to combine the three most common types of external resources – JavaScript, CSS and images.

    JavaScript & CSS

    A site with more than one external JavaScript inclusion should consider combining the scripts into a single file for production. Browsers have traditionally blocked all other rendering while JavaScript is downloaded and processed. Since each requested JavaScript resource carries with it a certain amount of latency, the following is slower than it needs to be:

      <script src="jquery.min.js"></script>
      <script src="main.js"></script>
      <script src="image-carousel.js"></script>
      <script src="widget.js"></script>

    By combining four requests into one, the total amount of time the browser is blocked due to latency will be significantly reduced.

      <script src="main.production.js"></script>

    Working with combined JavaScript while still in development can be very difficult so concatenation is usually only done for a production site.

    Like JavaScript, individual CSS files should be combined into a single resource for production. The process is the same.

    Images

    Data URIs and image sprites are the two primary methods that exist to reduce the number of requested images.

    data: URI

    A data URI is a special form of a URL used to embed images directly into HTML or CSS. Data URIs can be used in either the src attribute of an img tag or as the url value of a background-image in CSS. Because embedded images are base64 encoded, they require more bytes but one less HTTP request than the original external binary image. If the included image is small, the increased byte size is usually more than offset by the reduction in the number of HTTP requests. Neither IE6 nor IE7 support data URIs so know your target audience before using them.
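    To make the base64 trade-off concrete, here is a small sketch of how a data URI is constructed; the three ASCII bytes "GIF" stand in for a real image file, and the helper name is made up.

```javascript
// Sketch: embedding an image as a data: URI instead of a separate request.
function toDataURI(bytes, mimeType) {
  // base64 inflates the payload by roughly a third, which is why
  // inlining only pays off for small images.
  return 'data:' + mimeType + ';base64,' + bytes.toString('base64');
}

// Placeholder bytes: just the ASCII "GIF" header, not a full image.
var bytes = Buffer.from('GIF', 'ascii');
console.log(toDataURI(bytes, 'image/gif'));
// → data:image/gif;base64,R0lG
```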

    Image sprites

    Image sprites are a great alternative whenever a data URI cannot be used. An image sprite is a collection of images combined into a single larger image. Once the images are combined, CSS is used to show only the relevant portion of the sprite. Many tools exist to create a sprite out of a collection of images.

    A drawback to sprites comes in the form of maintenance. The addition, removal or modification of an image within the sprite requires a corresponding change to the CSS.

    Sprite Cow helps you get the background-position, width and height of sprites within a spritesheet as a nice bit of copyable css.

    Removing extra bytes – minification, optimization & compression

    Combining resources to minimize the number of HTTP requests goes a long way to speeding up a site, but we can still do more. After combining resources, the number of bytes that are transferred to the user should be minimized. Minimizing bytes usually takes the form of minification, optimization and compression.

    JavaScript & CSS

    JavaScript and CSS are text resources that can be effectively minified. Minification is a process that transforms the original text by eliminating anything that is irrelevant to the browser. Transformations to both JavaScript and CSS start with the removal of comments and extra whitespace. JavaScript minification is much more complex. Some minifiers perform transforms that replace multi-character variable names with a single character, remove language constructs that are not strictly necessary and even go so far as to replace entire statements with shorter equivalent statements.

    UglifyJS, YUICompressor and Google Closure Compiler are three popular tools to minify JavaScript.

    Two CSS minifiers include YUICompressor and UglifyCSS.
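    To illustrate the kinds of transforms involved, here is a hand-"minified" function next to its readable original. A real minifier such as UglifyJS applies these rewrites automatically; the point is that both versions must behave identically.

```javascript
// Original, readable form:
function addTax(price, taxRate) {
  var total = price + (price * taxRate);
  return total;
}

// After minification-style transforms: comments and whitespace stripped,
// variable names shortened, the temporary variable eliminated.
function a(b,c){return b+b*c}

console.log(addTax(100, 0.15), a(100, 0.15));
// → 115 115
```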

    Images

    Images frequently contain data that can be removed without affecting their visual quality. Removing these extra bytes is not difficult, but does require specialized image handling tools. Our own Francois Marier has written two blog posts on working with PNGs and with GIFs.

    Smush.it from Yahoo! is an online optimization tool. ImageOptim is an equivalent offline tool for OSX – simply drag and drop your images into the tool and it will reduce their size automatically. You don’t need to do anything – ImageOptim simply replaces the original files with the much smaller ones.

    If a loss of visual quality is acceptable, re-compressing an image at a higher compression level is an option.

    The Server Can Help Too!

    Even after combining and minifying resources, there is more. Almost all servers and browsers support HTTP compression. The two most popular compression schemes are deflate and gzip. Both of these make use of efficient compression algorithms to reduce the number of bytes before they ever leave the server.

    Caching

    Concatenation and compression help first time visitors to our sites. The third C, caching, helps visitors that return. A user who returns to our site should not have to download all of the resources again. HTTP provides two widely adopted mechanisms to make this happen, cache headers and ETags.

    Cache headers come in two forms and are suitable for static resources that change infrequently, if ever. The two header options are Expires and Cache-Control: max-age. The Expires header specifies the date after which the resource must be re-requested. max-age specifies how many seconds the resource is valid for. If a resource has a cache header, the browser will only re-request that resource once the cache expiration date has passed.

    An ETag is essentially a resource version tag that is used to validate whether the local version of a resource is the same as the server’s version. An ETag is suitable for dynamic content or content that can change at any time. When a resource has an ETag, it says to the browser “Check the server to see if the version is the same, if it is, use the version you already have.” Because an ETag requires interaction with the server, it is not as efficient as a fully cached resource.

    Cache-busting

    The advantage to using time/date based cache-control headers instead of ETags is that resources are only re-requested once the cache has expired. This is also its biggest drawback. What happens if a resource changes? The cache has to somehow be busted.

    Cache-busting is usually done by adding a version number to the resource URL. Any change to a resource’s URL causes a cache-miss which in turn causes the resource to be re-downloaded.

    For example if http://example.com/logo.png has a cache header set to expire in one year but the logo changes, users who have already downloaded the logo will only see the update a year from now. This can be fixed by adding some sort of version identifier to the URL.

    http://example.com/v8125/logo.png

    or

    http://example.com/logo.png?v8125

    When the logo is updated, a new version is used meaning the logo will be re-requested.

    http://example.com/v8126/logo.png

    or

    http://example.com/logo.png?v8126
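    A cache-busting helper only needs to splice a version identifier into the URL. Here is a sketch of the path-based variant; the function name is made up, and connect-cachify (described below) derives the identifier from a content hash instead of a hand-maintained number.

```javascript
// Sketch: insert a version segment before the file name in a URL.
function addVersion(url, version) {
  var slash = url.lastIndexOf('/');
  // /logo.png + 8125 -> /v8125/logo.png
  return url.slice(0, slash) + '/v' + version + url.slice(slash);
}

console.log(addVersion('http://example.com/logo.png', 8125));
// → http://example.com/v8125/logo.png
```

    The path-based form is often preferred over a query string because some intermediate caches refuse to cache URLs containing "?".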

    Connect-cachify – A NodeJS library to serve concatenated and cached resources

    Connect-cachify is a NodeJS middleware developed by Mozilla that makes it easy to serve up properly concatenated and cached resources.

    While in production mode, connect-cachify serves up pre-generated production resources with a cache expiration of one year. If not in production mode, individual development resources are served instead, making debugging easy. Connect-cachify does not perform concatenation and minification itself but instead relies on you to do this in your project’s build script.

    Configuration of connect-cachify is done through the setup function. Setup takes two parameters, assets and options. assets is a dictionary of production to development resources. Each production resource maps to a list of its individual development resources.

    options is an optional dictionary that can take the following values:

    • prefix – String to prepend to the hash in links. (Default: none)
    • production – Boolean indicating whether to serve development or production resources. (Default: true)
    • root – The fully qualified path from which static resources are served. This is the same value that you’d send to the static middleware. (Default: ‘.’)

    Example of connect-cachify in action

    First, let’s assume we have a simple HTML file we wish to use with connect-cachify. Our HTML file includes three CSS resources as well as three JavaScript resources.

    ...
    <head>
      <title>Dashboard: Hamsters of North America</title>
      <link rel="stylesheet" type="text/css" href="/css/reset.css" />
      <link rel="stylesheet" type="text/css" href="/css/common.css" />
      <link rel="stylesheet" type="text/css" href="/css/dashboard.css" />
    </head>
    <body>
      ...
      <script type="text/javascript" src="/js/lib/jquery.js"></script>
      <script type="text/javascript" src="/js/magick.js"></script>
      <script type="text/javascript" src="/js/laughter.js"></script>
    </body>
    ...

    Set up the middleware

    Next, include the connect-cachify library in your NodeJS server. Create your production to development resource map and configure the middleware.

    ...
    // Include connect-cachify
    const cachify = require('connect-cachify');
     
    // Create a map of production to development resources
    var assets = {
      "/js/main.min.js": [
        '/js/lib/jquery.js',
        '/js/magick.js',
        '/js/laughter.js'
      ],
      "/css/dashboard.min.css": [
        '/css/reset.css',
        '/css/common.css',
        '/css/dashboard.css'
      ]
    };
     
    // Hook up the connect-cachify middleware
    app.use(cachify.setup(assets, {
      root: __dirname,
      production: your_config['use_minified_assets'],
    }));
    ...

    To keep code DRY, the asset map can be externalized into its own file and used as configuration to both connect-cachify and your build script.

    Update your templates to use cachify

    Finally, your templates must be updated to indicate where production JavaScript and CSS should be included. JavaScript is included using the “cachify_js” helper whereas CSS uses the “cachify_css” helper.

    ...
    <head>
      <title>Dashboard: Hamsters of North America</title>
      <%- cachify_css('/css/dashboard.min.css') %>
    </head>
    <body>
      ...
      <%- cachify_js('/js/main.min.js') %>
    </body>
    ...

    Connect-cachified output

    If the production flag is set to false in the configuration options, connect-cachify will generate three link tags and three script tags, exactly as in the original. However, if the production flag is set to true, only one of each tag will be generated. The URL in each tag will have the MD5 hash of the production resource prepended onto its URL. This is used for cache-busting. When the contents of the production resource change, its hash also changes, effectively breaking the cache.

    ...
    <head>
      <title>Dashboard: Hamsters of North America</title>
      <link rel="stylesheet" type="text/css" href="/v/2abdd020a6/css/dashboard.min.css" />
    </head>
    <body>
      ...
      <script type="text/javascript" src="/v/acdd2ab372/js/main.min.js"></script>
    </body>
    ...

    That’s all there is to setting up connect-cachify.

    Conclusion

    There are a lot of easy wins when looking to speed up a site. By going back to basics and using the three Cs – concatenation, compression and caching – you will go a long way towards improving the load time of your site and the experience for your users. Connect-cachify helps with concatenation and caching in your NodeJS apps but there is still more we can do. In the next installment of A NodeJS Holiday Season, we will look at how to generate dynamic content and make use of ETagify to still serve maximally cacheable resources.

    Previous articles in the series

    This was part four in a series with a total of 12 posts about Node.js. The previous ones are:

  5. Firefox OS Simulator 1.0 is here!

    Three weeks back, we introduced the Firefox OS Simulator, a tool that allows web developers to try out their apps in Firefox OS from the comfort of their current Windows/Mac/Linux computers. We’ve seen a number of comments from people who used the Simulator as an easy way to get a peek at Firefox OS today, which is fine, too.

    Since that last blog post, the team has been squashing bugs to get the Simulator ready for more use. Linux users in particular will be happy to know that the current release will run on many more systems. With today’s 1.0 release, we hope to see many more users!

    Installing

    We’re leaving the “Preview” tag on the Simulator for now, both because the Simulator is new and because Firefox OS itself is still in development. This is a perfect time to create apps that work on Firefox OS (and Android and the web at large!) because your apps can be ready for the big launch.

    Install the Firefox OS Simulator from addons.mozilla.org today!

    Screencast showing Firefox OS Simulator in action

    (If you’ve opted in to HTML5 video on YouTube you will get that, otherwise it will fallback to Flash)

    Getting help

    If you spot any bugs, please file them on GitHub. Got a question? You can ask us on the dev-webapps mailing list or on #openwebapps on irc.mozilla.org.

  6. Firebug 1.11 New Features

    Firebug 1.11 has been released and so, let’s take a look at some of the new features introduced in this version.

    Firebug

    First of all, check out the following compatibility table:

    • Firebug 1.10 with Firefox 13.0 – 17.0
    • Firebug 1.11 with Firefox 17.0 – 20.0

    Firebug 1.11 is an open source project driven by contributors and volunteers, so let me also introduce all the developers who contributed to Firebug 1.11:

    • Jan Odvarko
    • Sebastian Zartner
    • Simon Lindholm
    • Florent Fayolle
    • Steven Roussey
    • Farshid Beheshti
    • Harutyun Amirjanyan
    • Bharath Thiruveedula
    • Nikhil Verma
    • Antanas Arvasevicius
    • Chris Coulson

    New Features

    SPDY Support

    Are you optimizing your page and using the SPDY protocol? Cool: the Net panel now indicates whether the protocol is in action.

    Performance Timing Visualization

    Another feature related to page load performance. If you are analyzing performance.timing, you can simply log the timing data to the Console panel and check out a nice interactive graph presenting all the information.

    Just execute the following expression on Firebug’s Command Line:

    performance.timing

    Read detailed description of this feature.

    CSS Query Selector Tool

    Firebug offers a new side panel (available in the CSS panel) that allows quick execution of CSS selectors. You can either enter your own CSS selector or get a list of matching elements for an existing selector.

    In order to see the list of matching elements for an existing CSS rule, just right-click on the rule and pick the Get Matching Elements menu item. The list of elements (together with the CSS selector) will be displayed in the side panel.

    New include() command

    Firebug supports a new command called include(). This command can be executed on the Command Line and used to include a JavaScript file into the current page.

    The simplest usage looks as follows:
    include("http://code.jquery.com/jquery-latest.min.js");

    If you are often including the same script (e.g. jqueryfying your page), you can create an alias.
    include("http://code.jquery.com/jquery-latest.min.js", "jquery");

    And use the alias as follows:
    include("jquery");

    To see the list of all defined aliases, type include() with no arguments. Note that aliases are persistent across Firefox restarts.

    Read detailed description of this command on Firebug wiki.

    Observing window.postMessage()

    Firebug improves how message events (those generated by the window.postMessage() method) are displayed in the Console panel.

    The log now displays:

    • origin window/iframe URL
    • data associated with the message
    • target window/iframe object

    See detailed explanation of the feature.

    Copy & Paste HTML

    It’s now possible to quickly clone entire parts of HTML markup by Copy & Paste. Copy HTML action has been available for some time, but Paste HTML is new. Note that XML and SVG Copy & Paste is also supported!

    See detailed explanation of the feature.

    Styled Logging

    The way console logs can be styled using custom CSS (the %c formatting variable) has been enhanced. You can now use multiple style formatters in one log.

    console.log("%cred-text %cgreen-text", "color:red", "color:green");

    See detailed explanation of this feature.

    Log Function Calls

    The Log Function Calls feature has been improved; it now also shows the stack trace of the place where the monitored function is executed.

    See detailed explanation of this feature.

    Improved $() and $$() commands

    Firebug also improves existing commands for querying DOM elements.

    $() command uses querySelector()
    $$() command uses querySelectorAll()

    It also means that the argument passed in must be a CSS selector.

    So, for example, if you need to get an element with an ID equal to “content”, you need to use the # character.

    $("#content")

    If you forget, Firebug will nicely warn you :-)

    Autocompletion for built-in properties

    The Command Line supports auto-completion even for built-in members of String.prototype or Object.prototype and other objects.

    There are many other improvements and you can see the entire list in our release notes. You can also see the official announcement on getfirebug.com.

    Follow us on Twitter to be updated!

    Jan ‘Honza’ Odvarko

  7. Performance with JavaScript String Objects

    This article aims to take a look at the performance of JavaScript engines when handling primitive string values and String objects. It is a showcase of benchmarks related to the excellent article by Kiro Risk, The Wrapper Object. Before proceeding, I would suggest visiting Kiro’s page first as an introduction to this topic.

    The ECMAScript 5.1 Language Specification (PDF link) states at paragraph 4.3.18 about the String object:

    String object: a member of the Object type that is an instance of the standard built-in String constructor

    NOTE: A String object is created by using the String constructor in a new expression, supplying a String value as an argument. The resulting object has an internal property whose value is the String value. A String object can be coerced to a String value by calling the String constructor as a function (15.5.1).

    and David Flanagan’s great book “JavaScript: The Definitive Guide” meticulously describes the wrapper objects in section 3.6:

    Strings are not objects, though, so why do they have properties? Whenever you try to refer to a property of a string s, JavaScript converts the string value to an object as if by calling new String(s). [...] Once the property has been resolved, the newly created object is discarded. (Implementations are not required to actually create and discard this transient object: they must behave as if they do, however.)

    It is important to note the text in bold above. Basically, the different ways a new String object is created are implementation specific. As such, an obvious question one could ask is “since a primitive value String must be coerced to a String Object when trying to access a property, for example str.length, would it be faster if instead we had declared the variable as a String Object?”. In other words, could declaring a variable as a String Object, i.e. var str = new String("hello"), rather than as a primitive value String, i.e. var str = "hello", potentially save the JS engine from having to create a new String Object on the fly in order to access its properties?

    Those who deal with the implementation of ECMAScript standards to JS engines already know the answer, but it’s worth having a deeper look at the common suggestion “Do not create numbers or strings using the ‘new’ operator”.
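    The distinction the benchmarks rest on can be seen directly in the console:

```javascript
// The two declarations the benchmarks compare:
var strprimitive = "hello";
var strobject = new String("hello");

console.log(typeof strprimitive);        // "string"
console.log(typeof strobject);           // "object"
console.log(strprimitive == strobject);  // true  (the object is coerced)
console.log(strprimitive === strobject); // false (different types)
```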

    Our showcase and objective

    For our showcase, we will use mainly Firefox and Chrome; the results, though, would be similar if we chose any other web browser, as we are focusing not on a speed comparison between two different browser engines, but on a speed comparison between two different versions of the source code on each browser (one version having a primitive value string, and the other a String Object). In addition, we are interested in how the same cases compare in speed across subsequent versions of the same browser. The first sample of benchmarks was collected on the same machine, and then other machines with different OS/hardware specs were added in order to validate the speed numbers.

    The scenario

    For the benchmarks, the case is rather simple; we declare two string variables, one as a primitive value string and the other as an Object String, both of which have the same value:

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

    and then we perform the same kind of tasks on them. (notice that in the jsperf pages strprimitive = str1, and strobject = str2)

    1. length property

      var i = strprimitive.length;
      var k = strobject.length;

    If we assume that during runtime the wrapper object created from the primitive value string strprimitive is treated equally with the object string strobject by the JavaScript engine in terms of performance, then we should expect to see the same latency while trying to access each variable’s length property. Yet, as we can see in the following bar chart, accessing the length property is a lot faster on the primitive value string strprimitive than on the object string strobject.


    (Primitive value string vs Wrapper Object String – length, on jsPerf)

    Actually, on Chrome 24.0.1285 calling strprimitive.length is 2.5x faster than calling strobject.length, and on Firefox 17 it is about 2x faster (though with more operations per second in absolute terms). Consequently, we realize that the corresponding browser JavaScript engines apply some “short paths” to access the length property when dealing with primitive string values, with special code blocks for each case.

    In the SpiderMonkey JS engine, for example, the pseudo-code that deals with the “get property” operation looks something like the following:

      // direct check for the "length" property
      if (typeof(value) == "string" && property == "length") {
        return StringLength(value);
      }
      // generalized code form for properties
      object = ToObject(value);
      return InternalGetProperty(object, property);

    Thus, when you request a property on a string primitive, and the property name is “length”, the engine immediately just returns its length, avoiding the full property lookup as well as the temporary wrapper object creation. Unless we add a property/method to the String.prototype requesting |this|, like so:

      String.prototype.getThis = function () { return this; }
      console.log("hello".getThis());

    no wrapper object will be created when accessing String.prototype methods, for example String.prototype.valueOf(). Each JS engine embeds similar optimizations in order to produce faster results.

    2. charAt() method

      var i = strprimitive.charAt(0);
      var k = strobject["0"];


    (Primitive value string vs Wrapper Object String – charAt(), on jsPerf)

    This benchmark clearly verifies the previous statement, as we can see that getting the value of the first string character in Firefox 20 is substantially faster in strprimitive than in strobject: a roughly 70x performance increase. Similar results apply to other browsers as well, though at different speeds. Also, notice the differences between incremental Firefox versions; this is just another indicator of how small code variations can affect the JS engine’s speed for certain runtime calls.

    3. indexOf() method

      var i = strprimitive.indexOf("e");
      var k = strobject.indexOf("e");


    (Primitive value string vs Wrapper Object String – IndexOf(), on jsPerf)

    Similarly in this case, we can see that the primitive value string strprimitive sustains more operations per second than strobject. In addition, the JS engine differences across sequential browser versions produce a variety of measurements.

    4. match() method

    Since there are similar results here too, to save some space, you can click the source link to view the benchmark.

    (Primitive value string vs Wrapper Object String – match(), on jsPerf)

    5. replace() method

    (Primitive value string vs Wrapper Object String – replace(), on jsPerf)

    6. toUpperCase() method

    (Primitive value string vs Wrapper Object String – toUpperCase(), on jsPerf)

    7. valueOf() method

      var i = strprimitive.valueOf();
      var k = strobject.valueOf();

    At this point it starts to get more interesting. So, what happens when we try to call the most common method of a string, its valueOf()? It seems like most browsers have a mechanism to determine whether it’s a primitive value string or an Object String, thus using a much faster way to get its value; surprisingly enough, Firefox versions up to v20 seem to favour the Object String method call of strobject, with a 7x increased speed.


    (Primitive value string vs Wrapper Object String – valueOf(), on jsPerf)

    It’s also worth mentioning that Chrome 22.0.1229 also seems to have favoured the Object String, while in version 23.0.1271 a new way to get the content of primitive value strings has been implemented.

    A simpler way to run this benchmark in your browser’s console is described in the comment of the jsperf page.

    8. Adding two strings

      var i = strprimitive + " there";
      var k = strobject + " there";


    (Primitive string vs Wrapper Object String – get str value, on jsPerf)

    Let’s now try and add each of the two strings to a primitive string value. As the chart shows, Firefox and Chrome present a 2.8x and 2x increased speed respectively in favour of strprimitive, as compared with adding the Object String strobject to another string value.

    9. Adding two strings with valueOf()

      var i = strprimitive.valueOf() + " there";
      var k = strobject.valueOf() + " there";


    (Primitive string vs Wrapper Object String – str valueOf, on jsPerf)

    Here we can see again that Firefox favours the strobject.valueOf(), since for strprimitive.valueOf() it moves up the inheritance tree and consequently creates a new wrapper object for strprimitive. The effect this chain of events has on performance can also be seen in the next case.

    10. for-in wrapper object

      var i = "";
      for (var temp in strprimitive) { i += strprimitive[temp]; }
     
      var k = "";
      for (var temp in strobject) { k += strobject[temp]; }

    This benchmark will incrementally construct the string’s value through a loop to another variable. In the for-in loop, the expression to be evaluated is normally an object, but if the expression is a primitive value, then this value gets coerced to its equivalent wrapper object. Of course, this is not a recommended method to get the value of a string, but it is one of the many ways a wrapper object can be created, and thus it is worth mentioning.


    (Primitive string vs Wrapper Object String – Properties, on jsPerf)

    As expected, Chrome seems to favour the primitive value string strprimitive, while Firefox and Safari seem to favour the object string strobject. In case this seems fairly typical, let’s move on to the last benchmark.

    11. Adding two strings with an Object String

      var str3 = new String(" there");
     
      var i = strprimitive + str3;
      var k = strobject + str3;


    (Primitive string vs Wrapper Object String – 2 str values, on jsPerf)

    In the previous examples, we have seen that Firefox versions offer better performance if our initial string is an Object String, like strobject, and thus it would seem normal to expect the same when adding strobject to another object string, which is basically the same thing. It is worth noticing, though, that when adding a string to an Object String, it is actually faster in Firefox if we use strprimitive instead of strobject. This proves once more how source code variations, like a patch to a bug, lead to different benchmark numbers.

    Conclusion

    Based on the benchmarks described above, we have seen a number of ways in which subtle differences in our string declarations can produce a series of different performance results. It is recommended that you continue to declare your string variables as you normally do, unless there is a very specific reason for you to create instances of the String Object. Also, note that a browser’s overall performance, particularly when dealing with the DOM, is not only based on the page’s JS performance; there is a lot more in a browser than its JS engine.

    Feedback comments are much appreciated. Thanks :-)

  8. Using secure client-side sessions to build simple and scalable Node.JS applications – A Node.JS Holiday Season, part 3

    This is episode 3, out of a total of 12, in the A Node.JS Holiday Season series from Mozilla’s Identity team. It covers using sessions for scalable Node.js applications.

    Static websites are easy to scale. You can cache the heck out of them and you don’t have state to propagate between the various servers that deliver this content to end-users.

    Unfortunately, most web applications need to carry some state in order to offer a personalized experience to users. If users can log into your site, then you need to keep sessions for them. The typical way that this is done is by setting a cookie with a random session identifier and storing session details on the server under this identifier.

    Scaling a stateful service

    Now, if you want to scale that service, you essentially have three options:

    1. replicate that session data across all of the web servers,
    2. use a central store that each web server connects to, or
    3. ensure that a given user always hits the same web server

    These all have downsides:

    • Replication has a performance cost and increases complexity.
    • A central store will limit scaling and increase latency.
    • Confining users to a specific server leads to problems when that server needs to come down.

    However, if you flip the problem around, you can find a fourth option: storing the session data on the client.

    Client-side sessions

    Pushing the session data to the browser has some obvious advantages:

    1. the data is always available, regardless of which machine is serving a user
    2. there is no state to manage on servers
    3. nothing needs to be replicated between the web servers
    4. new web servers can be added instantly

    There is one key problem though: you cannot trust the client not to tamper with the session data.

    For example, if you store the user ID for the user’s account in a cookie, it would be easy for that user to change that ID and then gain access to someone else’s account.

    While this sounds like a deal breaker, there is a clever solution to work around this trust problem: store the session data in a tamper-proof package. That way, there is no need to trust that the user hasn’t modified the session data. It can be verified by the server.

    What that means in practice is that you encrypt and sign the cookie using a server key to keep users from reading or modifying the session data. This is what client-sessions does.

    node-client-sessions

    If you use Node.JS, there’s a library available that makes getting started with client-side sessions trivial: node-client-sessions. It replaces Connect’s built-in session and cookieParser middlewares.

    This is how you can add it to a simple Express application:

    const clientSessions = require("client-sessions");
     
    app.use(clientSessions({
      secret: '0GBlJZ9EKBt2Zbi2flRPvztczCewBxXK' // set this to a long random string!
    }));

    Then, you can set properties on the req.session object like this:

    app.get('/login', function (req, res){
      req.session.username = 'JohnDoe';
    });

    and read them back:

    app.get('/', function (req, res){
      res.send('Welcome ' + req.session.username);
    });

    To terminate the session, use the reset function:

    app.get('/logout', function (req, res) {
      req.session.reset();
    });

    Immediate revocation of Persona sessions

    One of the main downsides of client-side sessions as compared to server-side ones is that the server no longer has the ability to destroy sessions.

    Using a server-side scheme, it’s enough to delete the session data that’s stored on the server because any cookies that remain on clients will now point to a non-existent session. With a client-side scheme though, the session data is not on the server, so the server cannot be sure that it has been deleted on every client. In other words, we can’t easily synchronize the server state (user logged out) with the state that’s stored on the client (user logged in).

    To compensate for this limitation, client-sessions adds an expiry to the cookies. Before unpacking the session data stored in the encrypted cookie, the server will check that it hasn’t expired. If it has, it will simply refuse to honour it and consider the user as logged out.

    While the expiry scheme works fine in most applications (especially when it’s set to a relatively low value), in the case of Persona we needed a way for users to immediately revoke their sessions as soon as they learn that their password has been compromised.

    This meant keeping a little bit of state on the backend. The way we made this instant revocation possible was by adding a new token in the user table as well as in the session cookie.

    Every API call that looks at the cookie now also reads the current token value from the database and compares it with the token from the cookie. If they don’t match, an error is returned and the user is logged out.

    The downside of this solution, of course, is the extra database read for each API call, but fortunately we already read from the user table in most of these calls, so the new token can be pulled in at the same time.
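    In sketch form, the check could be an Express middleware like the following. The names here (getUserByName, the token fields) are hypothetical and purely illustrative, not Persona's actual code:

```javascript
// Hypothetical revocation check: compare the token stored in the
// session cookie against the current token in the user table.
function checkSessionToken(db) {
  return function (req, res, next) {
    if (!req.session || !req.session.username) return next();
    db.getUserByName(req.session.username, function (err, user) {
      if (err || !user || user.token !== req.session.token) {
        req.session.reset();            // stored token changed: log out
        return res.send(401, "session revoked");
      }
      next();
    });
  };
}
```

    Changing the token in the database then invalidates every outstanding cookie at once.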

    Learn more

    If you want to give client-sessions a go, have a look at this simple demo application. Then if you find any bugs, let us know via our bug tracker.

    Previous articles in the series

    This was part three in a series with a total of 12 posts about Node.js. The previous ones are:

  9. Hacking Firefox OS

    This blog post is written by Luca Greco, a Mozilla community member who loves to hack, especially on JavaScript and other web-related technologies.

    A lot of developers are already creating mobile applications using Web technologies (powered by containers like PhoneGap/Cordova, for example), usually to develop cross-platform applications or to leverage their existing code and/or expertise.

    As a consequence, Firefox OS is a very intriguing project for quite a few reasons:

    • Web apps have first class access to the platform
    • Web apps are native (less abstraction levels and better performance)
    • The platform itself is web-based (and customizable using web technologies)

    Mobile platforms based on web technologies can be the future, and we can now touch that future. Even more importantly, we can help define it and push it further, thanks to a platform (Firefox OS) developed completely in the open. I could not resist the temptation, so I started to dive into the Firefox OS code, scrape it using MXR, and study the documentation on the wiki.

    Hack On Firefox OS

    In a couple of weeks I was ready to put together an example app and an unofficial presentation on the topic (presented at the local LinuxDay 2012):



    Slides: “Hack On Firefox OS”

    Example App: Chrono for Gaia (user interface of Firefox OS):

    During this presentation I tried to highlight an important strength I believe Firefox OS and Firefox have in common:

    “It’s not a static build, it’s alive! a truly interactive environment. Like the Web”

    I can telnet into a B2G instance and look around for interesting stuff, or run JavaScript snippets to interactively experiment with the new WebAPIs. There’s more than one option to get a remote JavaScript shell inside B2G:

    • Marionette, mainly used for automated tests
    • B2G Remote JS Shell, a minimal JavaScript shell optionally exposed on a tcp port (which I think will be deprecated in future releases)

    Unfortunately these tools currently lack integrated inspection utilities (e.g. console.log/console.dir, or MozRepl’s repl.inspect and repl.search), so during the presentation I opted to install MozRepl as an extension on the B2G simulator. In recent weeks, however, the Remote Web Console has landed in Firefox Nightly.

    Obviously the Remote Web Console isn’t mature yet, so we need to be prepared for bugs and silent errors (e.g. we need to be sure B2G has the “remote debugger” enabled, or it will fail without errors), but it features object inspection, network progress logging, and JavaScript and CSS errors (like our local Firefox Web Console).

    Developing for Firefox OS

    In my experience, developing an OpenWebApp for Firefox OS isn’t really different from developing hybrid applications based on PhoneGap-like technologies:

    We’ll code and test the major parts of our application in desktop browsers and their development tools, using mockups/shims in place of native functionality. During my two weeks of studying, I’ve collected an interesting amount of working notes that may be useful to share, so in the next sections I’m going to:

    • Review development workflow and tools
    • Put together a couple of useful tips and tricks
    • Release to the public (independently and in the Mozilla Marketplace)

    As an example app, I’ve chosen to create a simple chronometer:

    Workflow and Tools

    During this experience my toolset was composed of:

    • VoloJS – development server and automated production build (minify js/css, generate manifest.appcache)
    • Firefox Nightly Web Developer Tools – Markup View, Tilt, Responsive Design View and Web Console
    • R2D2B2G – integrate/interoperate with B2G from a Firefox Addon

    Using Volo, I can test my app from Volo’s integrated HTTP server, split JavaScript code into modules using Require.js, and finally generate a production version, minified and optionally equipped with an auto-generated manifest.appcache.

    During my development cycle I iteratively:

    • Make a change
    • Reload and inspect changes on desktop browser
    • Try changes on b2g-simulator using R2D2B2G
    • Debug on desktop browser or remotely on b2g-simulator
    • Restart the loop :-)

    Using my favorite desktop browser (Firefox, obviously :-P), I have the chance to use very powerful inspection/debugging tools not usually available in mobile web runtimes:

    • Markup Viewer to inspect and change DOM tree state
    • Styles Editor to inspect and change CSS properties
    • Tilt to check where offscreen dom elements are located
    • Web Console to inspect and change the JavaScript environment

    Thanks to new Mozilla projects like “Firefox OS” and “Firefox for Android”, more and more of these tools are now available as “Remote Web Tools” and can be connected to a remote instance.

    Tips and Tricks

    Gaia UI Building Blocks

    Gaia isn’t only the B2G UI implementation; it’s a design style guide and a collection of pre-built CSS styles that implement those guidelines:

    We can import component styles from the repo above and apply them to our app to achieve a really nice native look & feel on Firefox OS. Some components are not stable, which means they can interact badly with other components’ styles or don’t work perfectly on all platforms (e.g. Firefox Desktop or Firefox for Android), but usually it’s nothing we can’t fix with some custom, more specific CSS rules.

    It doesn’t yet feel like a mature and complete CSS framework (e.g. Bootstrap), but it’s promising and I’m sure it will get better.

    Using the Responsive Design View we can test different resolutions (and orientations), which helps us reach a good and consistent result even without testing our app on Firefox OS or Firefox for Android devices. We should keep an eye on DPI-related tweaks, though, because we currently can’t fully predict how the app will look from our desktop browser.

    App Panels

    A lot of apps need more than one panel, so I first looked inside the official Gaia applications to understand how native apps implement this almost mandatory feature. This is how the Gaia Clock application appears through “Tilt” eyes:

    Panels are simple DOM elements (e.g. a section or a div tag), initially positioned offscreen and moved on screen using a CSS transition:
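    A minimal sketch of the technique (the selectors and class names here are illustrative, not Gaia's actual ones):

```css
/* Panels start offscreen to the right and slide in with a transition */
section[role="region"] {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  transform: translateX(100%);  /* offscreen */
  transition: transform 0.3s ease;
}

/* Adding the class (or matching :target) moves the panel on screen */
section[role="region"].active {
  transform: translateX(0);
}
```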

    In the “Chrono” app, you will find this strategy in the Drawer (an unstable Gaia UI Building Block):

    and in the Laps and About panels (combined with the :target pseudo-class):

    The Fascinating -moz-element trick

    This is a very fascinating trick, used in the time selector component on Firefox OS:

    and in the Markup View on Firefox Nightly Desktop (currently disabled):

    Thanks to this non-standard CSS feature, we can use a DOM element as the background image of another one, e.g. integrating a complex offscreen visual component into the visible space as a single DOM element.

    Fragments from index.html

    Fragments from chrono.css

    Fragments from chrono.js

    Using -moz-element and -moz-calc (which computes component sizes in CSS rules, and is already included in CSS3 as calc()) is really simple, but you can learn more on the subject on MDN:

    Release to the public

    WebApp Manifest

    During our development cycle we install our application into the B2G simulator using an R2D2B2G menu option, so we don’t need a real manifest.webapp, but we do have to create one when we’re ready for a public release, or to release the app to test users.

    Creating a manifest.webapp is not difficult; it’s a simple and well-documented JSON format: App/Manifest on MDN.
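    A minimal sketch of what such a manifest might contain (the names and paths here are illustrative placeholders, not the real Chrono manifest):

```json
{
  "name": "Chrono",
  "description": "A simple chronometer app",
  "launch_path": "/gaia-chrono-app/index.html",
  "icons": {
    "128": "/gaia-chrono-app/icon.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "http://example.github.com"
  }
}
```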

    Debugging problems related to this manifest file is still unknown territory, so some tips can be useful:

    • If the manifest file contains a syntax error or cannot be downloaded, an error will be silently reported in the old Error Console (no, it will not be reported in the new Web Console)
    • If your application is accessible as a subdirectory of its domain, you have to include this path in the resource paths specified inside the manifest (e.g. launch_path, appcache_path, icons); more on this later
    • You can add an uninstall button to your app to help you as a developer (and your test users) uninstall the app in a platform-independent way (because “how to uninstall” an installed webapp differs between Desktop, Android and Firefox OS)

    Using the OpenWebApps APIs, I added some code to “Chrono” to give users the ability to install it:

    Install an app from your browser to your desktop system

    Check if it’s already installed (as a webapp runtime app or a self-service installer in a browser tab):
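    The two snippets above were embedded as gists in the original post; the idea can be approximated like this. The helper name is hypothetical, and navigator.mozApps is the (since-retired) Open Web Apps API:

```javascript
// Hypothetical helper around navigator.mozApps: hides the install
// button when the app is already installed, and wires up installation.
// `apps` is navigator.mozApps in a real page (passed in here so the
// helper is testable outside a browser).
function setupInstallButton(apps, button, manifestUrl) {
  var checking = apps.getSelf();
  checking.onsuccess = function () {
    // getSelf() yields the app record if this origin's app is
    // installed, null otherwise
    if (checking.result) button.hidden = true;
  };
  button.onclick = function () {
    var request = apps.install(manifestUrl);
    request.onerror = function () {
      console.log("Install failed: " + request.error.name);
    };
  };
}

// In a real page:
// setupInstallButton(navigator.mozApps,
//                    document.getElementById("install"),
//                    location.origin + "/manifest.webapp");
```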

    On a Linux desktop, when you install an OpenWebApp from Firefox, it will create a new launcher (a “.desktop” file) in your hidden “.local/share/applications” directory:

    $ cat ~/.local/share/applications/owa-http\;alcacoop.github.com.desktop 
    [Desktop Entry]
    Name=Chrono
    Comment=Gaia Chronometer Example App
    Exec="/home/rpl/.http;alcacoop.github.com/webapprt-stub"
    Icon=/home/rpl/.http;alcacoop.github.com/icon.png
    Type=Application
    Terminal=false
    Actions=Uninstall;
    [Desktop Action Uninstall]
    Name=Uninstall App
    Exec=/home/rpl/.http;alcacoop.github.com/webapprt-stub -remove

    As you will notice, the current conventions (and implementation) support only one application per domain. If you take a look inside the hidden directory of our installed webapp, you will find a single webapp.json config file:

    $ ls /home/rpl/.http\;alcacoop.github.com/
    Crash Reports  icon.png  profiles.ini  webapp.ini  webapp.json  webapprt-stub  
    z8dvpe0j.default
    

    The reasons for this limitation are documented on MDN: FAQs about app manifests

    To help yourself debug problems when your app is running inside the webapp runtime, you can run it from the command line and enable the old (but still useful) Error Console:

    $ ~/.http\;alcacoop.github.com/webapprt-stub -jsconsole

    Uninstalling an OpenWebApp is really simple: you can remove it manually using the “webapprt-stub” executable in the OpenWebApp hidden directory (a platform-dependent method):

    $ ~/.http\;alcacoop.github.com/webapprt-stub -remove

    Or from JavaScript code, as I’ve done in “Chrono” to give users the ability to uninstall the app from a Firefox Browser tab:

    AppCache Manifest

    This feature landed in major browsers a long time ago, but now, thanks to OpenWebApps, it’s becoming essential: without a manifest.appcache, and proper JavaScript code to handle upgrades, our webapp will not work offline correctly and will not feel like a real installed application.

    Currently AppCache is a kind of black magic, and it deserves a “Facts Page” like Chuck Norris: AppCacheFacts.

    Thanks to the volo-appcache command, generating a manifest.appcache is as simple as a single command:

      $ volo appcache
      ...
      $ ls -l www-build/manifest.appcache
      ...
      $ tar cjvf release.tar.bz2 www-build
      ...

    Unfortunately, when you need to debug/test your manifest.appcache, you’re on your own, because currently there isn’t any friendly debugging tool integrated into Firefox:

    • appcache download progress (and errors) is not currently reported in the Web Console
    • appcache errors don’t contain an error message/description
    • Firefox for Android and Firefox OS don’t have a UI to clear your applicationCache

    Debugging appcache problems can be very tricky, so here are a couple of tricks I learned during this experiment:

    • Subscribe to every window.applicationCache event (‘error’, ‘checking’, ‘noupdate’, ‘progress’, ‘downloading’, ‘cached’, ‘updateready’, etc.) and log all received events / error messages during development and debugging
    • Add upgrade-handling code in your first public release (or prepare yourself to go house to house to help your users upgrade :-D)
    • On Firefox for Desktop you can remove the applicationCache from the Preferences dialog
    • Analyze server-side logs to understand where the “appcache updating” is stuck
    • Activate logging of applicationCache internals when you run Firefox or B2G to understand why it’s stuck
      (from https://mxr.mozilla.org/mozilla-central/source/uriloader/prefetch/nsOfflineCacheUpdate.cpp#62):

      export NSPR_LOG_MODULES=nsOfflineCacheUpdate:5
      export NSPR_LOG_FILE=offlineupdate.log
      firefox -no-remote -ProfileManager &
       
      tail -f offlineupdate.log
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Init [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::AddObserver [7fc56a9fcc08] to update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::AddObserver [7fc55c3264d8] to update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Schedule [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::Schedule [7fc57428dac0, update=7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::ProcessNextUpdate [7fc57428dac0, num=1]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Begin [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 2]
      -1614710976[7fc59e91f590]: 7fc559d0df00: Opening channel for http://html5dev:8888/gaia-chrono-app/manifest.appcache
      -1614710976[7fc59e91f590]: loaded 3981 bytes into offline cache [offset=0]
      -1614710976[7fc59e91f590]: Update not needed, downloaded manifest content is byte-for-byte identical
      -1614710976[7fc59e91f590]: done fetching offline item [status=0]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::LoadCompleted [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 3]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::Finish [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::UpdateFinished [7fc57428dac0, update=7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdateService::ProcessNextUpdate [7fc57428dac0, num=0]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::NotifyState [7fc55959ce50, 10]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::RemoveObserver [7fc56a9fcc08] from update [7fc55959ce50]
      -1614710976[7fc59e91f590]: nsOfflineCacheUpdate::RemoveObserver [7fc55c3264d8] from update [7fc55959ce50]

    While adding applicationCache support to the “Gaia Chrono App”, I used all these tricks to finally discover that Firefox didn’t send me an “updateready” event, so I was not able to tell the user to reload the page to start using the new (and already cached) version. With a better understanding of the problem, and after searching the code on MXR and the tickets on Bugzilla, I finally found an existing ticket in the bugtracker: Bug 683794: onupdateready event not fired when an html5 app cache app is updated.

    Working around this bug is really simple (simpler than tracking it down): add a dummy “updateready” listener on the applicationCache object in a script tag, to be sure the event will be fired:
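    The original post embedded the snippet as a gist; a minimal reconstruction of the idea looks like this (the helper name is mine):

```javascript
// Workaround for Bug 683794: registering a no-op "updateready"
// listener as early as possible ensures the event is reliably
// dispatched, so later listeners (which prompt the user to reload)
// actually fire.
function ensureUpdateReadyFires(cache) {
  cache.addEventListener("updateready", function () {}, false);
}

// In a real page, call it from an inline <script> tag in <head>:
// ensureUpdateReadyFires(window.applicationCache);
```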

    If you are going to start using this feature (and sooner or later you will), be prepared to:

    • Implement it as the standard suggests
    • Debug the reason why it doesn’t behave as it should
    • Search for an existing bug, or file one if it’s not reported (NOTE: this is really important!!! :-D)
    • Find a workaround

    This is definitely a feature that needs more support in web developer tools, because a regular web developer doesn’t want to debug his webapp from a “browser internals” point of view.

    Port to Firefox for Android

    An interesting feature of OpenWebApps is that they can be installed on any supported platform with (almost) no changes. As an example, we can install our “Chrono” app on our desktop using Firefox Nightly, and on Android using Firefox for Android Nightly.

    In my own opinion, Firefox for Android can be a strategic platform for the future of OpenWebApps, and even Firefox OS: Android is an already popular mobile platform, and giving developers the option to release their application on Firefox OS and Android from a single codebase is a big plus.

    The only problem I faced porting the “Chrono” app to Android was related to the different rendering behaviour of Firefox for Android (and, as a consequence, of the WebAppRT, which contains our application):

    The GeckoScreenshot service will force a repaint only when and where it detects changes. This feature interacts badly with the -moz-element trick, and it needs some help to understand what really needs to be repainted:

    Release to the public

    GitHub Pages are a rapid and simple option for releasing our app to the public, made even simpler by the volo-ghdeploy command:

      $ volo appcache && volo ghdeploy
      ...

    When you deploy your OpenWebApp in a subdirectory of a given domain (e.g. using GitHub Pages), you should take into account that the paths in manifest.webapp need to be relative to your origin (protocol+host+port) and not to your current URL:

    We can install only a single OpenWebApp per origin, so if you want to deploy more than one app from your GitHub Pages, you need to configure GitHub Pages to be exposed on a custom domain: GitHub Help – Setting up a custom domain with Pages.

    When our app is finally online and publicly accessible, we can submit it to the Mozilla Marketplace to gain more visibility.

    During the app submission procedure, your manifest.webapp will be validated, and you will be warned if (and how) you need to tweak it to be able to complete your submission:

    • Errors related to missing info (e.g. name or icons)
    • Errors related to invalid values (e.g. orientation)

    As in other mobile marketplaces, you should collect and fill in for your submission:

    • manifest.webapp URL (NOTE: it will be read-only in the developer panel and you can’t change it)
    • A longer description and feature list
    • Short release description
    • One or more screenshots

    Mini Market

    The Mozilla Marketplace’s goal is to help OpenWebApps gain more visibility, as other mobile stores currently do for their ecosystems, but Mozilla named this project OpenWebApps for a reason:

    Mozilla isn’t the only one that can create a Marketplace for OpenWebApps! Mozilla designed the system to give us the same freedom we have on the Web, no more, no less.

    This is a very powerful feature, because it opens up a lot of interesting use cases for developers:

    • Carrier Apps Market on Firefox OS devices
    • Non-Public Apps Installer/Manager
    • Intranet Apps Installer/Manager

    Conclusion

    Obviously Firefox OS and OpenWebApps aren’t fully complete right now (but they improve at an impressive speed), and Firefox OS doesn’t have an officially released SDK; then again, the web doesn’t have an official SDK either, and we already use it every day to do awesome stuff.

    So if you are interested in mobile platforms and want to learn how a mobile platform is born and grows, or you are a web developer and want more and more web technologies in mobile ecosystems…

    You should seriously take a look at Firefox OS:

    We deserve a more open mobile ecosystem, so let’s start pushing for it now; let’s help OpenWebApps and Firefox OS become our new powerful tools!

    Happy Hacking!

  10. Firefox OS – video presentations and slides on the OS, WebAPIs, hacking and writing apps

    In August, Mozilla’s Director of Research Andreas Gal and one of the lead engineers for Firefox OS, Philipp von Weitershausen, gave a couple of presentations in Brazil about Firefox OS. We’re now happy to share both the videos and the slides, in various formats for you to watch or to use when giving your own presentations!

    Videos

    The videos are available on YouTube in the Mozilla Hacks channel, and they are split up into:

    Firefox OS – Introduction & Components

    Firefox OS – Developer Environment, Apps, Marketplace

    Firefox OS – WebAPIs & UI hacking

    Note: if you’ve opted in to HTML5 on YouTube you will get HTML5 video.

    Slides

    Four different slide decks were used:

    They are also available in Portuguese:

    We’ve also made these slide decks available in other formats if you want to reuse them and give your own presentations about Firefox OS (and if you love talking about the Open Web, have you considered becoming an Evangelism Rep?).

    Keynote format

    PowerPoint format