Featured Articles


  1. localForage: Offline Storage, Improved

    Web apps have had offline capabilities like saving large data sets and binary files for some time. You can even do things like cache MP3 files. Browser technology can store data offline and plenty of it. The problem, though, is that the technology choices for how you do this are fragmented.

localStorage gets you really basic data storage, but it’s slow and can’t handle binary blobs. IndexedDB and WebSQL are asynchronous, fast, and support large data sets, but their APIs aren’t very straightforward. Still, neither IndexedDB nor WebSQL has support from all of the major browser vendors, and that doesn’t seem likely to change in the near future.

    If you need to write a web app with offline support and don’t know where to start, then this is the article for you. If you’ve ever tried to start working with offline support but it made your head spin, this article is for you too. Mozilla has made a library called localForage that makes storing data offline in any browser a much easier task.

around is an HTML5 Foursquare client that I wrote, and it helped me work through some of the pain points of offline storage. We’re still going to walk through how to use localForage, but there’s some source for those of you who like to learn by perusing code.

localForage is a JavaScript library that uses the very simple localStorage API. It offers the same basic features — get, set, remove, clear, and length — but adds:

    • an asynchronous API with callbacks
    • IndexedDB, WebSQL, and localStorage drivers (managed automatically; the best driver is loaded for you)
    • Blob and arbitrary type support, so you can store images, files, etc.
    • support for ES6 Promises

    The inclusion of IndexedDB and WebSQL support allows you to store more data for your web app than localStorage alone would allow. The non-blocking nature of their APIs makes your app faster by not hanging the main thread on get/set calls. Support for promises makes it a pleasure to write JavaScript without callback soup. Of course, if you’re a fan of callbacks, localForage supports those too.

    Enough talk; show me how it works!

The traditional localStorage API, in many regards, is actually very nice; it’s simple to use, doesn’t enforce complex data structures, and requires zero boilerplate. If you have configuration information in an app you want to save, all you need to write is:

    // Our config values we want to store offline.
    var config = {
        fullName: document.getElementById('name').getAttribute('value'),
        userId: document.getElementById('id').getAttribute('value')
    };
     
    // Let's save it for the next time we load the app.
    localStorage.setItem('config', JSON.stringify(config));
     
    // The next time we load the app, we can do:
    var config = JSON.parse(localStorage.getItem('config'));

    Note that we need to save values in localStorage as strings, so we convert to/from JSON when interacting with it.

    This appears delightfully straightforward, but you’ll immediately notice a few issues with localStorage:

    1. It’s synchronous. We wait until the data has been read from the disk and parsed, regardless of how large it might be. This slows down our app’s responsiveness. This is especially bad on mobile devices; the main thread is halted until the data is fetched, making your app seem slow and even unresponsive.

    2. It only supports strings. Notice how we had to use JSON.parse and JSON.stringify? That’s because localStorage only supports values that are JavaScript strings. No numbers, booleans, Blobs, etc. This makes storing numbers or arrays annoying, but effectively makes storing Blobs impossible (or at least VERY annoying and slow).

    A better way with localForage

localForage gets past both of these problems by using asynchronous storage (IndexedDB or WebSQL where available) while keeping localStorage’s simple API. Compare using IndexedDB to localForage for the same bit of data:

    IndexedDB Code

    // IndexedDB.
    var db;
    var dbName = "dataspace";
     
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
     
    var request = indexedDB.open(dbName, 2);
     
    request.onerror = function(event) {
        // Handle errors.
    };
    request.onupgradeneeded = function(event) {
        db = event.target.result;
     
        var objectStore = db.createObjectStore("users", { keyPath: "id" });
     
        objectStore.createIndex("fullName", "fullName", { unique: false });
     
        objectStore.transaction.oncomplete = function(event) {
            var userObjectStore = db.transaction("users", "readwrite").objectStore("users");
        }
    };
     
    // Once the database is created, let's add our user to it...
     
    var transaction = db.transaction(["users"], "readwrite");
     
    // Do something when all the data is added to the database.
    transaction.oncomplete = function(event) {
        console.log("All done!");
    };
     
    transaction.onerror = function(event) {
        // Don't forget to handle errors!
    };
     
    var objectStore = transaction.objectStore("users");
     
    for (var i in users) {
        var request = objectStore.add(users[i]);
        request.onsuccess = function(event) {
            // Contains our user info.
            console.log(event.target.result);
        };
    }

    WebSQL wouldn’t be quite as verbose, but it would still require a fair bit of boilerplate. With localForage, you get to write this:

    localForage Code

    // Save our users.
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
    localForage.setItem('users', users, function(result) {
        console.log(result);
    });

    That was a bit less work.

    Data other than strings

    Let’s say you want to download a user’s profile picture for your app and cache it for offline use. It’s easy to save binary data with localForage:

    // We'll download the user's photo with AJAX.
    var request = new XMLHttpRequest();
     
    // Let's get the first user's photo.
    request.open('GET', "/users/1/profile_picture.jpg", true);
    request.responseType = 'arraybuffer';
     
    // When the AJAX state changes, save the photo locally.
    request.addEventListener('readystatechange', function() {
        if (request.readyState === 4) { // readyState DONE
            // We store the binary data as-is; this wouldn't work with localStorage.
            localForage.setItem('user_1_photo', request.response, function() {
                // Photo has been saved, do whatever happens next!
            });
        }
    });
     
request.send();

    Next time we can get the photo out of localForage with just three lines of code:

    localForage.getItem('user_1_photo', function(photo) {
        // Create a data URI or something to put the photo in an img tag or similar.
        console.log(photo);
    });

    Callbacks and promises

    If you don’t like using callbacks in your code, you can use ES6 Promises instead of the callback argument in localForage. Let’s get that photo from the last example, but use promises instead of a callback:

    localForage.getItem('user_1_photo').then(function(photo) {
        // Create a data URI or something to put the photo in an <img> tag or similar.
        console.log(photo);
    });

    Admittedly, that’s a bit of a contrived example, but around has some real-world code if you’re interested in seeing the library in everyday usage.

    Cross-browser support

    localForage supports all modern browsers. IndexedDB is available in all modern browsers aside from Safari (IE 10+, IE Mobile 10+, Firefox 10+, Firefox for Android 25+, Chrome 23+, Chrome for Android 32+, and Opera 15+). Meanwhile, the stock Android Browser (2.1+) and Safari use WebSQL.

In the worst case, localForage will fall back to localStorage, so you can at least store basic data offline (though without Blob support, and much more slowly). It also takes care of automatically converting your data to/from JSON strings, which is how localStorage needs data to be stored.
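If you want to see which driver localForage picked, or force one for testing, the library exposes driver selection as well. A minimal sketch, assuming the current localForage API (driver() and setDriver()):

// Ask localForage which driver it selected for this browser.
console.log(localForage.driver()); // e.g. the IndexedDB driver

// Force the localStorage driver, e.g. to test the worst-case fallback.
localForage.setDriver(localForage.LOCALSTORAGE).then(function() {
    console.log('Now using:', localForage.driver());
});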

    Learn more about localForage on GitHub, and please file issues if you’d like to see the library do more!

  2. CSS source map support, network performance analysis & more – Firefox Developer Tools Episode 29

    Firefox 29 was just uplifted to the Aurora release channel. This means that it is time to report some of the major changes that you can expect to see inside of the Developer Tools for this release.

    Better Looking Tools

    In addition to new features, we have been updating the look and feel of our dark and light themes. The light theme has been completely overhauled, and both themes feature a more consistent design throughout the toolbox. Your current theme can be changed from the Toolbox settings. (development notes)

    Network Monitor

    The Network Monitor now shows you how long it takes the browser to load different parts of your page. This will help measure the network performance of applications, both on first-run and with a primed cache. (development notes)

    To open the performance analysis tool, click the stopwatch icon in the network panel. For more information, watch the screencast below or read more on MDN.

    You can now copy an image request as a Data URI. Just right click on the image request, select the item from the context menu, and the Data URI will be on your clipboard. (development notes)

    Inspector

    We’ve updated the inspector highlighter behavior to bring the highlighting functionality more in line with other tools. (development notes)

CSS transform preview tooltips have been added to the CSS rule view. Now, if you hover over a CSS transform, you will get a tooltip with a visualization of the transform. Grab a download of Firefox Nightly or Aurora and try it out on some live CSS transform examples. (development notes)

    CSS rule view now supports pasting multiple CSS declarations at once, like background: #ccc; color: red. (development notes).

    Just like in the network panel, you can now copy <img> elements as Data URIs. (development notes)

    Style Editor

CSS source map support has been added to the Style Editor (development notes), and CSS properties and values will now be autocompleted in the Style Editor (development notes).

    Want to read more? We have published a post with more information about using source maps in DevTools to live edit Sass and Less.

    Debugger

    We have added a classic call stack list in the debugger next to the list of sources. (development notes)

    There is a new ‘enable/disable all breakpoints’ button in the debugger. This will toggle the active state of all existing breakpoints at once, to allow switching between normal usage and debugging quickly. (development notes)

    You can now highlight and inspect DOM nodes from the debugger. If you hover a DOM node in the variables listing it will be highlighted on the page, and if you click on the inspect icon the node will be opened in the inspector tab. This feature is also available in the console output. (development notes)

    Pretty printing now preserves code comments. We are using the open source pretty-fast pretty printer, so it should be pretty fast. If it isn’t, be sure to let us know. (development notes)

    Console

    console.trace improvements. The call stack is shown inline with other output, and includes links to access each line in the debugger. (development notes)
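For example, a trivial snippet just to show where the trace appears:

function save() { validate(); }
function validate() {
    console.trace(); // the stack (validate <- save <- ...) appears inline here
}
save(); // each stack frame links to its line in the debugger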

    We’ve also improved console object output to show additional information based on the object type. (development notes)

    Code Editor

    The code editor can be seen throughout the tools in places like Scratchpad, Style Editor, and Debugger. Here are some of the updates you will see in this release:

    • Code folding in the editor. (development notes)
    • Emacs and VIM keybindings are now available in the code editor. To enable them, open about:config, and set “devtools.editor.keymap” to either “vim” or “emacs”, then restart DevTools. (development notes)
    • ES6 syntax highlighting support (development notes)

    Big thanks to all of our DevTools contributors this release (43 people)! Here is a list of all DevTools bugs resolved for Firefox 29.

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here or get in touch with the team at @FirefoxDevTools.

  3. WebGL Deferred Shading

    WebGL brings hardware-accelerated 3D graphics to the web. Many features of WebGL 2 are available today as WebGL extensions. In this article, we describe how to use the WEBGL_draw_buffers extension to create a scene with a large number of dynamic lights using a technique called deferred shading, which is popular among top-tier games.

live demo | source code

    Today, most WebGL engines use forward shading, where lighting is computed in the same pass that geometry is transformed. This makes it difficult to support a large number of dynamic lights and different light types.

    Forward shading can use a pass per light. Rendering a scene looks like:

    foreach light {
      foreach visible mesh {
        if (light volume intersects mesh) {
          render using this material/light shader;
          accumulate in framebuffer using additive blending;
        }
      }
    }

This requires a different shader for each material/light-type combination, which adds up. From a performance perspective, each mesh needs to be rendered (vertex transform, rasterization, material part of the fragment shader, etc.) once per light instead of just once. In addition, fragments that ultimately fail the depth test are still shaded, but with early-z and z-cull hardware optimizations and front-to-back sorting or a z-prepass, this is not as bad as the cost of adding lights.

    To optimize performance, light sources that have a limited effect are often used. Unlike real-world lights, we allow the light from a point source to travel only a limited distance. However, even if a light’s volume of effect intersects a mesh, it may only affect a small part of the mesh, but the entire mesh is still rendered.

    In practice, forward shaders usually try to do as much work as they can in a single pass leading to the need for a complex system of chaining lights together in a single shader. For example:

    foreach visible mesh {
      find lights affecting mesh;
      Render all lights and materials using a single shader;
    }

    The biggest drawback is the number of shaders required since a different shader is required for each material/light (not light type) combination. This makes shaders harder to author, increases compile times, usually requires runtime compiling, and increases the number of shaders to sort by. Although meshes are only rendered once, this also has the same performance drawbacks for fragments that fail the depth test as the multi-pass approach.

    Deferred Shading

    Deferred shading takes a different approach than forward shading by dividing rendering into two passes: the g-buffer pass, which transforms geometry and writes positions, normals, and material properties to textures called the g-buffer, and the light accumulation pass, which performs lighting as a series of screen-space post-processing effects.

    // g-buffer pass
    foreach visible mesh {
      write material properties to g-buffer;
    }
     
    // light accumulation pass
    foreach light {
      compute light by reading g-buffer;
      accumulate in framebuffer;
    }

    This decouples lighting from scene complexity (number of triangles) and only requires one shader per material and per light type. Since lighting takes place in screen-space, fragments failing the z-test are not shaded, essentially bringing the depth complexity down to one. There are also downsides such as its high memory bandwidth usage and making translucency and anti-aliasing difficult.

    Until recently, WebGL had a roadblock for implementing deferred shading. In WebGL, a fragment shader could only write to a single texture/renderbuffer. With deferred shading, the g-buffer is usually composed of several textures, which meant that the scene needed to be rendered multiple times during the g-buffer pass.

    WEBGL_draw_buffers

    Now with the WEBGL_draw_buffers extension, a fragment shader can write to several textures. To use this extension in Firefox, browse to about:config and turn on webgl.enable-draft-extensions. Then, to make sure your system supports WEBGL_draw_buffers, browse to webglreport.com and verify it is in the list of extensions at the bottom of the page.

    To use the extension, first initialize it:

    var ext = gl.getExtension('WEBGL_draw_buffers');
    if (!ext) {
      // ...
    }

    We can now bind multiple textures, tx[] in the example below, to different framebuffer color attachments.

    var fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tx[0], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tx[1], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, tx[2], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT3_WEBGL, gl.TEXTURE_2D, tx[3], 0);

    For debugging, we can check to see if the attachments are compatible by calling gl.checkFramebufferStatus. This function is slow and should not be called often in release code.

    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
      // Can't use framebuffer.
      // See http://www.khronos.org/opengles/sdk/docs/man/xhtml/glCheckFramebufferStatus.xml
    }

    Next, we map the color attachments to draw buffer slots that the fragment shader will write to using gl_FragData.

    ext.drawBuffersWEBGL([
      ext.COLOR_ATTACHMENT0_WEBGL, // gl_FragData[0]
      ext.COLOR_ATTACHMENT1_WEBGL, // gl_FragData[1]
      ext.COLOR_ATTACHMENT2_WEBGL, // gl_FragData[2]
      ext.COLOR_ATTACHMENT3_WEBGL  // gl_FragData[3]
    ]);

The maximum size of the array passed to drawBuffersWEBGL depends on the system and can be queried by calling gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL). In GLSL, this is also available as gl_MaxDrawBuffers.

    In the deferred shading geometry pass, the fragment shader writes to multiple textures. A trivial pass-through fragment shader is:

    #extension GL_EXT_draw_buffers : require
    precision highp float;
    void main(void) {
      gl_FragData[0] = vec4(0.25);
      gl_FragData[1] = vec4(0.5);
      gl_FragData[2] = vec4(0.75);
      gl_FragData[3] = vec4(1.0);
    }

    Even though we initialized the extension in JavaScript with gl.getExtension, the GLSL code still needs to include #extension GL_EXT_draw_buffers : require to use the extension. With the extension, the output is now the gl_FragData array that maps to framebuffer color attachments, not gl_FragColor, which is traditionally the output.

    g-buffers

    In our deferred shading implementation the g-buffer is composed of four textures: eye-space position, eye-space normal, color, and depth. Position, normal, and color use the floating-point RGBA format via the OES_texture_float extension, and depth uses the unsigned-short DEPTH_COMPONENT format.
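As a rough sketch, creating one such floating-point g-buffer texture might look like this (sizes and filtering are illustrative, and it assumes OES_texture_float is available):

var floatExt = gl.getExtension('OES_texture_float');

var width = gl.drawingBufferWidth;
var height = gl.drawingBufferHeight;

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// The gl.FLOAT texel type is what OES_texture_float enables.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);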

Figures: position texture, normal texture, color texture, depth texture, and light accumulation using the g-buffers.

    This g-buffer layout is simple for our testing. Although four textures is common for a full deferred shading engine, an optimized implementation would try to use the least amount of memory by lowering precision, reconstructing position from depth, packing values together, using different distributions, and so on.

    With WEBGL_draw_buffers, we can use a single pass to write each texture in the g-buffer. Compared to using a single pass per texture, this improves performance and reduces the amount of JavaScript code and GLSL shaders. As shown in the graph below, as scene complexity increases so does the benefit of using WEBGL_draw_buffers. Since increasing scene complexity requires more drawElements/drawArrays calls, more JavaScript overhead, and transforms more triangles, WEBGL_draw_buffers provides a benefit by writing the g-buffer in a single pass, not a pass per texture.

All performance numbers were measured using an NVIDIA GT 620M, which is a low-end GPU with 96 cores, in Firefox 26.0 on Windows 8. In the above graph, 20 point lights were used. The light intensity decreases proportionally to the square of the distance between the current position and the light position. Each Stanford Dragon is 100,000 triangles and requires five draw calls so, for example, when 25 dragons are rendered, 125 draw calls (and related state changes) are issued, and a total of 2,500,000 triangles are transformed.


    WEBGL_draw_buffers test scene, shown here with 100 Stanford Dragons.

    Of course, when scene complexity is very low, like the case of one dragon, the cost of the g-buffer pass is low so the savings from WEBGL_draw_buffers are minimal, especially if there are many lights in the scene, which drives up the cost of the light accumulation pass as shown in the graph below.

    Deferred shading requires a lot of GPU memory bandwidth, which can hurt performance and increase power usage. After the g-buffer pass, a naive implementation of the light accumulation pass would render each light as a full-screen quad and read the entirety of each g-buffer. Since most light types, like point and spot lights, attenuate and have a limited volume of effect, the full-screen quad can be replaced with a world-space bounding volume or tight screen-space bounding rectangle. Our implementation renders a full-screen quad per light and uses the scissor test to limit the fragment shader to the light’s volume of effect.
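A sketch of that last approach, with hypothetical helpers for the screen-space rectangle and the quad draw:

gl.enable(gl.SCISSOR_TEST);
lights.forEach(function (light) {
    // computeScreenRect is a hypothetical helper that projects the light's
    // volume of effect to a pixel rectangle {x, y, width, height}.
    var r = computeScreenRect(light);
    gl.scissor(r.x, r.y, r.width, r.height);
    // Only fragments inside the scissor rectangle run the (expensive)
    // light accumulation shader over the g-buffer.
    drawFullScreenQuad(lightProgram, light); // also a hypothetical helper
});
gl.disable(gl.SCISSOR_TEST);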

    Tile-Based Deferred Shading

Tile-based deferred shading takes this a step further: it splits the screen into tiles, for example 16×16 pixels, and then determines which lights influence each tile. Light-tile information is then passed to the shader and the g-buffer is only read once for all lights. Since this drastically reduces memory bandwidth, it improves performance. The following graph shows performance for the Sponza scene (66,450 triangles and 38 draw calls) at 1024×768 with 32×32 tiles.

Tile size affects performance. Smaller tiles require more JavaScript overhead to create light-tile information, but less computation in the lighting shader. Larger tiles have the opposite tradeoff. Therefore, choosing a suitable tile size is important for performance. The figure below shows the relationship between tile size and performance with 100 lights.
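A minimal sketch of the light-binning step on the JavaScript side (it assumes each light already has a precomputed screen-space bounding rectangle):

function binLights(lights, width, height, tileSize) {
    var cols = Math.ceil(width / tileSize);
    var rows = Math.ceil(height / tileSize);
    var tiles = [];
    for (var i = 0; i < cols * rows; i++) {
        tiles.push([]);
    }
    lights.forEach(function (light, index) {
        var r = light.screenRect; // assumed: {x0, y0, x1, y1} in pixels
        var c0 = Math.max(0, Math.floor(r.x0 / tileSize));
        var c1 = Math.min(cols - 1, Math.floor(r.x1 / tileSize));
        var t0 = Math.max(0, Math.floor(r.y0 / tileSize));
        var t1 = Math.min(rows - 1, Math.floor(r.y1 / tileSize));
        for (var row = t0; row <= t1; row++) {
            for (var col = c0; col <= c1; col++) {
                tiles[row * cols + col].push(index);
            }
        }
    });
    return tiles; // tiles[i] lists the indices of the lights touching tile i
}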

    A visualization of the number of lights in each tile is shown below. Black tiles have no lights intersecting them and white tiles have the most lights.


    Shaded version of tile visualization.

    Conclusion

WEBGL_draw_buffers is a useful extension for improving the performance of deferred shading in WebGL. Check out the live demo and our code on GitHub.

    Acknowledgements

    We implemented this project for the course CIS 565: GPU Programming and Architecture, which is part of the computer graphics program at the University of Pennsylvania. We thank Liam Boone for his support and Eric Haines and Morgan McGuire for reviewing this article.

  4. Write Elsewhere, Run on Firefox OS

    Over the last year we’ve been recruiting developers who’ve already built apps for various Open Web and HTML5 platforms: apps for native platforms built with PhoneGap, Appcelerator Titanium or hand-coded wrappers; HTML5 apps built for Amazon Appstore, Blackberry Webworks, Chrome Dev Store, Windows Phone, and WebOS; and C++ apps translated to JavaScript with Emscripten. We’ve supported hundreds of developers and shipped devices all over the world to test and demo Firefox Apps. If you have a well-rated, successful HTML5 app on any platform we urge you to port it to Firefox OS: We’re betting it will be worth your while.

    In November, Firefox OS launched in Hungary, Serbia, Montenegro and Greece. Earlier this month we launched in Italy. In 2014, we launch in new locales with new operators, with new devices in new form factors. We anticipate more powerful cross-platform tools in 2014, and closer integration with PhoneGap and Firefox Developer Tools. It is the best of times for first-mover advantage.

    Here’s a look at just a few successful ports. We spoke with skilled HTML5 app developers who’ve participated in “app port” programs and workshops and asked them to describe their Firefox app porting experience.

Run on Firefox OS: Captain Rogers, by Andrzej Mazur; Reditr, by Dimitry Vinogradov & David Zorycht; Aquarium Plants, by Diego Lopez

    Aquarium Plants (Android w/ hand-coded native wrapper)

    App: Aquarium Plants
    Developer: Diego Lopez
    Original platform:
    Google Play Store

    Time:

    Aquarium Plants is written with our own JavaScript code and simple HTML5 code: we just used two common JavaScript frameworks (Zepto.js and TaffyDB.js). It took me about 3 hours or less. Most of the time was reading forums and documentation.

    Recommendations:

    It is extremely easy to build/port apps for Firefox OS. It is great that we can use web tools and frameworks to create native apps for Firefox OS. I recommend that other developers start developing Firefox Apps; it is almost effortless and a great experience.

    Challenges:

    The only challenge I had was finding out how to build the package. It was amazing when I realized that building for Firefox OS was as easy as creating a zip with the app files and manifest.

    Calc (iOS w/ hand-coded native wrapper)

    App: Calc
    Developer: Markus Greve
    Original platform:
    iOS w/ hand-coded native wrapper

    Time:

It was really easy. That’s why I would sign the claim “Write Elsewhere, Run on Firefox OS.” I think it took about two evenings, with night #1 getting the app to run and night #2 doing some fine-tuning for the display size of the simulator and the Keon. Afterwards, another evening for the app icon and deployment to the Marketplace ;-).

    Recommendations:

I would recommend that developers give it a try to see how easy it really is. I think Mozilla.org is doing a great job and your support for developers is excellent. I already made a short presentation about how to create Firefox OS apps as documentation of my experiences. You can see it here: A Firefox OS app in five minutes (in English). There’s also a blog post called App Entwicklung fur den Feuerfuchs that documents my experience in German.

    Challenges:

    1. I use OSX 10.9 Mavericks, and the Simulator crashed with the old plugin and ran more stably with Firefox 1.26b and app-manager.
    2. The need to have a subdomain per app — it seems there’s an open issue with this.
    3. Data types in JSON Manifest (I tried “fullscreen”: true instead of “fullscreen”: “true”). My fault: Solved by sticking to the documentation.
    4. I have the feeling that the icon sizes are different in the documentation to the sizes really needed for the Marketplace. Also, it’s not sensible to offer a Photoshop-Template for a 60×60 pixel Icon when you need much bigger files elsewhere (256×256 for the store I think). It would be better if you provided a template for 1024 and the user then shrank the images afterwards.
5. Suggestion: The correlation between navigator.mozApps.checkInstalled() and .install() could be explained more comprehensively; see the sketch below.
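On that last point, a hedged sketch of how the two calls relate, assuming the navigator.mozApps API of the time (the manifest URL is illustrative):

var manifestUrl = 'https://example.com/manifest.webapp'; // illustrative
var checking = navigator.mozApps.checkInstalled(manifestUrl);
checking.onsuccess = function() {
    if (checking.result) {
        // checkInstalled() resolved to an app object: already installed.
    } else {
        // Not installed yet, so we may trigger the install prompt.
        var installing = navigator.mozApps.install(manifestUrl);
        installing.onerror = function() {
            console.log('Install failed:', installing.error.name);
        };
    }
};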

    Calcula Hipoteca (Amazon Appstore)

    App: Calcula Hipoteca
    Developer: Emilio Baena
    Original platform:
    Amazon Appstore

    Time:

It was easy. It took a few hours.

    Recommendations:

    You are doing good work with this platform.

    Challenges:

I had problems with manifest.webapp; I think I missed having more documentation about this type of file.
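For reference, a minimal manifest.webapp is just a small JSON file. This hedged example shows the common fields (the values are illustrative):

{
    "name": "Calcula Hipoteca",
    "version": "1.0",
    "description": "Mortgage calculator",
    "launch_path": "/index.html",
    "icons": {
        "128": "/img/icon-128.png"
    },
    "developer": {
        "name": "Emilio Baena",
        "url": "https://example.com"
    },
    "default_locale": "es"
}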

    Captain Rogers (HTML5 Desktop)

    App: Captain Rogers
    Developer: Andrzej Mazur
    Original platform:
    HTML5 Desktop

    Time:

    The whole process of porting Captain Rogers for the Firefox OS platform took me only two weeks of development with an additional two weeks of testing and bug-fixing. In the core concept the game was supposed to be simple so I could focus on the process of polishing it for Firefox OS devices and publishing it in the Firefox Marketplace.

    Challenges:

There weren’t any big problems, because basically building for Firefox OS is just building for the Web itself. Firefox OS devices are the long-awaited hardware for the mobile web — it is built using JavaScript and HTML5 after all. My main focus in making the game playable on the Firefox OS device was the manifest file. I had some problems with getting the orientation to work properly, but during the Firefox OS App Workshop in Warsaw I received great help from Jason Weathersby, and at the end of the day the first version of the game was working very smoothly; a few days later it was accepted into the Firefox Marketplace.

    For optimizing the code I recommend this handy article by Louis Stowasser and Harald Kirschner — it was very helpful for me. Also, the Web Audio API is still a big problem on mobile, but it works perfectly in Captain Rogers, launched on the Firefox OS phone.

    Recommendations:

    The main message is simple: if you’re building HTML5 games for Firefox OS, you’re building them for the Open Web. Firefox OS is the hardware platform that the Web needs and in the long run everybody will benefit from it, as the standards and APIs are battle-tested directly on the actual devices by millions of people.

If you want to know more about HTML5 game development I can recommend my two creations: my Preparing for Firefox OS article and HTML5 Gamedev Starter list. You should also read this great introduction by Austin Hallock. The community is very helpful, so if you have any problems or issues, check out the HTML5GameDevs forums for help and you will definitely receive it. It’s a great time for HTML5 game development, so jump in and have fun!

    Cartelera Panama (Appcelerator Titanium)

    App: Cartelera Panama
    Developer: Demostenes Garcia
    Original platform/wrapper:
    Appcelerator Titanium

    Time:

    It was pretty easy to get up to speed on the development flow for Cartelera. The development for Firefox OS is easy, since it’s well documented and HTML5 + JavaScript makes it *really* easy to make apps.

    Challenges:

Our UX/UI team at Pixmat Studios is very centered on usability, and that’s why we chose Titanium over PhoneGap (better user experience), so our main challenge was to make it look like a real Firefox OS application. In the end, we decided to go with real Firefox OS Building Blocks, instead of using Titanium for the UI. We decided to:

• Extract all the business logic for the app (which handles the schedules, movies, listings, etc.) into its own Git repository/module. We then reuse everything on Titanium and Firefox OS, using the native user experience for all three platforms we now support.
• Another issue we had was implementing an OAuth client on our phone in a good way. However, Jason Weathersby pointed us to good resources on that matter.

    Recommendations:

    Firefox OS and the environment (debug, etc) is easy for a regular web developer, so we didn’t need extra knowledge for making real mobile apps.

• Use Firefox OS Building Blocks for UI. It’s easy to get a native UI experience with no fuss.
    • Firefox OS at the Mozilla MDN will be your best friend :-).
    • GitHub has some good projects to learn from, so create an account (if you still don’t have one) and look for demo projects.

    Fresh Food Finder (PhoneGap)

App: Fresh Food Finder
    Developer: Andrew Trice
    Original platform/wrapper:
    PhoneGap

    Time:

    PhoneGap support is coming for Firefox OS, and in preparation I wanted to become familiar with the Firefox OS development environment and platform ecosystem. So… I ported the Fresh Food Finder, minus the specific PhoneGap API calls. The best part (and this really shows the power of web-standards based development) is that I was able to take the existing PhoneGap codebase, turn it into a Firefox OS app AND submit it to the Firefox Marketplace in under 24 hours!

    Recommendations:

Basically, I commented out the PhoneGap-specific API calls, added a few minor bug fixes, and added a few Firefox OS-specific layout/styling changes (just a few minor things so that my app looked right on the device). Then you put in a manifest.webapp configuration file, package it up, and submit it to the Marketplace. If you’re interested, you can check out progress on Firefox OS support in the Cordova project, and it will be available on PhoneGap.com once it’s actually released. (Editor’s note: Look for release news in early 2014.)

    Picross (WebOS)

    App: Picross
    Developer: Owen Swerkstrom
    Original platform:
    WebOS

    Time:

    Picross was installed and running on FxOS within minutes of getting my hands on a device.

    Challenges:

    I did have to make some touch API tweaks, but that just meant some quick reading.

    Recommendations:

    My favorite quote was “You’re not porting to FirefoxOS, you’re porting to the Web!” Also, Firefox OS (and Firefox browser) Canvas support is sensational. My next game will use Canvas, for sure. In my case, the “porting” path was really simple. My game already ran in a browser, and it had originally been designed to run on the Palm Pre, a device with the exact same resolution as the lowest-end FirefoxOS phone. But the concept of installing an app involves nothing more than visiting a website: that’s what really made it painless, and that is what’s really unique about FirefoxOS. It has an “official” app marketplace, but unlike every other smartphone ever made, it doesn’t _need_ one! As a user, this is liberating. I can install any web app, from any site I want. As a developer, the story’s even better. I don’t even need Mozilla’s permission to publish an app on this platform. I don’t need to put my device into a special developer mode, or buy any toolkit, or build on any specific platform. If I can put up a website, I’m good to go.

    Even though I just called it “unnecessary”, the Firefox Marketplace has given me nothing but good experiences. I don’t need to bundle up, sign, and upload any package — I just enter a URL and type up some descriptive blurbs. Review times have always been short, and are even quicker now than when I started. When I submitted Halloween Artist, it was approved almost immediately, and the reviewer suggested that I write up a blog entry for Mozilla Hacks about how I’d put it together. This is both a traditional “marketplace” with the featured apps etc, and a community of geeks celebrating interesting code and novel ideas. That is absolutely the world I want my apps to live in.

    Before I went to the Mozilla workshop, I was considering what approach to take for an action/adventure game that I’m still working on. I was torn between writing some C OpenGL code, waiting for the not-yet-released-at-the-time SDL 2.0, or, as a longshot, doing at least some prototyping in HTML5 canvas. After learning that Canvas operations on a FirefoxOS phone basically go right to the framebuffer, and after trying some experiments and seeing for myself the insane performance I could get out of these cheap little ARM devices, I knew I’d be ditching C for the project and writing the game as a web app. I’d highly recommend investigating Canvas for your next game, even if you’ve mostly done Flash or SDL etc. before.

    Reditr (Chrome Dev Store)


    App: Reditr
    Developer: Dimitry Vinogradov & David Zorycht
    Original platform:
    Chrome Dev Store

    Time:

    It took us about a week to port everything over from Chrome to Firefox OS.

    Challenges:

One challenge we faced when creating Reditr for Firefox OS was the content security policy (CSP). One of the great things about web applications for users is that they can see an entire overview of the permissions that a given app requires. For developers though, it can take some time to learn how to create a content security policy. For Reditr, it took us several days of trial and error to learn how the security policy affected our app.

    Recommendations:

Make sure you understand how your app deals with the content security policy. Check to see if the APIs you use have OAuth support, since this can help remedy CSP issues.

Speed Cube Timer (Blackberry Webworks)


    App: Speed Cuber Timer
    Developer: Konstantinos Mavrodis
    Original platform:
    Blackberry Webworks

    Time:

    It took about half an hour to make the exported web app compatible with Firefox OS. (Insanely fast, don’t you think? :) )

    Challenges:

No real challenges here that I can recall. Everything was almost a piece of cake!

    Recommendations:

My recommendation to fellow devs would be to port their HTML5 apps to Firefox OS as soon as possible! Porting a web app has never been easier. Firefox OS is a promising OS that shows the great power of web development!

    Squarez (C++)


    App: Squarez
    Developer: Patrick Nicolas
    Original platform: C++ with Emscripten

    Emscripten and C++ usage:

I have always preferred statically typed and compiled languages as they fit better with my way of organizing software. There are several options with Emscripten, and I decided to write the game logic in C++ and the user interface in HTML/CSS/JavaScript. All the logic code is in C++, and for multiplayer mode the server can reuse everything, with only roughly 500 additional lines of code.

    Time:

The game has always been a web project: the initial development, from scratch, took nearly a month. Porting it to be a Firefox application only took a couple of days, mostly to make the manifests and icons. Performance tweaking has been a complex task and took nearly as much time as the rest of development.

    Challenges:

Squarez is the first game I have ever written, and it is the first time I have used Emscripten; that was the first challenge. Writing a web application is not really a challenge any more, because good quality documentation is available everywhere. The second challenge was tweaking performance: once I received the Geeksphone Keon, I saw that the game was so slow it was almost unplayable. I used performance tools from native compiled code and from both desktop Firefox and Chromium to find the bottlenecks.

Because of Emscripten’s JavaScript generation, it was not very easy to track down the faulty functions; I had to deal with partially mangled names in the profile view. The following step was to optimize CSS, and I only found tools to check time spent in selector processing. It is impossible to know which specific animation is taking time, so I had to use very manual methods: disabling individual animations, guessing what was taking time, and changing it blindly.

    Recommendations:

My recommendation for other developers is to choose technologies they like; I wanted structured and statically typed code, so Emscripten/C++ was a great solution. I would also discourage creating an application that only works on Firefox OS; instead, take advantage of standards to make it available to any browser, and use Web APIs to enhance the application. The code repository is available at squarez.git. Everything is GPLv3+, fork it if you want.

    Touch 12i (Windows Phone)


    App: Touch 12i
    Developer: Elvis Pfutzenreuter
    Original platform:
    HTML5 for Windows Phone (and other platforms)

    Time:

    It took one day to do the bulk of the work, and another day for final details, including registration in Marketplace. Later, I added payment receipt checking which took me the best part of a day since it needs to be well tested. But this effort could be reused readily for another app (Touch 11i).

    Challenges:

In general, porting was very easy. I put it to work in one day. The biggest problem was the viewport, which works differently on the other platforms (all WebKit-based). For Firefox, I had to use a CSS transform-based approach to fit the calculator on screen. Touch 12i is an HTML5 app that runs on Windows Phone, and it works the same way for Android and iOS too: a “shell” of native code with a web component that cradles the main program. Currently the Firefox version is the only one that is “purely” HTML5. (I used to have a pure web-based version for iOS as well, but this model has been toned down for iOS.) I’ve written a detailed blog post called Playing With Firefox OS, about my experience participating in the Phones for App Ports program.
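A hedged sketch of such a transform-based fit (the 320px design width is an assumption, not the app’s actual value):

// Scale a fixed-width layout to the actual screen width.
var designWidth = 320; // assumed design width in CSS pixels
var scale = window.innerWidth / designWidth;
document.body.style.transformOrigin = '0 0';
document.body.style.transform = 'scale(' + scale + ')';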

    And that’s a wrap. Thanks for reading this far. If you’re a Firefox OS App porter, a web developer or simply a curious reader, we’d love to hear from you. Leave a comment or send us a note at appsdev@mozilla.com.

  5. Ember Inspector on a Firefox near you

    … or Cross-Browser Add-ons for Fun or Profit

Browser add-ons are clearly an important web browser feature, at least on the desktop platform, and for a long time Firefox was browser add-on authors’ preferred target. When Google launched Chrome, this trend in the desktop browser domain was already clear, so their browser provides an add-ons API as well.

Most of the web devtools we are used to are now directly integrated into our browsers, but they were add-ons not so long ago, and it’s not strange that new web developer tools are born as add-ons.

Web devtools (integrated or add-ons) can motivate web developers to change their browser, and web developers can in turn push web users to change theirs. So, long story short, it would be interesting and useful to create cross-browser add-ons, especially web devtools add-ons (e.g. to help preserve the neutrality of the web).

    With this goal in mind, I chose Ember Inspector as the target for my cross-browser devtool add-ons experiment, based on the following reasons:

    • It belongs to an emerging and interesting web devtools family (web framework devtools)
    • It’s a pretty complex / real world Chrome extension
    • It’s mostly written in the same web framework by its own community
    • Even if it is a Chrome extension, it’s a webapp built from the app sources using grunt
    • Its JavaScript code is organized into modules and Chrome-specific code is mostly isolated in just a couple of those

Plan & Run Porting Effort

      Looking into the ember-extension git repository, we see that the add-on is built from its sources using grunt:

      Ember Extension: chrome grunt build process

      The extension communicates between the developer tools panel, the page and the main extension code via message passing:

      Ember Extension: High Level View

      Using this knowledge, planning the port to Firefox was surprisingly easy:

      • Create new Firefox add-on specific code (register a devtool panel, control the inspected tab)
      • Polyfill the communication channel between the ember_debug module (that is injected into the inspected tab) and the devtool ember app (that is running in the devtools panel)
• Polyfill the missing non-standard inspect function, which opens the DOM Inspector on a DOM element selected by a defined Ember View id
      • Minor tweaks (isolate remaining Chrome and Firefox specific code, fix CSS -webkit prefixed rules)

      In my opinion this port was particularly pleasant to plan thanks to two main design choices:

• Modular JavaScript sources, which help keep browser-specific code encapsulated in replaceable modules
• The devtool panel and the code injected into the target tab collaborate by exchanging simple JSON messages, and the protocol (defined by this add-on) is totally browser agnostic

      Most of the JavaScript modules which compose this extension were already browser independent, so the first step was to bootstrap a simple Firefox Add-on and register a new devtool panel.

Creating a new panel in the DevTools is really simple, and there are some useful docs about the topic on the Tools/DevToolsAPI page (work in progress).

      Register / unregister devtool panel

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/main.js
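A hedged sketch of what that registration looked like with the gDevTools API of that era (the id, file names, and panel class are illustrative, not the add-on’s actual code):

const self = require("sdk/self");
const { Cu } = require("chrome");
const { gDevTools } = Cu.import("resource:///modules/devtools/gDevTools.jsm", {});

gDevTools.registerTool({
    id: "ember-inspector",                       // illustrative id
    label: "Ember",
    url: self.data.url("devtool-panel.html"),    // illustrative panel document
    isTargetSupported: function (target) {
        return target.isLocalTab;
    },
    build: function (iframeWindow, toolbox) {
        // Wire the panel up here; see devtool-panel.js in the repository.
        return new EmberInspectorPanel(iframeWindow, toolbox); // illustrative
    }
});

// And on add-on unload:
// gDevTools.unregisterTool("ember-inspector");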

      Devtool panel definition

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L26

Then, moving to the second step, we adapt the code used to create the message channels between the devtool panel and the injected code running in the target tab, using content scripts and the low-level content worker from the Mozilla Add-on SDK, which are well documented in the official guide and API reference:

      EmberInspector - Workers, Content Scripts and Adapters

      DevTool Panel Workers

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js

      Inject ember_debug

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js
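A rough sketch of this wiring (the SDK module names are real; the file paths, event name, and panel object are illustrative):

const self = require("sdk/self");
const tabs = require("sdk/tabs");

// Inject ember_debug into the inspected tab as a content script...
let worker = tabs.activeTab.attach({
    contentScriptFile: self.data.url("ember_debug/ember_debug.js")
});

// ...and relay its JSON messages to the devtool panel side.
worker.port.on("emberDebugMessage", function (msg) {
    panel.postMessage(msg); // panel: the devtool panel side, illustrative
});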

Finally, hook up the browser-specific code needed to activate the DOM Inspector on a defined DOM element:

      Inspect DOM element request handler

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L178

      Evaluate its features and dive into the exchanged messages

At this point one could wonder: how useful is a tool like this? Do I really need it?

I must admit that I started and completed this port without being an experienced EmberJS developer. But to check that all the original features worked correctly on Firefox, and to really understand how this browser add-on helps EmberJS developers during the app development and debugging phases (its most important use cases), I started to experiment with EmberJS. I have to say that EmberJS is a very pleasant framework to work with, and Ember Inspector is a really important tool to put in our tool belts.

I’m pretty sure that every medium or large sized JavaScript framework needs this kind of devtool. Clearly it will never be an integrated one, because it’s framework-specific, so we will be getting used to this new family of devtool add-ons from now on.

      List Ember View, Model Components and Routes

The first use case is being able to immediately visualize the Routes, Views/Components, Models and Controllers our EmberJS app instantiates for us, without too much webconsole acrobatics.

This is immediately evident when we open the panel on an EmberJS app active in the current browser tab:

      Ember Inspector - ViewTree

      Using these tables we can then inspect all the properties (even computed ones) defined by us or inherited from the ember classes in the actual object hierarchy.

Using an approach very similar to the Mozilla Remote Debugging Protocol from the integrated DevTools infrastructure (even when we use devtools locally, they exchange JSON messages over a pipe), the ember_debug component injected into the target tab sends the info it needs about the instantiated EmberJS objects to the devtool panel component, each identified by internally generated reference IDs (similar to the grips concept from the Mozilla Remote Debugging Protocol).

      Ember Extension - JSON messages

      Logging the exchanged messages, we can learn more about the protocol.

      Receive updates about EmberJS view tree info (EmberDebug -> DevtoolPanel):

      Request inspect object (DevtoolPanel -> EmberDebug):

Receive updates about the requested object info (EmberDebug -> DevtoolPanel):

      Reach every EmberJS object in the hierarchy from the webconsole

A less evident but really useful feature is “sendToConsole”: the ability to reach, from the webconsole, any object or property that we can inspect in the tables described above.

      When we click the >$E link, which is accessible in the right split panel:

      Ember Inspector - sendToConsole

The Ember devtool panel asks ember_debug to put the selected object/property into a variable named $E, accessible globally in the target tab; then we can switch to the webconsole and interact with it freely:

      Ember Inspector - sendToConsole

      Request send object to console (DevtoolPanel -> EmberDebug):

      Much more

These are only some of the features already present in Ember Inspector, and more are coming in upcoming versions (e.g. logging and inspecting Ember Promises).

If you already use EmberJS, or if you are thinking about trying it, I suggest you give Ember Inspector a try (on Firefox or Chrome, whichever you prefer); it will turn inspecting your EmberJS webapp into a fast and easy task.

      Integrate XPI building into the grunt-based build process

The last challenge on the road to a Firefox add-on fully integrated into the ember-extension build workflow was integrating XPI building, for an add-on based on the Mozilla Add-on SDK, into the grunt build process:

Chrome crx extensions are simply ZIP files, as are Firefox XPI add-ons, but Firefox add-ons based on the Mozilla Add-on SDK need to be built using the cfx tool from the Add-on SDK package.

If we want more cross-browser add-ons, we have to help developers build cross-browser extensions using the same approach used by ember-extension: a webapp built using grunt which runs inside a browser add-on (which provides glue code specific to the various browsers supported).

      So I decided to move the grunt plugin that I’ve put together to integrate Add-on SDK common and custom tasks (e.g. download a defined Add-on SDK release, build an XPI, run cfx with custom parameters) into a separate project (and npm package), because it could help to make this task simpler and less annoying.

      Ember Extension: Firefox and Chrome Add-ons grunt build

      Build and run Ember Inspector Firefox Add-on using grunt:

      Following are some interesting fragments from grunt-mozilla-addon-sdk integration into ember-extension (which are briefly documented in the grunt-mozilla-addon-sdk repo README):

      Integrate grunt plugin into npm dependencies: package.json

      Define and use grunt shortcut tasks: Gruntfile.js

      Configure grunt-mozilla-addon-sdk tasks options
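Sketching those pieces together, a Gruntfile excerpt might look like this (task and option names as I recall them from the grunt-mozilla-addon-sdk README; treat them as assumptions and check the repo):

// Gruntfile.js (excerpt)
grunt.loadNpmTasks("grunt-mozilla-addon-sdk");

grunt.initConfig({
    "mozilla-addon-sdk": {
        // Download a defined Add-on SDK release.
        "1_14": { options: { revision: "1.14" } }
    },
    "mozilla-cfx-xpi": {
        // Build the XPI with cfx from the Firefox add-on dir.
        "stable": {
            options: {
                "mozilla-addon-sdk": "1_14",
                extension_dir: "dist_firefox",
                dist_dir: "tmp/xpi"
            }
        }
    }
});

grunt.registerTask("build-xpi", ["mozilla-addon-sdk", "mozilla-cfx-xpi"]);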

      Conclusion

Thanks especially to help from the EmberJS/Ember Inspector community and its maintainers, the Ember Inspector Firefox add-on is officially merged and integrated into the automated build process, so now we can use it on Firefox and Chrome to inspect our EmberJS apps!


      In this article we’ve briefly dissected an interesting pattern to develop cross-browser devtools add-ons, and introduced a grunt plugin that simplifies integration of Add-on SDK tools into projects built using grunt: https://npmjs.org/package/grunt-mozilla-addon-sdk

Thanks to the same web-first approach Mozilla is pushing in the Apps domain, creating cross-browser add-ons is definitely simpler than we thought, and we all win :-)

      Happy Cross-Browser Extending,
      Luca

  6. The Mozilla Developer Network has a New Face

Last summer the Mozilla Developer Network (MDN) underwent a massive platform change, moving from a hosted third-party solution to our own custom Django application code-named Kuma. That move laid the groundwork for our latest major MDN upgrade: a complete front-end redesign, including many new features as well as usability and accessibility enhancements. Let me provide you with a quick overview of what you can expect to see on the new MDN and what features we’re cooking up for the future!

    New MDN Features

    Increased Commitment to Search

    The majority of MDN users are looking to find documentation the moment they land on the MDN homepage, so we’ve placed search front and center:

    We’ve also added search filters to the mix, allowing users to narrow down search results to their specific needs:

    From a technical perspective, we’ve moved to Elasticsearch for search, allowing us to continue making indexing and filtering improvements, as well as add new search features at will. We anticipate fine-tuning search as we receive feedback so we’ll continue the push to get you to better documentation faster.

    Ease in Navigation

    Getting from document to document was a pain point in the previous design, so we’ve fixed that in two ways. The first was creating Content Zones, a method for creating navigation for a given topic. We’ve started with the most prominent parts of MDN, including App Center, Firefox, Firefox OS, Firefox Marketplace, Add-ons, and Persona:

    Content Zones

    MDN’s new Content Zones provide a complete collection of documentation about a given topic, encompassing the very basics of a topic to API details and advanced techniques. We’ll be kicking off with the following zones:

    Firefox OS

    Highlights of the Firefox OS zone include:

    • A detailed Platform Guide
    • Build and Install details
    • Hacking Firefox OS
    • App Design & Development

Firefox Marketplace

    Highlights of this zone include:

    • App submission and review
    • App publishing and monetization
    • Marketplace API information

App Center

    Highlights of the App Center zone include:

    • Quickstart Guide
    • Design and Build tips
    • App publishing guidelines
    • API references

Persona

    Highlights of the Persona zone include:

    • Guide to using Persona on your site
• Becoming an identity provider
    • Details on the Persona project

Firefox

    Highlights of the Firefox zone include:

    • A complete Firefox Add-on overview
    • Information on Firefox internals
    • Detailed instructions for building Firefox and contributing

Add-ons

    Highlights of the Add-ons zone include:

    • XUL extension information
    • Best practice tips
    • Theming
    • Add-on publishing guidelines

    “See Also” Links

    We’ve also implemented “See Also” links which may appear in any wiki page, linking to documents which may be relevant based on the document you’re currently viewing.

    Both the zone subnavigation and “See Also” link sidebar widgets are built from basic link lists in the wiki document, so adding links and shuffling navigation is easy for anyone looking to contribute to MDN. These link lists can also be built using MDN’s macro language, Kumascript, and our writing team has done a great job automating “See Also” links so that contributors can save on the manual labor of hunting down other relevant documents.

    Top level navigation

    In the top level navigation, you will have access to five distinct areas:

    • The above-mentioned Content Zones
    • Web Platform, including direct links to more information on technologies, references and guides
    • Developer Program – To be able to help developers and establish long-term relationships and channels, we have created the Mozilla Developer Program. We have a lot of plans and ideas for iteratively expanding the Program, and we want you involved as we do so! So, sign up! You will get a membership, be able to subscribe to our newsletter and get access to features over time as we roll them out.
    • Tools – more information on the Firefox Developer Tools and their features
    • Demos, being a direct link to the Demo Studio

    Enhanced Kumascript Macro Features

    Kumascript, MDN’s dynamic macro language, was also outfitted with the ability to read external RSS feeds. At present MDN is using the feed reader capability to pull forum posts from StackOverflow and blog posts from the Mozilla Hacks blog. Check out the MDN:Common macro to view the fetchJSONResource and fetchHTTPResource methods which aid in displaying feed content in wiki documents.

    Future Features

    This visual redesign is just the beginning of our push to make MDN more dynamic and usable. The MDN development and UX teams have plenty more coming in 2014. Here are a few peeks into what you can expect to see!

    Dynamic Search Filtering

To make search more efficient, we plan to implement hashtag-prefixed text filters that can be added to the initial search query — doing so avoids the need for additional filtering once the user lands on the search results page.

    Holly Habstritt Gaal has described this query system in detail on her blog; check out her post to see the implementation details.

    Docs Navigator

    So you’ve completed a search and clicked the first link that looked applicable, but now you want to move on and view other results. Instead of backing out to the search results page again, the wiki page (if you came from search) will display a doc navigator to move to the next or previous result, or you can view the entire list of results from your original search.

    Just another handy way of finding what you need faster!

    Demo Studio and Dev Derby Redesign

    A redesign to the MDN Demo Studio and Dev Derby will be coming shortly. We have an excellent design in review and we hope to roll that out in early 2014.

    If you have a suggestion or find any bugs within the new MDN, please let us know.

    Look forward to more from MDN in 2014 and beyond. The MDN platform promises to expand and improve the way we view, write, and experience documentation and web technologies!

  7. How the Manana app was built

    We saw Firefox OS as a great opportunity, and a challenge: to deliver a product true to the values of the open web and best standards. We found it exciting to deliver an app that offers a smooth UX even for people using lower-end devices. As many of the users have very limited data plans, we decided that a proper reading app, offering the possibility to enjoy their favourite articles offline and without unnecessary distractions, would be a key product. And so we decided to build Manana.

    Here is the story of how we did it.

    On the technology side, we decided to go vanilla, but with a little help from ZeptoJS to handle the DOM manipulation. We also integrated Web Activities to hook into the sharing options in the browser and to share the selected links via email. We are also using the Readability API to get a nicely formatted version of each link. The app is simple, and that’s how we wanted it to be, but it has a great plus: it is internationalized in English, Spanish, Portuguese, Polish, German and Italian, with Hungarian and Greek coming soon.

    You can install Manana from the Firefox Marketplace.

    Building the layout and managing transitions between the views

    To provide a branded UI/UX, we use a small framework built in-house. With some clever CSS properties and a few lines of JavaScript (to handle the visibility of the Views), the whole thing was quite easy to set up. Basically, we have each View arranged like cards in a deck. Every time we need one of them, we pull it out while the old one goes back into the deck (becoming hidden). When the animation ends, we capture the animationend event; right after that, we remove the class that carries the animation properties and at the same time add a current class, which lets the card stay in place and remain visible. The old one is hidden automatically, since that’s its default state.

    Below you will find a small example of how it works with a moveRight animation.

    Showing the way Views move
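
    As a rough illustration, here is a minimal sketch of the class-swapping logic described above (the function, class and element names are hypothetical, not Manana’s actual code):

    function showView(newView, oldView, animationClass) {
        // Pull the new card out of the deck with an animation class.
        newView.classList.add(animationClass);
        newView.addEventListener("animationend", function handler() {
            newView.removeEventListener("animationend", handler);
            newView.classList.remove(animationClass); // drop the animation properties
            newView.classList.add("current");         // keep the new card in place and visible
            oldView.classList.remove("current");      // the old card falls back to its hidden default state
        });
    }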

    Optimize CSS3 transitions

    Every smartphone comes with at least a small GPU, so the CPU doesn’t have to do all the graphical calculations. It’s very much in fashion to take advantage of hardware acceleration, and so we do. Some CSS3 properties are automatically handled by the GPU while others are not; the GPU-accelerated ones are usually those related to 3D. This is because the element (tied to the animated property) is promoted to a new layer that doesn’t suffer from repaints imposed by other properties (like left, for example; this may change in the future).

    tl;dr: where you can, avoid expensive CPU work, especially on big or complicated elements. Here is a small extract showing how we implemented such properties.

    /* moveLeftIn directive */
    .moveLeftIn {
        animation: 0.5s moveLeftIn cubic-bezier(0.25, 0.1, 0.25, 1) forwards; /* 1; the bezier values here are placeholders, the original left them empty */
        backface-visibility: hidden; /* 2 */
        transform: translateY(0) translateX(99%); /* 3 */
        /* 1. `forwards` lets the animation stop at the last keyframe */
        /* 2. avoid some possible glitches caused by 3D rendering */
        /* 3. take advantage of the GPU with 3D-related properties */
    }
     
    /* moveLeftIn animation */
    @keyframes moveLeftIn {
        0%   { transform: translateY(0) translateX(99%); z-index: 999; }
        100% { transform: translateY(0) translateX(0);   z-index: 999; }
    }

    Hooking into the Firefox OS-specific Web Activities

    One of our favourite WebAPIs is Web Activities. An application emits an activity (for example, save-bookmark), then the user is prompted to choose among the other applications that handle that activity. At the moment it is possible to register an application as an activity handler through the app manifest (declarative registration), but in the future this could be done inside the app itself (dynamic registration).

    Saving a link from the browser to Manana is an example of a Web Activity (save-bookmark). Registering an activity is trivial:

    manifest.webapp

    "activities": {
        "save-bookmark": {
          "disposition": "inline",
            "index.html#/savebookmark",
            "returnValue": true
        }
    },

    returnValue is of great importance; without it, the handler application (Manana in our case) will never return to the caller (the Browser). The handling in the app is also very easy:

    navigator.mozSetMessageHandler("activity", function(activity) {
        // ... check that the activity is save-bookmark

        var data = activity.source.data;

        // ... do stuff with the activity data

        activity.postResult("saved");
    });

    The applications included on your phone are part of Gaia, the UI of Firefox OS, and you can actually browse the source. They provide a lot of examples of how to use the WebAPIs. If you are interested in the save-bookmark activity, have a look at the Homescreen app source code.

    Manana users can also share their resources via email, directly from the app. To achieve that, we’ve implemented the email Web Activity. It’s easy to pre-populate the email subject and body: just follow the mailto URI scheme and you are done.

    DB.Get(id, function (rs) {
        var subject       = rs.title,
            body          = rs.url,
            shareActivity = new MozActivity({
                name: "new",
                data: {
                    type: "mail",
                    url : "mailto:?body=" + encodeURIComponent(body) +
                          "&subject=" + encodeURIComponent(subject)
                }
            });
    });

    Retrieving and storing data with IndexedDB

    IndexedDB is a client-side NoSQL DB. It has some nice features that make it easier to deal with app updates, and it is easy to integrate into your app, even if you’re using a framework like AngularJS. Setting up models is fairly easy and extensively documented on the web and in the Gaia apps.

    You don’t have a quota limit, although the Firefox OS Simulator seems to have one when dealing with datasets larger than 5 MB. Each database is tied to an app and a version. Only one version can run at a time, which makes it easier not to break compatibility, especially if you rely on some external API.

    Since it is asynchronous, dealing with IndexedDB operations requires callbacks. Below is an example of how we deal with the action of adding a link:

    1. Call the Link.Add function, passing a link object and a success callback
    2. Link.Add gets a DB connection by calling getDB
      • If the DB does not exist yet, the onupgradeneeded callback is executed to:
        • Create the DB
        • Create the collections
        • Create the indexes
      • getDB checks if the connection is already open; otherwise it opens a new one
    3. Link.Add fills in the missing properties of the link with default values
    4. We call the objectStore put method to save/update the data in the collection
      • If the Link object we are going to save already contains a primary key attribute, IndexedDB does an update
      • Otherwise, it does an insert
    5. If the put method is successful, it calls a success callback, returning the primary key in event.target.result

    Below is an example of a DB action defined in our models (for the sake of brevity, we omitted the error-checking code):

    Add: function(data, onsuccess) {
        getDB(function(db) {
                var transaction = db.transaction(["links"], "readwrite"),
                    objectStore = transaction.objectStore("links"),
                    request = objectStore.put(data);

                request.onsuccess = function(e) {
                    onsuccess(e.target.result); // e.target.result is the id of the new Link
                };
        });
    },

    This is how you call it from other parts of the app:

    var newLink = {url: "http://zombo.com",
                   title: "Welcome to zombo com"};
    Link.Add(newLink, function(linkId) {
        window.location.hash = "#/read/:" + linkId;
    });
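
    The getDB helper referenced above isn’t shown in the article; here is a minimal sketch of what it might look like, with hypothetical database, collection and index names:

    var db = null;

    // Reuse an open connection, or open (and, on first run, create)
    // the database before handing it to the callback.
    function getDB(onready) {
        if (db) {
            onready(db);
            return;
        }

        var request = indexedDB.open("manana", 1); // name and version are assumptions

        // Runs only when the DB does not exist yet (or its version changed):
        // create the collection and its indexes.
        request.onupgradeneeded = function(e) {
            var store = e.target.result.createObjectStore("links",
                { keyPath: "id", autoIncrement: true });
            store.createIndex("url", "url", { unique: true });
        };

        request.onsuccess = function(e) {
            db = e.target.result;
            onready(db);
        };
    }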

    Enhance localization with Markdown support

    Localization is a key feature of our app. Manana currently supports 6 languages (soon it will be 8). There is no official recommendation, but after some research we decided to use the same technique Gaia uses. This is a great thing about working with Firefox OS: the whole source code is there, and you can get inspiration by reading it.

    We used l10n in two different cases: to translate the text content of DOM elements and to translate the internal help pages. To keep the localization process as simple as possible, we avoided putting HTML tags in the strings and in the internal help pages; we used markdown instead. By design, we avoided having strings in the JavaScript code; all messages and alerts are contained in the HTML.

    To support markdown we added some glue between the l10n.js library and marked.js.
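
    The article doesn’t show that glue, but conceptually it boils down to running a translated string through marked before injecting the result. A minimal sketch, assuming Gaia’s navigator.mozL10n API and a hypothetical string id:

    // Fetch a translated string by id via l10n.js, render its markdown
    // with marked.js, then inject the resulting HTML.
    function localizeMarkdown(element, stringId) {
        var raw = navigator.mozL10n.get(stringId); // stringId is hypothetical
        element.innerHTML = marked(raw);
    }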

    To load the help resources, Manana first does a lookup in the current installation for a file named <resourceName>.<locale>.md. If the lookup fails, the fallback is the English version of the resource. Once the file is retrieved successfully, the content is passed to the marked function to generate the HTML, which is then added to the DOM.

    function resourceInit($oldView, $newView, resource) {
        var fallback  = "/locales/resources/" + resource + ".en.md",
            localized = "/locales/resources/" + resource + "." + getLanguage() + ".md",
            renderMd  = function (txt) {
                $(".js-subpage", $newView).html(marked(txt));
            },
            renderError  = function (txt) {
                $(".js-subpage", $newView).text("ouch, cannot find: " + resource);
            };
     
        getFile(localized,
                renderMd,
                function () { getFile(fallback, renderMd, renderError); });
    }

    A note on retrieving local files with an XMLHttpRequest

    Retrieving files asynchronously from your local app is as easy as making an XMLHttpRequest, but there is one case that took us some time to sort out. If the local file you are retrieving is missing, the request won’t fire the success callback (of course), but it won’t fire the error callback either. It will just hang there forever.

    After some research, we noticed a try { ... } catch block in the l10n.js library. Basically, “in Firefox OS with the app:// protocol, trying to XHR a non-existing URL will raise an exception.”

    function getFile (url, success, error) {
        var xhr = new XMLHttpRequest();
     
        xhr.open("get", url, true);
        xhr.setRequestHeader("Content-Type", "text/plain;charset=UTF-8");
     
        xhr.onreadystatechange = function() {
            if (xhr.readyState == 4) {
                if (xhr.status == 200 || xhr.status === 0)
                    success(xhr.responseText);
                else
                    error && error();
            }
        };
     
        xhr.onerror = error;
        xhr.ontimeout = error;
     
        // Here it is!
        try {
            xhr.send(null);
        } catch (e) {
            error && error();
        }
    }

    Keeping the UI synchronized with the Database

    One of the most challenging aspects of an HTML app is the synchronization of state between a data source (DB, file, API, etc.) and the UI. Frameworks like Backbone and Angular make your life easier, but they are not easy beasts to tame. Going vanilla is fun, mostly because JavaScript gives you a lot of flexibility in deciding how to do things.

    We really like OO programming by message passing, so we came up with a message-based architecture. The body element acts as our global dispatcher, while the view’s root element is the local dispatcher. Global events get registered when the application starts and live as long as the app. Local events get registered when entering a view and live until we leave that view.

    An example of global events are the CRUD operations we want to perform on our model.

    Each action on the UI which changes the state of models is tied to a global event, registered on document.body. On entering a new view, we call a constructor function, which executes some initialization code.

    var events = {
            // global events are never unbound
            global: {
                "save-link": function() { ... },
                "remove-link": function() { ... },
                ...
     
                "update-link-collecetion": function { ... },
     
                ...
            },
     
            // view-specific events are unbound every time we leave that view
            home: {
                "update-link-collection-complete": function(e) {
                    var $caller = e.detail.$caller;
     
                    ...
                }
            },
     
            ...
    }
     
    function homeInit($oldView, $newView, dispatch) {
            // $oldView is the previous view
            // $newView is the current view
     
            // dispatch is a helper which wraps
            // document.body.dispatchEvent(new CustomEvent(...))
            dispatch("update-link-collection");
    }

    Leaving a view calls a destructor on that view, which unbinds all the local events. At the end of the day, we can open the app source code without spending hours trying to figure out how things work because of framework-related black magic.
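
    As an illustration, here is a minimal sketch of such a dispatch helper, together with the binding and unbinding of local events (the function names are hypothetical):

    // Fire a global event on the body element (the global dispatcher).
    function dispatch(name, detail) {
        document.body.dispatchEvent(new CustomEvent(name, { detail: detail || {} }));
    }

    // Bind a view's local events on entering it, and unbind them on leaving.
    function bindEvents(eventMap, element) {
        Object.keys(eventMap).forEach(function(name) {
            element.addEventListener(name, eventMap[name]);
        });
    }

    function unbindEvents(eventMap, element) {
        Object.keys(eventMap).forEach(function(name) {
            element.removeEventListener(name, eventMap[name]);
        });
    }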

    Integration with git

    When it comes to Git, our hearts melt. We love git. Seriously. Git and Vim are the best pieces of software ever written. Love.

    Ok, let’s get serious again.

    Our application is composed of two components: the app itself and an API server. The API server is a Golang proxy to the Readability API. So our app directory layout looks like this:

    manana/
     |- distrib.sh
     |- srv/
     |- app/

    distrib.sh is a simple bash script which performs three simple tasks:

    • Install a post-{commit, merge, rebase} hook, if not already installed
    • Update app/js/revision.js with the current git revision hash.
    • Create a clean zip containing the app, ready for the app store or your device.

    This makes our lives easier.

    Conclusion. What Next?

    Manana relies on the Readability API to get the stripped-down version of a web page. Readability does a really good job, but we’d like to experiment with other solutions, like parsing the page directly on the device. It’s a hard job, but it’d be cool to do it on the client side, and some work has already been done.

    We had a great time developing for Firefox OS. The most important thing for us, tech-wise, is that we developed a web app, not an OS-specific app. We can use all the know-how we gained over the past years, all the great open source libraries we want, and we can even use the browser debug tools we use every day.

    To help newcomers in building a Firefox OS app, we have created a list of resources we found indispensable.

  8. Launching developer Q&A on Stack Overflow

    One thing that is very important for us at Mozilla is directly interacting with you developers, helping you with challenges and issues while developing with open technologies. We are now happy to announce our presence on Stack Overflow!

    Stack Overflow is one of – if not the – most well-known question and answer sites among developers, and we believe it’s very important to go where developers are and choose to have their discussions while developing.

    Initially our main focus is on Firefox OS app development and Open Web Apps in general. There are three tags that we are monitoring:

    All tags and posts are available at our landing page on Stack Overflow: http://stackoverflow.com/r/mozilla

    Take part!

    We’d love for you to participate with both questions and answers, so that together we can collaborate and make the Open Web and open technologies the first choice for developers!

    A little extra motivation might be that if you answer a lot of questions, you’ll show up in the Top Users page for that tag, e.g. http://stackoverflow.com/tags/firefox-os/topusers

    Some resources about HTML5, apps, Developer Tools and more that might be good for you:

    See you on Stack Overflow!

  9. Introducing the Whiteboard Drum – WebRTC and Web Audio API magic

    Browser functionality has expanded rapidly, way beyond merely “browsing” a document. Recently, web browsers finally gained audio processing abilities with the Web Audio API. It is powerful enough to build serious music applications.

    Not only that, but it is also very interesting when combined with other APIs. One of these APIs is getUserMedia(), which allows us to capture audio and/or video from the local PC’s microphone / camera devices. Whiteboard Drum (code on GitHub) is a music application, and a great example of what can be achieved using Web Audio API and getUserMedia().

    I demonstrated Whiteboard Drum at the Web Music Hackathon Tokyo in October. It was a very exciting event on the subject of the Web Audio API and the Web MIDI API. Many instruments can collaborate with the browser, which can also create new interfaces to the real world.

    I believe this suggests further possibilities of Web-based music applications, especially using Web Audio API in conjunction with other APIs. Let’s explain how the key features of Whiteboard Drum work, showing relevant code fragments as we go.

    Overview

    First of all, let me show you a picture from the Hackathon:

    And a quick video demo:

    As you can see, Whiteboard Drum plays a rhythm according to the matrix pattern on the whiteboard. There is no magic in the whiteboard; it just needs to be pointed at by a webcam. Though I used magnets in the demo, you can draw the markers with a pen if you wish. Each row represents the corresponding instrument (Cymbal, Hi-Hat, Snare-Drum and Bass-Drum), and each column represents a timing step. In this implementation, the sequence has 8 steps. Blue activates a grid cell normally, and red activates it with an accent.

    The processing flow is:

    1. The whiteboard image is captured by the WebCam
    2. The matrix pattern is analysed
    3. This pattern is fed to the drum sound generators to create the corresponding sound patterns

    Although it uses nascent browser technologies, each process itself is not so complicated. Some key points are described below.

    Image capture by getUserMedia()

    getUserMedia() is a function for capturing video/audio from webcam/microphone devices. It is part of WebRTC and a fairly new feature in web browsers. Note that the user’s permission is required to get the image from the webcam. If we were just displaying the webcam image on the screen, it would be trivially easy. However, we want to access the image’s raw pixel data in JavaScript for further processing, so we need to use canvas and the getImageData() function.

    Because pixel-by-pixel processing is needed later in this application, the captured image’s resolution is reduced to 400 x 200px; that means one rhythm grid is 50 x 50 px in the rhythm pattern matrix.

    Note: Though most recent laptops/notebooks have embedded webcams, you will get the best results with Whiteboard Drum from an external camera, because the camera needs to be precisely aimed at the picture on the whiteboard. Also, the selection of input from multiple available devices/cameras is not currently standardized and cannot be controlled from JavaScript. In Firefox, it is selectable in the permission dialog when connecting; in Google Chrome, it can be set up from the “contents setup” option of the settings screen.

    Get the WebCam video

    We don’t want to show these parts of the processing on the screen, so first we hide the video:

    <video id="video" style="display:none"></video>

    Now to grab the video:

    video = document.getElementById("video");
    navigator.getUserMedia=navigator.getUserMedia||navigator.webkitGetUserMedia||navigator.mozGetUserMedia;
    navigator.getUserMedia({"video":true},
        function(stream) {
            video.src= window.URL.createObjectURL(stream);
            video.play();
        },
        function(err) {
            alert("Camera Error");
        });

    Capture it and get pixel values

    We also hide the canvas:

    <canvas id="capture" width=400 height=200 style="display:none"></canvas>

    Then capture our video data on the canvas:

    function Capture() {
        ctxcapture.drawImage(video,0,0,400,200);
        imgdatcapture=ctxcapture.getImageData(0,0,400,200);
    }

    The video from the WebCam will be drawn onto the canvas at periodic intervals.
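
    For instance, the setup and the periodic capture could be driven like this (a sketch; the context variables match the snippet above, but the interval value is an assumption, not taken from the original source):

    // Grab the 2D context of the hidden capture canvas once at startup.
    var ctxcapture = document.getElementById("capture").getContext("2d");
    var imgdatcapture;

    // Re-capture and re-analyse the whiteboard at periodic intervals.
    setInterval(Capture, 500); // 500 ms is an assumed interval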

    Image analyzing

    Next, we need to get the 400 x 200 pixel values with getImageData(). The analysis phase turns the 400 x 200 image data into an 8 x 4 matrix rhythm pattern, where a single matrix grid cell is 50 x 50 px. All the necessary input data is stored in the imgdatcapture.data array in RGBA format, 4 elements per pixel.

    var pixarray = imgdatcapture.data;
    var step;
    for(var x = 0; x < 8; ++x) {
        var px = x * 50;
        if(invert)
            step=7-x;
        else
            step=x;
        for(var y = 0; y < 4; ++y) {
            var py = y * 50;
            var lum = 0;
            var red = 0;
            for(var dx = 0; dx < 50; ++dx) {
                for(var dy = 0; dy < 50; ++dy) {
                    var offset = ((py + dy) * 400 + px + dx)*4;
                    lum += pixarray[offset] * 3 + pixarray[offset+1] * 6 + pixarray[offset+2];
                    red += (pixarray[offset]-pixarray[offset+2]);
                }
            }
            if(lum < lumthresh) {
                if(red > redthresh)
                    rhythmpat[step][y]=2;
                else
                    rhythmpat[step][y]=1;
            }
            else
                rhythmpat[step][y]=0;
        }
    }

    This is a straightforward pixel-by-pixel analysis inside grid-by-grid loops. In this implementation, the analysis looks at luminance and redness. If a grid cell is “dark”, it is activated; if it is red, it should be accented.

    The luminance calculation uses a simplified matrix, R * 3 + G * 6 + B, which gives ten times the usual luminance value, i.e. a range of 0 to 2550 for each pixel (for a white pixel, 255 * 3 + 255 * 6 + 255 = 2550). The redness, R - B, is an experimental value, because all that is required is a decision between red and blue. The result is stored in the rhythmpat array, with a value of 0 for nothing, 1 for blue, or 2 for red.

    Sound generation through the Web Audio API

    Because the Web Audio API is a cutting-edge technology, it is not yet supported by every web browser. Currently, Google Chrome, Safari, WebKit-based Opera and Firefox (25 or later) support the API. Note: Firefox 25 is the latest version, released at the end of October.

    For other web browsers, I have developed a polyfill that falls back to Flash: WAAPISim, available on GitHub. It provides almost all functions of the Web Audio API to unsupported browsers, for example Internet Explorer.

    The Web Audio API is a large-scale specification, but in our case the sound generation requires only a very simple use of it: load one sound for each instrument and trigger them at the right times. First we create an audio context, taking care of vendor prefixes in the process; currently the API is available either with the webkit prefix or unprefixed.

    audioctx = new (window.AudioContext||window.webkitAudioContext)();

    Next we load sounds to buffers via XMLHttpRequest. In this case, different sounds for each instrument (bd.wav / sd.wav / hh.wav / cy.wav) are loaded into the buffers array:

    var buffers = [];
    var req = new XMLHttpRequest();
    var loadidx = 0;
    var files = [
        "samples/bd.wav",
        "samples/sd.wav",
        "samples/hh.wav",
        "samples/cy.wav"
    ];
    function LoadBuffers() {
        req.open("GET", files[loadidx], true);
        req.responseType = "arraybuffer";
        req.onload = function() {
            if(req.response) {
                audioctx.decodeAudioData(req.response,function(b){
                    buffers[loadidx]=b;
                    if(++loadidx < files.length)
                        LoadBuffers();
                },function(){});
            }
        };
        req.send();
    }

    The Web Audio API generates sounds by routing graphs of nodes. Whiteboard Drum uses a simple graph that is accessed via AudioBufferSourceNode and GainNode. The AudioBufferSourceNode plays back an AudioBuffer and routes to the destination (output) directly (for normal *blue* sounds), or via the GainNode (for accented *red* sounds). Because an AudioBufferSourceNode can be used just once, it is created anew for each trigger.

    Preparing the GainNode as the output point for accented sounds is done like this:

    gain=audioctx.createGain();
    gain.gain.value=2;
    gain.connect(audioctx.destination);

    And the trigger function looks like so:

    function Trigger(instrument,accent,when) {
        var src=audioctx.createBufferSource();
        src.buffer=buffers[instrument];
        if(accent)
            src.connect(gain);
        else
            src.connect(audioctx.destination);
        src.start(when);
    }

    All that is left to discuss is the accuracy of the playback timing, according to the rhythm pattern. Though it would be simple to keep creating the triggers with a setInterval() timer, it is not recommended. The timing can be easily messed up by any CPU load.

    To get accurate timing, it is recommended to use the time management system embedded in the Web Audio API; it is used to calculate the when argument of the Trigger() function above.

    // console.log(nexttick-audioctx.currentTime);
    while(nexttick - audioctx.currentTime < 0.3) {
        var p = rhythmpat[step];
        for(var i = 0; i < 4; ++i)
            Trigger(i, p[i], nexttick);
        if(++step >= 8)
            step = 0;
        nexttick += deltatick;
    }

    In Whiteboard Drum, this code controls the core of the functionality. nexttick contains the accurate time (in seconds) of the next step, while audioctx.currentTime is the accurate current time (again, in seconds). The routine is called periodically and looks ahead 300 ms into the future, scheduling triggers in advance as long as nexttick - audioctx.currentTime < 0.3. The commented console.log will print the timing margin; if this value ever goes negative, the scheduling has fallen behind and the timing collapses.
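
    For context, here is a minimal sketch of how such a look-ahead loop might be initialized and driven (the tempo and the timer interval are assumptions, not values from the article):

    var step = 0;
    var deltatick = (60 / 120) / 2;      // step length in seconds: 8th-note steps at an assumed 120 BPM
    var nexttick = audioctx.currentTime; // the first step plays immediately

    // Keep the 300 ms look-ahead window filled; the body is the loop shown above.
    setInterval(function() {
        while (nexttick - audioctx.currentTime < 0.3) {
            var p = rhythmpat[step];
            for (var i = 0; i < 4; ++i)
                Trigger(i, p[i], nexttick);
            if (++step >= 8)
                step = 0;
            nexttick += deltatick;
        }
    }, 100);                             // the 100 ms interval is an assumption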

    For more detail, here is a helpful document: A Tale of Two Clocks – Scheduling Web Audio with Precision

    About the UI

    Especially in music production software, like DAWs or VST plugins, the UI is important. Web applications do not have to emulate these exactly, but something similar is a good idea. Fortunately, the very handy WebComponents library webaudio-controls is available, allowing us to define knobs or sliders with just a single HTML tag.

    NOTE: webaudio-controls uses Polymer.js, which sometimes has stability issues, causing unexpected behavior once in a while, especially when combined with complex APIs.

    Future work

    This is already an interesting application, but it can be improved further. Obviously, camera position adjustment is an issue. The analysis could be smarter with automatic position adjustment (using some kind of marker?) and adaptive color detection. Sound generation could also be improved, with more instruments, more steps and more sound effects.

    How about a challenge?

    Whiteboard Drum is available at http://www.g200kg.com/whiteboarddrum/, and the code is on GitHub.

    Have a play with it and see what rhythms you can create!

  10. Firefox Developer Tools: Episode 27 – Edit as HTML, Codemirror & more

    Firefox 27 was just uplifted to the Aurora release channel, which means we are back to report on new features in Firefox Developer Tools. Below are just some of the new features; you can also take a look at all the bugs resolved in DevTools for this release.

    JS Debugger: Break on DOM Events

    You can now automatically break on a variety of DOM events, without needing to set a breakpoint manually. To do this, click the “Expand Panes” button on the top right of the debugger panel (right next to the search box), then flip over to the Events tab. Click an event name to automatically pause the next time it happens. Only events that currently have listeners bound from your code are shown. If you click one of the headings, like “Mouse” or “Keyboard”, all of the relevant events will be selected.

    Inspector improvements

    We’ve listened to feedback from web developers and made a number of enhancements to the Inspector:

    Edit as HTML

    Now, in the Inspector, you can right-click an element and open an editor that allows you to set the outerHTML of the element.

    Select default color format

    You now have an option to select the default color format in the Options panel:

    Color swatch previews

    The Developer Tools now offer color swatch previews that show up in the rule view:

    Image previews for background image URLs

    By popular request, we now offer image previews for background image URLs:

    In addition to the above improvements, mutated DOM elements are now highlighted in the Inspector.

    Keep an eye out for more tooltips coming soon, and feel free to chime in if you have any others you’d like to see!

    CodeMirror

    CodeMirror is a popular HTML5-based code editor component used on many websites. It is customizable and themeable. The Firefox DevTools now use CodeMirror in various places: the Style Editor, the Debugger, the Inspector (Edit as HTML), and Scratchpad.

    From the Options panel, you can select which theme to use (dark or light).

    WebConsole: Reflow Logging

    When the layout is invalidated (by CSS or DOM changes), Gecko needs to re-compute the position of some nodes. This computation doesn’t happen immediately; it’s triggered for various reasons. For example, if you read node.clientTop, Gecko needs to do this computation. Such a computation is called a “reflow”. Reflows are expensive, and avoiding them is important for responsiveness.
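
    For example, a snippet like the following forces a synchronous reflow that would show up in the log (the element id is hypothetical):

    var el = document.getElementById("box"); // hypothetical element
    el.style.width = "300px";                // invalidates the layout
    var top = el.clientTop;                  // reading a layout property forces Gecko to reflow right now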

    To enable reflow logging, check the “Log” option under the “CSS” menu in the Console tab. Now, every time a reflow happens, a log entry is printed with the name of the JS function that triggered the reflow (if it was caused by JS).

    That’s all for this time. Hope you like the new improvements!