Articles by Robert Nyman [Editor]

  1. Progress report on cross-platform Open Web Apps

    Here in the Hacks blog we’ve written a lot about building apps for Firefox OS using HTML, JS, and CSS. We’re working to ensure that those same apps can also run on Android, Windows, Mac OS X, and Linux devices. If your app can adapt to those screen sizes, CPUs, and device capabilities, then we’ve got a plan to ensure that your apps install, launch, quit, and uninstall as native apps on each of those platforms.

    I’ve created a short video that shows how Open Web Apps from Firefox OS will work on any platform where Gecko is available.

    Firefox OS is our benchmark platform for Open Web Apps. On Firefox OS, users can discover apps in the Firefox Marketplace and install them directly onto the phone’s home screen. As an example I’m using my app Shotclock, an open web app for computing sun angles for outdoor photographers. Let’s find out what happens when we install this app on other platforms.


    Android users discover apps in the Firefox Marketplace using the Firefox for Android browser. Firefox Marketplace has approved Shotclock for Android, so we just click the install button as we did on Firefox OS. We will automatically repackage the Open Web App as a native Android app, giving users a native app experience.

    Because we installed it from an Android APK, we can manage it from the recent apps list, and we find it in the app drawer like every other app.


    Windows users discover apps in the Firefox Marketplace using desktop Firefox. Firefox Marketplace has approved Shotclock for Windows laptops too, so we just click the Marketplace install button. We will automatically repackage the open web app as a native Windows app.

    Here’s Shotclock running on Windows, just like a real app. Our repackaging will mean that users can launch their open web apps from the Windows Start menu and quit them from the File menu. Users will also uninstall them from the Programs control panel.

    Mac OS X

    Mac OS X users also discover apps in the Firefox Marketplace using desktop Firefox. We will automatically repackage the open web app as a native Mac OS X app. When the user clicks the install button, we install Shotclock in the Mac OS X Applications folder.

    From there, it launches and runs just like a real app. The native packaging means users can switch between open web apps by pressing Command-Tab, and quit them from the File menu. How much code did the app developer rewrite? Zero.

    Privileged Apps

    So far we’ve looked at unprivileged apps. We will also support privileged apps on all these platforms. Here is Kitchen Sink, our app for testing the Firefox OS privileged APIs. What happens when we install it on Android?

    The experience of discovering and installing privileged apps will follow the Android convention of presenting a list of permissions to the user at install time. These permissions are copied from the open web app manifest. After the user completes the installation process, the app is ready to use and able to access the phone hardware.
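    To make that concrete, the relevant part of an open web app manifest might look like the excerpt below. This is a hypothetical fragment: the field names follow the Open Web App manifest format, but the specific permissions are only illustrative.

```json
{
  "name": "Kitchen Sink",
  "type": "privileged",
  "permissions": {
    "geolocation": {
      "description": "Required to test the Geolocation API"
    },
    "contacts": {
      "description": "Required to test the Contacts API",
      "access": "readonly"
    }
  }
}
```

    At install time, the `description` strings give the user a human-readable reason for each permission in the Android-style list.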

    Linux Desktop

    The email application that comes with Firefox OS is basically a privileged app that uses the Socket API for networking. Marco Castelluccio, our open web apps intern, got it running on his Linux laptop.

    He copied over the app package from Gaia and made one tweak to the app manifest. So, if you like the apps that come with your Firefox OS phone and want to run them on your other devices, cross-platform open web apps can make that happen.


    We’d love to support Open Web Apps on iOS devices, but iOS does not, at this time, include the option to install a Gecko-based web browser, which is currently needed to support Open Web Apps.

    Edit: We’re working with the Cordova community, both to allow Cordova apps to run unmodified on Firefox OS and to allow Open Web Apps packaged by Cordova to run on iOS. For more details see the Cordova Firefox OS project page and the Cordova Firefox OS GitHub repository.


    Desktop — You can install hosted, unprivileged apps on your desktops and laptops using Firefox 16 or newer. Privileged app support should land in Firefox Nightly in the next two months.

    Android — You can install apps using Mobile Firefox Aurora, but you won’t get a native app experience yet. The native app experience should land in Mobile Firefox Nightly in December.

  2. Building a Firefox OS App for my favorite Internet radio station

    I recently created a Firefox OS app for my favourite radio station, Radio Paradise. It was a lot of fun making this app, so I thought it would be good to share some notes about how I built it.

    The audio tag

    I started by implementing the main functionality of the app — playing an Ogg stream from the Internet radio station — using the HTML5 audio element:

    <audio src="" controls preload></audio>

    That was easy! At this point our app is completely functional. If you don’t believe me, check out this jsfiddle. But please continue reading, since there will be a few more sweet features added. In fact, check out the short video below to see how it will turn out.

    Because this content belongs to Radio Paradise, before implementing the app I contacted them to ask for their permission to make a Firefox OS app for their radio station; they responded:

    Thanks. We’d be happy to have you do that. Our existing web player is html5-based. That might be a place to start. Firefox should have native support for our Ogg Vorbis streams.

    I couldn’t have asked for a more encouraging response, and that was enough to set things in motion.

    Features of the app

    I wanted the app to be very minimal and simple — both in terms of user experience and the code backing it. Here is a list of the features I decided to include:

    • A single, easy to access button to play and pause the music
    • Artist name, song title and album cover for the current song should fill the interface
    • A setting to select stream quality (for situations where bandwidth cannot handle the highest quality)
    • A setting to start the app with music playing or paused
    • Continue playing even when the app is sent to the background
    • Keep the screen on while the app is running in the foreground

    Instead of using the HTML audio tag, I decided to create the audio element and configure it in JavaScript. Then I hooked up an event listener on a button to play or stop the music:

      var btn = document.getElementById('play-btn');
      var state = 'stop';
      btn.addEventListener('click', stop_play);
      // create an audio element that can be played in the background
      var audio = new Audio();
      audio.preload = 'auto';
      audio.mozAudioChannelType = 'content';
      function play() {;
        state = 'playing';
      }
      function stop() {
        audio.pause();
        state = 'stop';
      }
      // toggle between play and stop state
      function stop_play() {
        (state == 'stop') ? play() : stop();
      }

    Accessing current song information

    The first challenge I faced was accessing the current song information. Normally we should not need any special privilege to access third party APIs, as long as they provide the correct header information. However, the link Radio Paradise provided me for getting the current song information did not allow cross-origin access. Luckily Firefox OS has a special power reserved for this kind of situation — systemXHR comes to the rescue.

    function get_current_songinfo() {
      var cache_killer = Math.floor(Math.random() * 10000);
      var playlist_url = '' + cache_killer;
      var song_info = document.getElementById('song-info-holder');
      var crossxhr = new XMLHttpRequest({mozSystem: true});
      crossxhr.onload = function() {
        var infoArray = crossxhr.responseText.split('|');
        song_info.innerHTML = infoArray[1];
        next_song = setInterval(get_current_songinfo, infoArray[0]);
      };
      crossxhr.onerror = function() {
        console.log('Error getting current song info', crossxhr);
        next_song = setInterval(get_current_songinfo, 200000);
      };'GET', playlist_url);
      crossxhr.send();
    }

    This meant that the app would have to be privileged and thus packaged. I normally would try to keep my apps hosted, because that is very natural for a web app and has several benefits including the added bonus of being accessible to search engines. However, in cases such as this we have no other option but to package the app and give it the special privileges it needs.

      {
        "version": "1.1",
        "name": "Radio Paradise",
        "launch_path": "/index.html",
        "description": "An unofficial app for radio paradise",
        "type": "privileged",
        "icons": {
          "32": "/img/rp_logo32.png",
          "60": "/img/rp_logo60.png",
          "64": "/img/rp_logo64.png",
          "128": "/img/rp_logo128.png"
        },
        "developer": {
          "name": "Aras Balali Moghaddam",
          "url": ""
        },
        "permissions": {
          "systemXHR": {
            "description": "Access current song info on"
          },
          "audio-channel-content": {
            "description": "Play music when app goes into background"
          }
        },
        "installs_allowed_from": ["*"],
        "default_locale": "en"
      }

    Updating song info and album cover

    That XHR call to Radio Paradise provides me with three important pieces of information:

    • Name of the current song playing and its artist
    • An image tag containing the album cover
    • Time left to the end of the current song, in milliseconds

    The time left to the end of the current song is very nice to have. It means that I can execute the XHR call and update the song information only once per song. I first tried using the setTimeout function like this:

    //NOT working example. Can you spot the error?
    crossxhr.onload = function() {
      var infoArray = crossxhr.responseText.split('|');
      song_info.innerHTML = infoArray[1];
      setTimeout('get_current_songinfo()', infoArray[0]);
    };

    To my surprise, that did not work, and I got a nice error in logcat about a CSP restriction. It turns out that any attempt at dynamically executing code is banned for security reasons. All we have to do in this scenario to avoid the CSP issue is to pass a callable object, instead of a string.

      // instead of passing a string to setTimeout we pass
      // a callable object to it
      setTimeout(get_current_songinfo, infoArray[0]);

    Update: Mindaugas pointed out in the comments below that using innerHTML to parse unknown content in this way introduces some security risks. Because of these security implications, we should retrieve the remote content as text instead of HTML. One way to do this is to use song_info.textContent, which does not interpret the passed content as HTML. Another option, as Frederik Braun pointed out, is to use a text node, which cannot render HTML.

    The Radio Paradise mobile web app running on Firefox OS

    With a bit of CSS magic, things started to fall into place pretty quickly.

    Adding a unique touch

    One of the great advantages of developing mobile applications for the web is that you are completely free to design your app in any way you want. There is no enforcement of style or restriction on interaction design innovation. Knowing that, it was hard to hold myself back from exploring new ideas and having some fun with the app. I decided to hide the settings behind the main content and add a feature so the user can literally cut open the app in the middle to get to the settings. That way they are tucked away, but can still be discovered in an intuitive way. For the toggle UI elements on the settings page I decided to give Brick a try, with a bit of custom styling added.

    radio paradise app settings

    The user can slide open the cover image to access the app settings behind it

    Using the swipe gesture

    As you saw in the video above, to open and close the cover image I use pan and swipe gestures. To implement that, I took the gesture detector from Gaia. It was very easy to integrate the gesture code as a module into my app and hook it up to the cover image.
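    For readers who do not want to pull in the Gaia module, the core idea of swipe detection is small: record where a touch starts, and when it ends classify the movement by distance and speed. Below is a minimal sketch of that logic; it is not the Gaia GestureDetector API, and the function name and thresholds are illustrative.

```javascript
// Classify a completed touch movement.
// dx, dy: movement in pixels since touchstart; dt: duration in ms.
// The velocity and axis checks below are illustrative thresholds,
// not the values used by Gaia's GestureDetector.
function classifyGesture(dx, dy, dt) {
  var distance = Math.sqrt(dx * dx + dy * dy);
  var velocity = distance / dt; // pixels per millisecond
  // Fast and mostly horizontal: treat it as a swipe.
  if (velocity > 0.25 && Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? 'swipe-right' : 'swipe-left';
  }
  // Anything slower is a pan, which the app can track continuously.
  return 'pan';
}
```

    In the app, a horizontal swipe snaps the two cover halves open or closed, while a pan lets the user drag them interactively.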

    Organizing the code

    For an app this small, we do not have to use modular code. However, since I have recently started to learn about AMD practices, I decided to use a module system. I asked James Burke about the implications of using RequireJS in an app like this. He suggested I use Alameda instead, since it is geared toward modern browsers.

    Saving app settings

    I wanted to let users choose the stream quality, as well as whether they want the app to start playing music as soon as it opens. Both of these options need to be persisted somewhere and retrieved when the app starts. I just needed to save a couple of key/value pairs, so I went to the #openwebapps IRC channel and asked for advice. Fabrice pointed me to a nice piece of code in Gaia (again!) that is used for asynchronously storing key/value pairs and even whole objects. That was perfect for my use case, so I took it as well. Gaia appears to be a goldmine. Here is the module I created for settings:

    define(['helper/async_storage'], function(asyncStorage) {
      var setting = {
        values: {
          quality: 'high',
          play_on_start: false
        },
        get_quality: function() {
          return setting.values.quality;
        },
        set_quality: function(q) {
          setting.values.quality = q;
        },
        get_play_on_start: function() {
          return setting.values.play_on_start;
        },
        set_play_on_start: function(p) {
          setting.values.play_on_start = p;
        },
        save: function() {
          asyncStorage.setItem('setting', setting.values);
        },
        load: function(callback) {
          asyncStorage.getItem('setting', function(values_obj) {
            if (values_obj) setting.values = values_obj;
            if (callback) callback();
          });
        }
      };
      return setting;
    });

    Splitting the cover image

    Now we get to the really fun part: splitting the cover image in half. To achieve this effect, I made two identical overlapping canvas elements, both sized to fit the device width. One canvas clips the image and keeps the left portion of it, while the other keeps the right side.

    Each canvas clips and renders half of the image

    Here is the code for the draw function, where most of the action happens. Note that this function runs only once per song, or when the user changes the orientation of the device from portrait to landscape and vice versa.

    function draw(img_src) {
      width = cover.clientWidth;
      height = cover.clientHeight;
      draw_half(left_canvas, 'left');
      draw_half(right_canvas, 'right');
      function draw_half(canvas, side) {
        canvas.setAttribute('width', width);
        canvas.setAttribute('height', height);
        var ctx = canvas.getContext('2d');
        var img = new Image();
        // opacity 0.01 is used to make any glitch in clip invisible
        ctx.fillStyle = 'rgba(255,255,255,0.01)';
        ctx.beginPath();
        var center;
        if (side == 'left') {
          ctx.moveTo(0, 0);
          // add one pixel to ensure there is no gap
          center = (width / 2) + 1;
        } else {
          ctx.moveTo(width, 0);
          center = (width / 2) - 1;
        }
        ctx.lineTo(width / 2, 0);
        // Draw a wavy pattern down the center
        var step = 40;
        var count = Math.floor(height / step);
        for (var i = 0; i < count; i++) {
          ctx.lineTo(center, i * step);
          // alternate curve control point 20 pixels, every other time
          ctx.quadraticCurveTo((i % 2) ? center - 20 : center + 20,
            i * step + step * 0.5, center, (i + 1) * step);
        }
        ctx.lineTo(center, height);
        if (side == 'left') {
          ctx.lineTo(0, height);
          ctx.lineTo(0, 0);
        } else {
          ctx.lineTo(width, height);
          ctx.lineTo(width, 0);
        }
        ctx.closePath();
        ctx.fill();
        ctx.clip();
        img.onload = function() {
          var h = width * img.height / img.width;
          ctx.drawImage(img, 0, 0, width, h);
        };
        img.src = img_src;
      }
    }

    Keeping the screen on

    The last feature I needed to add was keeping the screen on while the app is running in the foreground, and that turned out to be very easy to implement as well. We need to request a screen wake lock:

      var lock = window.navigator.requestWakeLock('screen');

    The screen wake lock is actually pretty smart. It is automatically released when the app is sent to the background, and given back to your app when it comes to the foreground. Currently the app does not offer an option to release the lock. If in the future I get requests to add that option, all I have to do is release the lock that was obtained.


    Getting the app

    If you have a Firefox OS device and like great music, you can now install this app on your device. Search for “radio paradise” in the Marketplace, or install it directly from this link. You can also check out the full source code on GitHub. Feel free to fork and modify the app as you wish to create your own Internet radio apps! I would love it if you report issues, ask for features or send pull requests.


    I am more and more impressed by how quickly we can create very functional and unique mobile apps using web technologies. If you have not built a mobile web app for Firefox OS yet, you should definitely give it a try. The future of open web apps is very exciting, and Firefox OS provides a great platform to get a taste of that excitement.

    Now it is your turn to leave a comment. What is your favourite feature of this app? What things would you have done differently if you developed this app? How could we make this app better (both code and UX)?

  3. Working with receipts for paid apps

    You’ve put your hard work into building a great app. If you want to get paid for your app then the Firefox Marketplace supports app receipts and verification of those receipts.

    Receipt verification is how we ensure that your app has been paid for, in the case of both hosted and packaged apps. It’s important to point out that we don’t limit the installation or distribution of apps from the Marketplace. They can be installed by anyone without limitation; they just won’t have the receipt.

    That means it is essential for you, as the developer, to check the receipt in your app. This is the only way to ensure that the payment has been processed.

    Receipt format

    Receipts are based on the Web Application Receipt specification. A receipt is a JSON object that contains information about the purchase. That JSON is then signed using the JSON Web Token specification.
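    Because a JWT is simply three base64url-encoded segments joined by dots (header, payload, signature), a receipt’s payload can be inspected without verifying it. The sketch below shows that decoding step only; it performs no signature check, so real apps should rely on the receipt verifier library described later.

```javascript
// Decode the payload (second segment) of a JWT *without* verifying it.
function decodeJwtPayload(token) {
  var b64 = token.split('.')[1]
    .replace(/-/g, '+')   // base64url -> base64
    .replace(/_/g, '/');
  // atob() exists in browsers and recent Node.js; fall back to Buffer.
  var json = typeof atob === 'function'
    ? atob(b64)
    : Buffer.from(b64, 'base64').toString('utf8');
  return JSON.parse(json);
}
```

    Decoding a real receipt this way yields a JSON object like the example shown further down.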

    When the app is installed after purchase, the receipt is installed along with it. To check that a receipt is installed, you can access the receipts through the mozApps API, for example:

    var request = window.navigator.mozApps.getSelf();
    request.onsuccess = function() {
        for (let k = 0; k < request.result.receipts.length; k++) {
            let receipt = request.result.receipts[k];
            console.log('Got receipt: ' + receipt);
        }
    };

    Once you’ve decoded and expanded a receipt, it looks something like the following:

      {
        "product": {
          "url": "",
          "storedata": "id=111111"
        },
        "iss": "",
        "verify": "", // The verify URL
        "detail": "",
        "reissue": "",
        "user": {
          "type": "directed-identifier",
          "value": "1234-abcd99ef-a123-456b-bbbb-cccc11112222"
        },
        "exp": 1353028900,
        "iat": 1337304100,
        "typ": "purchase-receipt",
        "nbf": 1337304100
      }

    Receipt verifier library

    Just checking for the presence of the receipt is not enough; there are a few checks that should be performed as well:

    • That the receipt is correctly signed and has not been tampered with
    • That the receipt is from a marketplace you approve of
    • That the receipt is for your app

    There are two optional steps you can perform:

    • That the crypto signing on the receipt is correct
    • That the receipt is still valid with the store

    These last two steps require the app to have internet access so that it can call the verification servers.

    An easy way to perform most of these checks is to use the receipt verifier library. After including it:

    var verifier = new mozmarket.receipts.Verifier({
      // checks that the receipt is for your app.
      productURL: '',
      // only allow apps that came from the Firefox Marketplace.
      installs_allowed_from: ['']
    });
    verifier.verify(function (verifier) {
      if (verifier.state instanceof verifier.states.OK) {
        // everything is good.
      } else {
        // something went wrong.
      }
    });
    See the docs for a full list of the options.

    The receipt verifier returns a state object that tells you the exact error. We don’t prescribe what the app should do under those circumstances; that is left completely to the developer. For example, a NetworkError indicates that we couldn’t connect to the verification server. That may, or may not, be a fatal condition for your app.

    The receipt verifier library also includes a basic user interface for showing errors to the user. It is great for testing, but since the user interface in your app is going to be different, the chances are you’ll want to display messages back to the user in your own style. If you include receiptverifier-ui.js, then you can use the prompter like this:

      var prompter = new mozmarket.receipts.Prompter({
        storeURL: "",
        supportHTML: '<a href="">email</a>',
        verify: true
      });

    If you ran the app without the receipt installed you would see a message like this:

    One last thing the receipt verifier library won’t do is verify that a receipt has not been shared between users. That is left for the app developer to implement; an example might be using the django-receipts library.

    Getting help

    If you need more help then there’s a mailing list or a helpful IRC channel at #marketplace.

  4. Fast retro gaming on mobile

    Emulation is the technique that makes retro gaming possible, i.e. playing old video games on modern devices. It allows pixel lovers to revive gaming experiences from the past. In this article we will demonstrate that the web platform is suitable for emulation, even on mobile, where by definition everything is limited.

    Emulation is a challenge

    Emulation consists of recreating all the internals of a game console in JavaScript. The original CPU and its functions are totally reimplemented. It communicates with both the video and sound units whilst listening to the gamepad inputs.

    Traditionally, emulators are built as native apps, but the web stack is equally powerful, provided the right techniques are used. On web based OSes, like Firefox OS, the only way to do retro gaming is to use HTML and JavaScript.

    Emulators are resource-intensive applications. Running them on mobile is definitely a challenge — even more so given that Firefox OS is designed to power low-end devices where computational resources are further limited. But fear not, because techniques are available to make full-speed retro gaming a reality on our beloved handhelds.

    In the beginning was the ROM

    Video game emulation starts with ROM image files (ROM files for short). A ROM file is the representation of a game cartridge chip obtained through a process called dumping. In most video game systems, a ROM file is a single binary file containing all aspects of the game, including:

    • The logic (player movements, enemies’ artificial intelligence, level designs…)
    • The character and background sprites
    • The music

    Let’s now consider the Sega Master System and Game Gear consoles. Take the homebrew game Blockhead as an example and examine the beginning of the file:

    0xF3 0xED 0x56 0xC3 0x6F 0x00 0x3F 0x00 0x7D 0xD3 0xBF 0x7C 0xD3 0xBF 0xC9 0x00
    0x7B 0xD3 0xBF 0x7A 0xD3 0xBF 0xC9 0x00 0xC9 0x70 0x72 0x6F 0x70 0x70 0x79 0x00
    0xC9 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xC9 0x62 0x6C 0x6F 0x63 0x6B 0x68 0x65

    The elements listed above are mixed together in the ROM. The difficulty consists of telling apart the different bytes:

    • opcodes (for operation code; they are CPU instructions, similar to basic JavaScript functions)
    • operands (think of them as parameters passed to opcodes)
    • data (for example, the sprites used by the game)

    If we highlight these elements differently according to their types, this is what we get:

    0xF3 0xED 0x56 0xC3 0x6F 0x00 0x3F 0x00 0x7D 0xD3 0xBF 0x7C 0xD3 0xBF 0xC9 0x00
    0x7B 0xD3 0xBF 0x7A 0xD3 0xBF 0xC9 0x00 0xC9 0x70 0x72 0x6F 0x70 0x70 0x79 0x00
    0xC9 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xC9 0x62 0x6C 0x6F 0x63 0x6B 0x68 0x65
    Legend: opcodes, operands and data (colour-coded in the original article)

    Start small with an interpreter

    Let’s start playing this ROM, one instruction at a time. First we put the binary content into an ArrayBuffer (you can use XMLHttpRequest or the File API for that). As we need to access the data as different types, like 8 or 16 bit integers, the easiest way is to pass this buffer to a DataView.

    On the Master System, the entry point is the instruction located at index 0. We create a variable called pc, for program counter, and set it to 0. It will keep track of the location of the current instruction. We then read the 8-bit unsigned integer located at the current position of pc and place it in a variable called opcode. The instruction associated with this opcode is executed, and from there we just repeat the process.

    var rom = new DataView(romBuffer);
    var pc = 0x0000;
    while (true) {
      var opcode = rom.getUint8(pc++);
      switch (opcode) {
        // ... more to come here!
      }
    }

    For example, the 3rd instruction, located at index 3, has the value 0xC3. It matches the opcode `JP (nn)` (JP stands for jump). A jump transfers the execution of the program to somewhere else in the ROM. In terms of logic, that means updating the value of pc. The target address is the operand: we simply read the next 2 bytes as a 16-bit unsigned integer (0x006F in this case). Let’s put it all together:

    var rom = new DataView(romBuffer);
    var pc = 0x0000;
    while (true) {
      var opcode = rom.getUint8(pc++);
      switch (opcode) {
        case 0xC3:
          // Code for opcode 0xC3 `JP (nn)`.
          // The operand is little-endian, hence the `true` flag.
          pc = rom.getUint16(pc, true);
          break;
        case 0xED:
          // @todo Write code for opcode 0xED 0x56 `IM 1`.
          break;
        case 0xF3:
          // @todo Write code for opcode 0xF3 `DI`.
          break;
      }
    }

    Of course, for the sake of simplicity, many details are omitted here.

    Emulators working this way are called interpreters. They are relatively easy to develop, but the fetch/decode/execute loop adds significant overhead.

    Recompilation, the secret to full speed

    Interpreters are just a first step toward fast emulation; building one ensures everything else is working: video, sound and controllers. Interpreters can be fast enough on desktop, but they are definitely too slow on mobile and drain the battery.

    Let’s step back a second and examine the code above. Wouldn’t it be great if we could generate JavaScript code that mimics the logic? We know that when pc equals 0x0000, the next 3 instructions will always be executed one after another, until the jump is reached.

    In other words, we want something like this:

    var blocks = {
      0x0000: function() {
        // @todo Write code for opcode 0xF3 `DI`.
        // @todo Write code for opcode 0xED 0x56 `IM 1`.
        // Code for opcode 0xC3 `JP (nn)`.
        pc = 0x006F;
      },
      0x006F: function() {
        // @todo Write code for this opcode...
      }
    };
    pc = 0x0000;
    while (true) {
      blocks[pc]();
    }

    This technique is called recompilation.

    The reason it is fast is that each opcode and operand is read only once, when the JavaScript code is compiled. It is then easier for the JavaScript VM to optimise the generated code.

    Recompilation is said to be static when it uses static analysis to generate code. On the other hand, dynamic recompilation creates new JavaScript functions at runtime.

    In jsSMS, the emulator in which I implemented these techniques, the recompiler is made of 4 components:

    • Parser: determines which parts of the ROM are opcodes, operands and data
    • Analyser: groups instructions into blocks (e.g. a jump instruction closes a block and opens a new one) and outputs an AST (abstract syntax tree)
    • Optimiser: applies several passes to make the code even faster
    • Generator: converts the AST to JavaScript code

    Generating functions on the fly can take time. That’s why one approach is to use static recompilation and generate as much JavaScript code as possible before the game even starts. Then, because static recompilation is limited, whenever we find unparsed instructions at runtime, we generate new functions as the game is being played.
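    The runtime half of that hybrid approach fits in a few lines: keep a cache of compiled blocks keyed by address, and when execution reaches an address with no compiled block yet, generate a function for it on the fly. The sketch below illustrates the idea with the Function constructor; `translateBlock` is a hypothetical stand-in for jsSMS’s much larger parser/analyser/optimiser/generator pipeline.

```javascript
var blockCache = {}; // start address -> compiled JavaScript function

// Hypothetical helper: translate the instructions starting at `address`
// into JavaScript source that ends by updating `state.pc`.
// This stub just emits a jump back to address 0.
function translateBlock(address) {
  return 'state.pc = 0x0000;';
}

function run(state, steps) {
  while (steps--) {
    var fn = blockCache[state.pc];
    if (!fn) {
      // Dynamic recompilation: compile the block on first visit,
      // then reuse the compiled function on every later visit.
      fn = blockCache[state.pc] = new Function('state', translateBlock(state.pc));
    }
    fn(state);
  }
}
```

    One caveat: `new Function` counts as dynamic code execution, so under the CSP applied to privileged packaged apps (as seen in the previous article) it would be blocked; a recompiling emulator is therefore best shipped as a hosted web app.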

    So it is faster, but how much faster?

    According to the benchmarks I ran on mobile, recompilers are about 3-4 times faster than interpreters.

    Here are some benchmarks on different browser / device pairs:

    • Firefox OS v.1.1 Keon
    • Firefox OS v.1.1 Peak
    • Firefox 24 Samsung Galaxy S II
    • Firefox 24 LG Nexus 4

    Optimisation considerations

    When developing jsSMS, I applied many optimisations. Of course, the first thing was to implement the improvements suggested by this article about games for Firefox OS.

    Before getting more specific, keep in mind that emulators are a very particular type of gaming app. They have a limited number of variables and objects. This architecture is static and limited, and as such it is easy to optimise for performance.

    Use typed arrays wherever possible

    The resources of old consoles are limited, and most concepts can be mapped to typed arrays (stack, screen data, sound buffer…). Using such arrays makes it easier for the VM to optimise.
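    As an example, the Master System’s 8 KB of work RAM maps naturally onto a Uint8Array: every element is guaranteed to be an 8-bit unsigned integer, the array can never become sparse, and out-of-range writes wrap just like real 8-bit memory. (The size below matches the real console; the variable name is arbitrary.)

```javascript
// 8 KB of emulated work RAM as a typed array.
var ram = new Uint8Array(8 * 1024);

// Writes are truncated to 8 bits automatically:
ram[0] = 0x100 + 0x2A; // 298 wraps to 0x2A (42), like real 8-bit memory
```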

    Use dense arrays

    A dense array is an array without holes. The most common way to get one is to set the length at creation and fill it with default values. Of course this doesn’t apply to arrays of unknown or variable size.

    // Create an array of 255 items and prefill it with empty strings.
    var denseArray = new Array(255);
    for (var i = 0; i < 255; i++) {
      denseArray[i] = '';
    }

    Variables should be type stable

    The type inference engine of the JavaScript VM tags variables with their types and uses this information to apply optimisations. You can help it by not changing the types of variables as the game runs. This implies the following:

    • Set a default value at declaration: `var a = 0;` instead of `var a;`. Otherwise, the VM considers that the variable can be either a number or undefined.
    • Avoid recycling a variable for different types, e.g. a number then a string.
    • Make Boolean variables real Booleans. Avoid truthy or falsy values and use `!!` or `Boolean()` to coerce.
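    A small illustration of those rules (the variable names are arbitrary):

```javascript
var frames = 0;      // initialised at declaration, so always a number
var paused = false;  // initialised at declaration, so always a boolean

function update(input) {
  frames = frames + 1; // stays a number for its whole lifetime
  paused = !!input;    // coerce any truthy/falsy value to a real boolean
}
```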

    Some syntaxes are ambiguous to the VM. For example, the following code was tagged as having an unknown arithmetic type by SpiderMonkey:

    pc += d < 128 ? d : d - 256;

    A simple fix was to rewrite this to:

    if (d >= 128) {
      d = d - 256;
    }
    pc += d;

    Keep numeric types stable

    SpiderMonkey stores JavaScript numeric values differently depending on what they look like. It tries to map numbers to internal types (like u32 or float). The implication is that keeping the same underlying type is very likely to help the VM.
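    A common way to do that in hot code (a general JavaScript idiom, not something quoted from jsSMS) is to force intermediate results back to a 32-bit integer with `| 0`, so the value never silently becomes a double:

```javascript
// Emulate an 8-bit register increment: `| 0` keeps the intermediate
// result an integer internally, and `& 0xFF` wraps it to 8 bits.
function inc8(value) {
  return (value + 1 | 0) & 0xFF;
}
```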

    To track these type changes, I used to use JIT Inspector, an extension for Firefox that exposes some internals of SpiderMonkey. However, it is not compatible with the latest versions of Firefox and no longer produces useful output. There is a bug tracking the issue, but don’t expect any changes soon.

    … and as usual profile and optimise

    Using a JavaScript profiler will help you find the most frequently called functions. These are the ones you should focus on and optimise first.

    Digging deeper in code

    If you want to learn more about mobile emulation and recompilation, have a look at this talk in which the slides are actually a ROM running inside the emulator!


    Mobile emulation shows how fast the web platform is, even on low-end devices. Using the right techniques and applying optimisations allows your games to run smoothly and at full speed. Documentation about emulation in the browser is scarce on the net, especially when it comes to modern JavaScript APIs. Hopefully this article helps fill that gap.

    There are so many video game consoles and so few web-based emulators. So now, enough with the theory: let’s start making apps for the sake of retro gaming!

  5. awsbox, a PaaS layer for Node.js: An Update on Latest Developments

    This is the 2nd time we’ve talked about awsbox on the Mozilla Hacks blog. In the first article we gave you a quick introduction to awsbox as part of the Node.js Holiday Season set of articles. Here we’d like to tell you about some recently added features to awsbox.

    To briefly recap, awsbox is a minimalist PaaS layer for Node.js applications built on top of Amazon EC2. It is a DIY solution which allows you to create instances, set up DNS, run application servers, push new code to your instances and eventually destroy them, all in a matter of minutes.

    Since we first released awsbox, its usage has been steadily increasing; it is now downloaded from npm over 3,000 times every month. This has blown away our initial expectations, so perhaps we’ve plugged a gap between the ‘Infrastructure’ and the ‘Platform’ services currently available.

    Aim of awsbox

    The aim of awsbox is to be an easy-to-use yet configurable abstraction on top of the Amazon APIs that lets you create your own PaaS solution. However, it should also allow you to do more than a PaaS service does – but only if you want to.

    With that in mind, we have recently added a number of new features which allow you to take more control of your deployments. In general we’re aiming for speedy setup of development environments which can quicken the process of development, rather than for production deployments (though that doesn’t mean awsbox couldn’t be battle-hardened further for that purpose).

    New Features

    Nginx is now being used as the reverse proxy to your application. Since your webserver is run by an unprivileged user on the box (ie. the ‘app’ user) we need a way to listen on port 80 or 443 and proxy through to your app. Whilst this job was admirably done in the past with http-proxy (on npm), we decided Nginx would complement awsbox better now and in the future. Having the ability to drop config files into Nginx means we can start to add more features such as multiple webservers, multiple apps or serving from multiple subdomains.

    Another new feature is the ability to automatically point a subdomain to your instance using Route53. Congregating around another AWS service rather than a separate one means we only have to worry about one set of credentials in your environment. Subdomain creation and deletion are performed automatically whenever you create or destroy an instance using awsbox, which helps keep things clean.

    Some of our team work in Europe as well as North America and a few of us are on the other side of the world. Instead of taking our requests half way around the world to our instances, we decided to bring our instances to us. Our base AMI, which used to live only in the ‘us-east-1’ region, is now available in all AWS regions, including ‘eu-west-1’ for our European contingent and ‘ap-southeast-2’ too. Being able to create an instance in Sydney makes me very happy. :)

    With so many people constantly creating, re-using and destroying instances, we also thought it would be useful to be able to search for any instance by whatever criteria. As well as listing all VMs, you can now find them by IP address, AMI, instance ID, name, tags or in fact any of 12 different criteria. This makes it super easy to find the instance you’re looking for.

    And finally … no-one likes to spend more money than they need to, so we now have the ability to help figure out who launched which instance (so we can ask them if we can terminate it!). The AWS_EMAIL env var is added as a tag to the instance upon creation so that we know who to chat to if we need to reduce our bill.

    New Commands

    With these recent architectural changes we’ve also added a number of extra commands to help you with managing both your instances and your DNS. Now that we’re multi-region, there are a few new commands related to that:

    # lists all AWS regions for EC2
    $ awsbox.js regions
    # lists all regions and their availability zones
    $ awsbox.js zones

    We can also now list all of our domains and their subdomains in Route53, as well as search for which records point to an IP address:

    # lists all domains in Route53
    $ awsbox.js listdomains
    # lists all resource records in this zone
    $ awsbox.js listhosts
    # find which subdomains/domains point to an ip address
    $ awsbox.js findbyip
    # delete this subdomain
    $ awsbox.js deleterecord

    To help with AMI management, there is now a command which takes an existing instance, tidies it up, creates an AMI from it and then copies that AMI to all of the other available regions:

    # create an ami and copy to all regions
    $ awsbox.js createami ami-adac0de1

    And finally a few commands which can help with determining who owns which instance:

    # search instance metadata for some text
    $ awsbox.js search
    $ awsbox.js search ami-adac0de1
    $ awsbox.js search persona
    # show meta info related to an instance
    $ awsbox.js describe i-1cec001
    # claim an instance as your own
    $ awsbox.js claim i-b10cfa11
    # list all unclaimed instances
    $ awsbox.js unclaimed

    These are all in addition to the existing commands which now total 21, all to help manage your own deployments.

    The Future

    There has been a small renaissance in awsbox development recently, with many people chipping in with new features. It is a valuable tool for the Persona team since it enables us to stand up instances rather quickly, have a poke around either informally or formally, and throw them away as quickly as we created them (if not quicker)! And we don’t have to feel guilty about this either, since acquiring a server on demand is par for the course in these days of IaaS.

    We’ve also congregated around using more services within AWS itself. By moving the backend AWS API library to AwsSum we’re now able to talk to more AWS services than before and hopefully can leverage these to help make development deploys quicker and easier too.

    However, we also feel that awsbox can get better still. We have some ideas for the future but we always welcome ideas or code from you guys too. Feel free to take a look around the docs and issues and leave a comment or two. If you’ve got great code to go with those ideas then we’ll be happy to review a pull request too – after all awsbox is open source.

  6. Introducing TogetherJS

    What is TogetherJS?

    We’d like to introduce TogetherJS, a real-time collaboration tool out of Mozilla Labs.

    TogetherJS is a service you add to an existing website to give it real-time collaboration features. Using the tool, two or more visitors on a website or web application can see each other’s mouse/cursor position, clicks, track each other’s browsing, edit forms together, watch videos together, and chat via text and audio using WebRTC.

    Some of the features TogetherJS includes:

    • See the other person’s cursor and clicks
    • See scroll position
    • Watch the pages a person visits on a site
    • Text chat
    • Audio chat using WebRTC
    • Form field synchronization (text fields, checkboxes, etc)
    • Play/pause/track videos in sync
    • Continue sessions across multiple pages on a site

    How to integrate

    Many of TogetherJS’s features require no modification of your site. TogetherJS looks at the DOM and determines much of what it should do that way – it detects the form fields, detects some editors like CodeMirror and Ace, and injects its toolbar into your page.

    All that’s required to try TogetherJS out is to add this to your page:

    <script src=""></script>

    And then create a button for your users to start TogetherJS:

    <button id="collaborate" type="button">Collaborate</button>
    <script>
      document.getElementById("collaborate")
        .addEventListener("click", TogetherJS, false);
    </script>

    If you want to see some of what TogetherJS does, jsFiddle has enabled TogetherJS:

    jsfiddle with Collaborate highlighted

    Just click on Collaborate and it will start TogetherJS. You can also use TogetherJS in your own fiddles, as we’ll show below.

    Extending for your app

    TogetherJS can figure out some things by looking at the DOM, but it can’t synchronize your JavaScript application. For instance, if you have a list of items in your application that is updated through JavaScript, that list won’t automatically be in sync for both users. Sometimes people expect (or at least hope) that it will automatically update, but even if we did synchronize the DOM across both pages, we can’t synchronize your underlying JavaScript objects. Unlike products like Firebase or the Google Drive Realtime API, TogetherJS does not give you realtime persistence – your persistence and the functionality of your site are left up to you; we just synchronize sessions in the browser itself.

    So if you have a rich JavaScript application you will have to write some extra code to keep sessions in sync. We do try to make it easier, though!

    To give an example we’d like to use a simple drawing application. We’ve published the complete example as a fiddle which you can fork and play with yourself.

    A Very Small Drawing Application

    We start with a very simple drawing program. We have a simple canvas:

    <canvas id="sketch"
            style="height: 400px; width: 400px; border: 1px solid #000">
    </canvas>

    And then some setup:

    // get the canvas element and its context
    var canvas = document.querySelector('#sketch');
    var context = canvas.getContext('2d');
    // brush settings
    context.lineWidth = 2;
    context.lineJoin = 'round';
    context.lineCap = 'round';
    context.strokeStyle = '#000';

    We’ll use mousedown and mouseup events on the canvas to register our move() handler for the mousemove event:

    var lastMouse = {
      x: 0,
      y: 0
    };

    // attach the mousedown, mousemove, mouseup event listeners.
    canvas.addEventListener('mousedown', function (e) {
        lastMouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        canvas.addEventListener('mousemove', move, false);
    }, false);

    canvas.addEventListener('mouseup', function () {
        canvas.removeEventListener('mousemove', move, false);
    }, false);

    And then the move() function will figure out the line that needs to be drawn:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        lastMouse = mouse;
    }

    And lastly a function to draw lines:

    function draw(start, end) {
        context.beginPath();
        context.moveTo(start.x, start.y);
        context.lineTo(end.x, end.y);
        context.stroke();
    }

    This is enough code to give us a very simple drawing application. At this point if you enable TogetherJS on this application you will see the other person move around and see their mouse cursor and clicks, but you won’t see drawing. Let’s fix that!

    Adding TogetherJS

    TogetherJS has a “hub” that echoes messages between everyone in the session. It doesn’t interpret messages, and everyone’s messages travel back and forth, including messages that come from a person who might be on another page. TogetherJS also lets the application send its own messages like:

    TogetherJS.send({
      type: "message-type",
      ...any other attributes you want to send...
    });

    to send a message (every message must have a type), and to listen:

    TogetherJS.hub.on("message-type", function (msg) {
      if (! msg.sameUrl) {
        // Usually you'll test for this to discard messages that came
        // from a user at a different page
        return;
      }
    });

    The message types are namespaced so that your application messages won’t accidentally overlap with TogetherJS’s own messages.
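
    Conceptually the hub is just an echo server with namespaced message types. The following is a purely illustrative sketch of that idea (not TogetherJS source; all names are made up):

    ```javascript
    // Application types get an "app." prefix so they can never collide
    // with internal "togetherjs.*" messages.
    function namespaced(type) {
      return type.indexOf('togetherjs.') === 0 ? type : 'app.' + type;
    }

    function Hub() {
      this.listeners = {};
    }
    Hub.prototype.on = function (type, fn) {
      var key = namespaced(type);
      (this.listeners[key] = this.listeners[key] || []).push(fn);
    };
    Hub.prototype.send = function (msg) {
      // Echo the message to every listener registered for this type.
      (this.listeners[namespaced(msg.type)] || []).forEach(function (fn) {
        fn(msg);
      });
    };

    var hub = new Hub();
    var received = [];
    hub.on('draw', function (msg) { received.push(msg); });
    hub.send({ type: 'draw', start: {x: 0, y: 0}, end: {x: 5, y: 5} });
    console.log(received.length); // 1
    ```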

    To synchronize drawing we’d want to watch for any lines being drawn and send those to the other peers:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        if (TogetherJS.running) {
            TogetherJS.send({type: "draw", start: lastMouse, end: mouse});
        }
        lastMouse = mouse;
    }

    Before we send we check that TogetherJS is actually running (TogetherJS.running). The message we send should be self-explanatory.

    Next we have to listen for the messages:

    TogetherJS.hub.on("draw", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        draw(msg.start, msg.end);
    });

    We don’t have to worry about whether TogetherJS is running when we register this listener, it can only be called when TogetherJS is running.

    This is enough to make our drawing live and collaborative. But there’s one thing we’re missing: if I start drawing an image, and you join me, you’ll only see the new lines I draw, you won’t see the image I’ve already drawn.

    To handle this we’ll listen for the togetherjs.hello message, which is the message each client sends when it first arrives at a new page. When we see that message we’ll send the other person an image of our canvas:

    TogetherJS.hub.on("togetherjs.hello", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = canvas.toDataURL("image/png");
        TogetherJS.send({
            type: "init",
            image: image
        });
    });

    Now we just have to listen for this new init message:

    TogetherJS.hub.on("init", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = new Image();
        image.onload = function () {
            context.drawImage(image, 0, 0);
        };
        image.src = msg.image;
    });

    With just a few lines of code, TogetherJS let us make a live drawing application. Of course we had to write some of the code ourselves, but here are some of the things TogetherJS handles for us:

    • Gives users a URL to share with another user to start the session
    • Establishes a WebSocket connection to our hub server, which echoes messages back and forth between clients
    • Lets users set their name and avatar, and see who else is in the session
    • Keeps track of who is available, who has left, and who is idle
    • Provides simple but necessary features like text chat
    • Handles session initialization and tracking

    Some of the things we didn’t do in this example:

    • We used a fixed-size canvas so that we didn’t have to deal with two clients and two different resolutions. Generally TogetherJS handles clients of different kinds by using resolution-independent positioning (and it even works with responsive design). One approach to fix this might be to ensure a fixed aspect ratio, and then use percentages of the height/width for all the drawing positions.
    • We don’t have any fun drawing tools! Probably you wouldn’t want to synchronize the tools themselves – if I’m drawing with a red brush, there’s no reason you can’t be drawing with a green brush at the same time.
    • But something like clearing the canvas should be synchronized.
    • We don’t save or load any drawings. Once the drawing application has save and load you may have to think more about what you want to synchronize. If I have created a picture, saved it, and then return to the site to join your session, will your image overwrite mine? Putting each image at a unique URL will make it clearer whose image everyone is intending to edit.

    Want To Look At More?

    • Curious about the architecture of TogetherJS? Read the technology overview.
    • Try TogetherJS out on jsFiddle
    • Find us via the button in the documentation: “Get Live Help” which will ask to start a TogetherJS session with one of us.
    • Find us on IRC in #togetherjs on
    • Find the code on GitHub, and please open an issue if you see a bug or have a feature request. Don’t be shy, we are interested in lots of kinds of feedback via issues: ideas, potential use cases (and challenges coming from those use cases), questions that don’t seem to be answered via our documentation (each of which also implies a bug in our documentation), telling us about potentially synergistic applications.
    • Follow us on Twitter: @togetherjs.

    What kind of sites would you like to see TogetherJS on? We’d love to hear in the comments.

  7. An AR Game: Technical Overview

    An AR Game is the winning entry from the May 2013 Dev Derby. It is an augmented reality game: the objective is to transport rolling play pieces from a 2D physics world into a 3D space. The game is playable on GitHub, and demonstrated on YouTube. This article describes the underlying approaches to the game’s design and engineering.

    Technically the game is a simple coupling of four sophisticated open source technologies: WebRTC, JSARToolkit, ThreeJS, and Box2D.js. This article describes each one and explains how we wove them together. We will work in a stepwise fashion, constructing the game from the ground up. The code discussed in this article is available on GitHub, with a tag and live link for each tutorial step. Specific bits of summarized source are referenced in this document, with the full source available through the ‘diff’ links. Videos demonstrating application behaviour are provided where appropriate.

    git clone

    This article will first discuss the AR panel (realspace), then the 2D panel (flatspace), and conclude with a description of their coupling.

    Panel of Realspace

    Realspace is what the camera sees — overlaid with augmented units.

    Begin with a Skeleton

    git checkout example_0
    live, diff, tag

    We will organize our code into modules using RequireJS. The starting point is a main module with two skeletal methods common to games: initialize() to invoke startup, and tick() for rendering every frame. Notice that the gameloop is driven by repeated calls to requestAnimationFrame:

    requirejs([], function() {
        // Initializes components and starts the game loop
        function initialize() {
        }

        // Runs one iteration of the game loop
        function tick() {
            // Request another iteration of the gameloop
            window.requestAnimationFrame(tick);
        }

        // Start the application
        initialize();
        window.requestAnimationFrame(tick);
    });
    The code so far gives us an application with an empty loop. We will build up from this foundation.

    Give the Skeleton an Eye

    git checkout example_1
    live, diff, tag

    AR games require a realtime video feed: HTML5’s WebRTC provides this through access to the camera, so AR games are possible in modern browsers like Firefox. Good documentation concerning WebRTC and getUserMedia is readily available, so we won’t include the basics here.

    A camera library is provided in the form of a RequireJS module named webcam.js, which we’ll incorporate into our example.

    First the camera must be initialized and authorized. The webcam.js module invokes a callback on user consent, then for each tick of the gameloop a frame is copied from the video element to a canvas context. This is important because it makes the image data accessible. We’ll use it in subsequent sections, but for now our application is simply a canvas updated with a video frame at each tick.

    Something Akin to a Visual Cortex

    git checkout example_2
    live, diff, tag

    JSARToolkit is an augmented reality engine. It identifies and describes the orientation of fiducial markers in an image. Each marker is uniquely associated with a number. The markers recognized by JSARToolkit are available here as PNG images named according to their ID number (although as of this writing the lack of PNG extensions confuses Github.) For this game we will use #16 and #32, consolidated onto a single page:

    JSARToolkit found its beginnings as ARToolkit, which was written in C++ at the University of Washington’s HITLab in Seattle. From there it has been forked and ported to a number of languages including Java, then from Java to Flash, and finally from Flash to JS. This ancestry causes some idiosyncrasies and inconsistent naming, as we’ll see.

    Let’s take a look at the distilled functionality:

     // The raster object is the canvas to which we are copying video frames.
     var JSARRaster = new NyARRgbRaster_Canvas2D(canvas);

     // The parameters object specifies the pixel dimensions of the input stream.
     var JSARParameters = new FLARParam(canvas.width, canvas.height);

     // The MultiMarkerDetector is the marker detection engine.
     var JSARDetector = new FLARMultiIdMarkerDetector(JSARParameters, 120);

     // Run the detector on a frame, which returns the number of markers detected.
     var threshold = 64;
     var count = JSARDetector.detectMarkerLite(JSARRaster, threshold);

    Once a frame has been processed by JSARDetector.detectMarkerLite(), the JSARDetector object contains an index of detected markers. JSARDetector.getIdMarkerData(index) returns the ID number, and JSARDetector.getTransformMatrix(index) returns the spatial orientation. Using these methods is somewhat complicated, but we’ll wrap them in usable helper methods and call them from a loop like this:

    var markerCount = JSARDetector.detectMarkerLite(JSARRaster, 90);
    for (var index = 0; index < markerCount; index++) {
        // Get the ID number of the detected marker.
        var id = getMarkerNumber(index);

        // Get the transformation matrix of the detected marker.
        var matrix = getTransformMatrix(index);
    }

    Since the detector operates on a per-frame basis it is our responsibility to maintain marker state between frames. For example, any of the following may occur between two successive frames:

    • a marker is first detected
    • an existing marker’s position changes
    • an existing marker disappears from the stream.

    The state tracking is implemented using ardetector.js. To use it we instantiate a copy with the canvas receiving video frames:

    // create an AR Marker detector using the canvas as the data source
    var detector = ardetector.create( canvas );

    And with each tick the canvas image is scanned by the detector, triggering callbacks as needed:

    // Ask the detector to make a detection pass. 
    detector.detect( onMarkerCreated, onMarkerUpdated, onMarkerDestroyed );

    As can be deduced from the code, our application now detects markers and writes its discoveries to the console.
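
    The kind of per-frame diffing ardetector.js performs can be sketched as follows. This is an illustrative outline, not the actual ardetector.js source; the function and marker representations are made up:

    ```javascript
    // Compare this frame's markers against the previous frame's, firing the
    // appropriate callback for each created, updated, or vanished marker.
    // `prev` and `curr` map marker IDs to their detection data.
    function diffMarkers(prev, curr, onCreated, onUpdated, onDestroyed) {
      for (var id in curr) {
        if (!(id in prev)) onCreated(id, curr[id]);
        else onUpdated(id, curr[id]);
      }
      for (var id in prev) {
        if (!(id in curr)) onDestroyed(id);
      }
      return curr; // becomes `prev` on the next frame
    }

    var log = [];
    var prev = { 16: 'matrix-a' };
    var curr = { 16: 'matrix-b', 32: 'matrix-c' };
    diffMarkers(prev, curr,
      function (id) { log.push('created ' + id); },
      function (id) { log.push('updated ' + id); },
      function (id) { log.push('destroyed ' + id); });
    console.log(log.join(', ')); // updated 16, created 32
    ```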

    Reality as a Plane

    git checkout example_3
    live, diff, tag

    An augmented reality display consists of a reality view overlaid with 3D models. Rendering such a display normally consists of two steps. The first is to render the reality view as captured by the camera. In the previous examples we simply copied that image to a canvas. But we want to augment the display with 3D models, and that requires a WebGL canvas. The complication is that a WebGL canvas has no 2D context into which we can copy an image. Instead we render a textured plane into the WebGL scene, using images from the webcam as the texture. ThreeJS can use a canvas as a texture source, so we can feed the canvas receiving the video frames into it:

    // Create a texture linked to the canvas.
    var texture = new THREE.Texture(canvas);

    ThreeJS caches textures, therefore each time a video frame is copied to the canvas a flag must be set to indicate that the texture cache should be updated:

    // We need to notify ThreeJS when the texture has changed.
    function update() {
        texture.needsUpdate = true;
    }

    This results in an application which, from the perspective of a user, is no different than example_2. But behind the scenes it’s all WebGL; the next step is to augment it!

    Augmenting Reality

    git checkout example_4
    live, diff, tag, movie

    We’re ready to add augmented components to the mix: these will take the form of 3D models aligned to markers captured by the camera. First we must allow the ardetector and ThreeJS to communicate, and then we’ll be able to build some models to augment the fiducial markers.

    Step 1: Transformation Translation

    Programmers familiar with 3D graphics will know that the rendering process requires two matrices: the model matrix (transformation) and a camera matrix (projection). These are supplied by the ardetector we implemented earlier, but they cannot be used as is — the matrix arrays provided by ardetector are incompatible with ThreeJS. For example, the helper method getTransformMatrix() returns a Float32Array, which ThreeJS does not accept. Fortunately the conversion is straightforward and easily done through a prototype extension, also known as monkey patching:

    // Allow Matrix4 to be set using a Float32Array
    THREE.Matrix4.prototype.setFromArray = function(m) {
        return this.set(
            m[0], m[4], m[8],  m[12],
            m[1], m[5], m[9],  m[13],
            m[2], m[6], m[10], m[14],
            m[3], m[7], m[11], m[15]
        );
    };

    This allows us to set the transformation matrix, but in practice we’ll find that updates have no effect. This is because of ThreeJS’s caching. To accommodate such changes we construct a container object and set the matrixAutoUpdate flag to false. Then for each update to the matrix we set matrixWorldNeedsUpdate to true.
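
    The container pattern can be sketched as below. This is illustrative only: a stand-in Object3D is defined so the snippet runs outside the browser, whereas in the game the container would be a real THREE.Object3D:

    ```javascript
    // Stand-in for THREE.Object3D, for illustration purposes only.
    function Object3D() {
      this.matrix = {
        elements: null,
        setFromArray: function (m) { this.elements = m; }
      };
      this.matrixAutoUpdate = true;
      this.matrixWorldNeedsUpdate = false;
    }

    function createContainer() {
      var container = new Object3D();
      container.matrixAutoUpdate = false;     // we drive the matrix manually
      container.transformFromArray = function (m) {
        this.matrix.setFromArray(m);          // the setFromArray helper above
        this.matrixWorldNeedsUpdate = true;   // invalidate ThreeJS's cached matrix
      };
      return container;
    }

    var c = createContainer();
    c.transformFromArray([1, 0, 0, 0 /* ... 16 elements in practice */]);
    console.log(c.matrixAutoUpdate, c.matrixWorldNeedsUpdate); // false true
    ```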

    Step 2: Cube Marks the Marker

    Now we’ll use our monkey patches and container objects to display colored cubes as augmented markers. First we make a cube mesh, sized to fit over the fiducial marker:

    function createMarkerMesh(color) {
        var geometry = new THREE.CubeGeometry( 100, 100, 100 );
        var material = new THREE.MeshPhongMaterial( { color: color, side: THREE.DoubleSide } );
        var mesh = new THREE.Mesh( geometry, material );
        // Negative half the height makes the object appear "on top" of the AR Marker.
        mesh.position.z = -50;
        return mesh;
    }

    Then we enclose the mesh in the container object:

    function createMarkerObject(params) {
        var modelContainer = createContainer();
        var modelMesh = createMarkerMesh(params.color);
        modelContainer.add( modelMesh );

        function transform(matrix) {
            modelContainer.transformFromArray( matrix );
        }
        modelContainer.transform = transform;
        return modelContainer;
    }

    Next we generate marker objects, each one corresponding to a marker ID number:

    // Create marker objects associated with the desired marker ID.
    var markerObjects = {
        16: arobject.createMarkerObject({color:0xAA0000}), // Marker #16, red.
        32: arobject.createMarkerObject({color:0x00BB00})  // Marker #32, green.
    };

    The ardetector.detect() callbacks apply the transformation matrix to the associated marker. For example, here the onCreate handler adds the transformed model to the arview:

    // This function is called when a marker is initially detected on the stream.
    function onMarkerCreated(marker) {
        var object = markerObjects[marker.id];
        // Set the object's initial transformation matrix.
        object.transform( marker.matrix );
        // Add the object to the scene.
        view.add( object );
    }

    Our application is now a functioning example of augmented reality!

    Making Holes

    In An AR Game the markers are more complex than coloured cubes. They are “warpholes”, which appear to go *into* the marker page. The effect requires a bit of trickery, so for the sake of illustration we’ll construct the effect in three steps.

    Step 1: Open the Cube

    git checkout example_5
    live, diff, tag, movie

    First we remove the top face of the cube to create an open box. This is accomplished by setting the face’s material to be invisible. The open box is positioned behind/underneath the marker page by adjusting the Z coordinate to half of the box height.

    The effect is interesting, but unfinished — and perhaps it is not immediately clear why.

    Step 2: Cover the Cube in Blue

    git checkout example_6
    live, diff, tag, movie

    So what’s missing? We need to hide the part of the box which juts out from ‘behind’ the marker page. We’ll accomplish this by first enclosing the box in a slightly larger box. This box will be called an “occluder”, and in step 3 it will become an invisibility cloak. For now we’ll leave it visible and colour it blue, as a visual aid.

    The occluder objects and the augmented objects are rendered into the same context, but in separate scenes:

    function render() {
        // Render the reality scene
        // Render the occluder scene
        renderer.render( occluder.scene, occluder.camera );
        // Render the augmented components on top of the reality scene.
    }

    This blue jacket doesn’t yet contribute much to the “warphole” illusion.

    Step 3: Cover the Cube In Invisibility

    git checkout example_7
    live, diff, tag, movie

    The illusion requires that the blue jacket be invisible while retaining its occluding ability — it should be an invisible occluder. The trick is to deactivate the colour buffers, thereby rendering only to the depth buffer. The render() method now becomes:

    function render() {
        // Render the reality scene
        // Deactivate color and alpha buffers, leaving only depth buffer active.
        // Render the occluder scene
        renderer.render( occluder.scene, occluder.camera );
        // Reactivate color and alpha buffers.
        // Render the augmented components on top of the reality scene.
    }

    This results in a much more convincing illusion.

    Selecting Holes

    git checkout example_8
    live, diff, tag

    An AR Game allows the user to select which warphole to open by positioning the marker underneath a targeting reticule. This is a core aspect of the game, and it is technically known as object picking. ThreeJS makes this a fairly simple thing to do. The key classes are THREE.Projector() and THREE.Raycaster(), but there is a caveat: despite the key method having a name of Raycaster.intersectObject(), it actually takes a THREE.Mesh as the parameter. Therefore we add a mesh named “hitbox” to createMarkerObject(). In our case it is an invisible geometric plane. Note that we are not explicitly setting a position for this mesh, leaving it at the default (0,0,0), relative to the markerContainer object. This places it at the mouth of the warphole object, in the plane of the marker page, which is where the face we removed would be if we hadn’t removed it.

    Now that we have a testable hitbox, we make a class called Reticle to handle intersection detection and state tracking. Reticle notifications are incorporated into the arview by including a callback when we add an object with arview.add(). This callback will be invoked whenever the object’s selection state changes, for example:

    view.add( object, function(isSelected) {
        onMarkerSelectionChanged( marker.id, isSelected );
    });

    The player is now able to select augmented markers by positioning them at the center of the screen.


    git checkout example_9
    live, diff, tag

    Our augmented reality functionality is essentially complete. We are able to detect markers in webcam frames and align 3D objects with them. We can also detect when a marker has been selected. We’re ready to move on to the second key component of An AR Game: the flat 2D space from which the player transports play pieces. This will require a fair amount of code, and some preliminary refactoring would help keep everything neat. Notice that a lot of AR functionality is currently in the main application.js file. Let’s excise it and place it into a dedicated module named realspace.js, leaving our application.js file much cleaner.

    Panel of Flatspace

    git checkout example_10
    live, diff, tag

    In An AR Game the player’s task is to transfer play pieces from a 2D plane to a 3D space. The realspace module implemented earlier serves as the 3D space. Our 2D plane will be managed by a module named flatspace.js, which begins as a skeletal pattern similar to those of application.js and realspace.js.

    The Physics

    git checkout example_11
    live, diff, tag

    The physics of the realspace view comes free with nature. But the flatspace pane uses simulated 2D physics, and that requires physics middleware. We’ll use Box2D.js, a JavaScript port of the famous Box2D engine, generated from the original C++ via LLVM and Emscripten.

    Box2D is a rather complex piece of software, but it is well documented and well described elsewhere, so this article will for the most part refrain from repeating that material. We will instead describe the common issues encountered when using Box2D, introduce a solution in the form of a module, and describe its integration into flatspace.js.

    First we build a wrapper for the raw Box2D.js world engine and name it boxworld.js. This is then integrated into flatspace.

    This does not yield any outwardly visible effects, but in reality we are now simulating an empty space.
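    Although boxworld.js's internals aren't reproduced here, one common pattern for driving b2World.Step() is worth sketching (a hypothetical simplification, not necessarily the module's actual code): browser frames arrive at varying intervals, while Box2D wants a fixed timestep, so frame time is banked in an accumulator and consumed in constant increments:

```javascript
// Fixed-timestep accumulator: elapsed frame time is banked, and the
// world is stepped in constant increments so the simulation stays
// deterministic regardless of the frame rate.
function createStepper(stepWorld, dt) {
  var accumulator = 0;
  return function update(elapsedSeconds) {
    accumulator += elapsedSeconds;
    var steps = 0;
    while (accumulator >= dt) {
      stepWorld(dt);      // e.g. world.Step(dt, velocityIters, positionIters)
      accumulator -= dt;
      steps += 1;
    }
    return steps;         // how many fixed steps this frame produced
  };
}
```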

    The Visualization

    It would be helpful to be able to see what’s happening. Box2D thoughtfully provides debug rendering, and Box2D.js facilitates it through something like virtual functions. The functions will draw to a canvas context, so we’ll need to create a canvas and then supply the VTable with draw methods.

    Step 1: Make A Metric Canvas

    git checkout example_12
    live, diff, tag

    The canvas will map a Box2D world. A canvas measures in pixels, whereas Box2D describes its space in meters, so we need methods that convert between the two using a pixels-to-meter ratio. We also align the coordinate origins. These methods are associated with a canvas, and everything is wrapped into the boxview.js module, which makes it easy to incorporate into flatspace:

    It is instantiated during initialization, and its canvas is then added to the DOM:

    view = boxview.create({
        // configuration options elided
    });
    document.getElementById("flatspace").appendChild( view.canvas );
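    The conversion helpers themselves might look roughly like this (a sketch; the ratio, names, and origin choice are ours rather than boxview.js's actual API):

```javascript
// Hypothetical sketch of boxview-style unit conversion. One constant
// relates the two coordinate systems; everything else follows from it.
var PIXELS_PER_METER = 30;   // assumed ratio, not the game's actual value

function pixelsToMeters(px) {
  return px / PIXELS_PER_METER;
}

function metersToPixels(m) {
  return m * PIXELS_PER_METER;
}

// One way to "align the origins": map a Box2D point (meters, origin at
// canvas center, y up) to canvas pixels (origin at top-left, y down).
function worldToCanvas(point, canvasWidth, canvasHeight) {
  return {
    x: canvasWidth / 2 + metersToPixels(point.x),
    y: canvasHeight / 2 - metersToPixels(point.y)
  };
}
```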

    There are now two canvases on the page: the flatspace and the realspace. A bit of CSS in application.css puts them side by side:

    #realspace { /* … */ }
    #flatspace { /* … */ }

    Step 2: Assemble A Drafting Kit

    git checkout example_13
    live, diff, tag

    As mentioned previously, Box2D.js provides hooks for drawing a debug sketch of the world. They are accessed via a VTable through the customizeVTable() method, and subsequently invoked by b2World.DrawDebugData(). We’ll take the draw methods from kripken’s description, and wrap them in a module called boxdebugdraw.js.

    Now we can draw, but have nothing to draw. We need to jump through a few hoops first!

    The Bureaucracy

    A Box2D world is populated by entities called Bodies. Adding a body to the boxworld subjects it to the laws of physics, but it must also comply with the rules of the game. For this we create a set of governing structures and methods to manage the population. Their application simplifies body creation, collision detection, and body destruction. Once these structures are in place we can begin to implement the game logic, building the system to be played.


    git checkout example_14
    live, diff, tag

    Let’s liven up the simulation with some creation. Box2D Body construction is somewhat verbose, involving fixtures and shapes and physical parameters. So we’ll stow our body creation methods in a module named boxbody.js. To create a body we pass a boxbody method to boxworld.add(). For example:

    function populate() {
        var ball = world.add( /* a boxbody creation method */ );
    }

    This yields an undecorated ball in midair experiencing the influence of gravity. Under contemplation it may bring to mind a particular whale.


    git checkout example_15
    live, diff, tag

    We must be able to keep track of the bodies populating flatworld. Box2D provides access to a body list, but it’s a bit too low level for our purposes. Instead we’ll use a field of b2Body named userData. To this we assign a unique ID number subsequently used as an index to a registry of our own design. It is implemented in boxregistry.js, and is a key aspect of the flatspace implementation. It enables the association of bodies with decorative entities (such as sprites), simplifies collision callbacks, and facilitates the removal of bodies from the simulation. The implementation details won’t be described here, but interested readers can refer to the repo to see how the registry is instantiated in boxworld.js, and how the add() method returns wrapped-and-registered bodies.
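    The essence of such a registry can be sketched in a few lines (a hypothetical simplification, not boxregistry.js's actual implementation): each body's user-data slot carries an ID that indexes a map of our own wrapper objects:

```javascript
// Minimal registry sketch: bodies are looked up by the ID stored in
// their user data, so collision handlers can recover our wrapper object
// (with its callbacks and decorations) from a raw Box2D body.
function createRegistry() {
  var nextId = 0;
  var entries = {};
  return {
    register: function (body) {
      var id = nextId++;
      body.userData = id;   // stand-in for Box2D.js's SetUserData()
      var wrapped = { id: id, body: body, onContact: null };
      entries[id] = wrapped;
      return wrapped;
    },
    lookup: function (body) {
      return entries[body.userData];
    }
  };
}
```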


    git checkout example_16
    live, diff, tag

    Box2D collision detection is complicated: the native callback simply hands you two fixtures, raw and unordered, and it reports every collision that occurs in the world, which makes for a lot of conditional checks. The boxregistry.js module helps manage this data overload. Through it we assign an onContact callback to registered objects. When a Box2D collision handler is triggered we query the registry for the associated objects and check for the presence of a callback. If an object has a callback defined, we know its collision activity is of interest. To use this functionality in flatspace.js, we simply assign a collision callback to a registered object:

    function populate() {
        var ground = world.add( /* a boxbody creation method */ );
        var ball = world.add( /* a boxbody creation method */ );
        ball.onContact = function(object) {
            console.log("The ball has contacted:", object);
        };
    }


    git checkout example_17
    live, diff, tag

    Removing bodies is complicated by the fact that Box2D does not allow calls to b2World.DestroyBody() from within b2World.Step(). This is significant because usually you’ll want to delete a body because of a collision, and collision callbacks occur during a simulation step: this is a conundrum! One solution is to queue bodies for deletion, then process the queue outside of the simulation step. The boxregistry addresses the problem by furnishing a flag, isMarkedForDeletion, for each object. The collection of registered objects is iterated and listeners are notified of the deletion request. The iteration happens after a simulation step, so the deletion callback cleanly destroys the bodies. Perceptive readers may notice that we are now checking the isMarkedForDeletion flag before invoking collision callbacks.

    This happens transparently as far as flatspace.js is concerned, so all we need to do is set the deletion flag for a registered object:

    ball.onContact = function(object) {
        console.log("The ball has contacted:", object);
        ball.isMarkedForDeletion = true;
    };

    Now the body is deleted on contact with the ground.
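    The deferred-deletion sweep that runs after each simulation step can be sketched like this (a hypothetical simplification of what the registry does):

```javascript
// Sweep after each simulation step: destroy flagged bodies only when it
// is safe to do so (never from inside b2World.Step()), and keep the rest.
function sweepDeleted(objects, destroyBody) {
  var survivors = [];
  objects.forEach(function (obj) {
    if (obj.isMarkedForDeletion) {
      destroyBody(obj.body);   // e.g. world.DestroyBody(obj.body)
    } else {
      survivors.push(obj);
    }
  });
  return survivors;
}
```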


    git checkout example_18
    live, diff, tag

    When a collision is detected, An AR Game needs to know what the object has collided with. To this end we add an is() method to registry objects for comparing them. We can now add a conditional deletion to our game:

    ball.onContact = function(object) {
        console.log("The ball has contacted:", object);
        if( object.is( ground ) ) {
            ball.isMarkedForDeletion = true;
        }
    };

    A 2D Warphole

    git checkout example_19
    live, diff, tag

    We’ve already discussed the realspace warpholes; now we’ll implement their flatspace counterparts. The flatspace warphole is simply a body consisting of a Box2D sensor. The ball should pass over a closed warphole, but through an open one. Now imagine an edge case where the ball is sitting over a closed warphole which is then opened. The problem is that Box2D’s onBeginContact handler behaves true to its name: contact was detected while the warphole was closed, and no new event fires when it opens, so the ball is never warped and we’re left with a bug. Our fix is to use a cluster of sensors. With a cluster there will be a series of BeginContact events as the ball moves across the warphole, so we can be confident that opening a warphole while the ball is over it results in a warp. The sensor cluster generator is named hole and is implemented in boxbody.js.
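    One way to generate such a cluster (a sketch; the real hole generator in boxbody.js may differ) is to compute center offsets for an n-by-n grid of small sensor fixtures covering the warphole, so a moving ball keeps crossing fixture boundaries and keeps producing fresh BeginContact events:

```javascript
// Generate center offsets (in meters, relative to the warphole center)
// for an n-by-n grid of small sensor fixtures covering a square
// warphole with the given side length.
function sensorClusterOffsets(n, size) {
  var cell = size / n;
  var offsets = [];
  for (var row = 0; row < n; row++) {
    for (var col = 0; col < n; col++) {
      offsets.push({
        x: -size / 2 + cell * (col + 0.5),
        y: -size / 2 + cell * (row + 0.5)
      });
    }
  }
  return offsets;
}
```

    Each offset would become one small sensor fixture attached to the warphole body.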

    The Conduit

    At this point we’ve made JSARToolkit and Box2D.js into usable modules, and used them to create warpholes in realspace and flatspace. The objective of An AR Game is to transport pieces from flatspace to realspace, so the warpholes must communicate. Our approach is as follows:

    1. git checkout example_20
      live, diff, tag

      Notify the application when a realspace warphole’s state changes.

    2. git checkout example_21
      live, diff, tag

      Set flatspace warphole states according to realspace warphole states.

    3. git checkout example_22
      live, diff, tag

      Notify the application when a ball transits an open flatspace warphole.

    4. git checkout example_23
      live, diff, tag

      Add a ball to realspace when the application receives a notification of a transit.


    This article has shown the technical underpinnings of An AR Game. We have constructed two panes of differing realities and connected them with warpholes. A player may now entertain themselves by transporting a ball from flatspace to realspace. Technically this is interesting, but generally it is not fun!

    There is still much to be done before this application becomes a game, but those tasks are outside the scope of this article. Among the remaining ones:

    • Adding sprites and animations.
    • Introducing multiple balls and warpholes.
    • Providing a means of interactively designing levels.

    Thanks for reading! We hope this has inspired you to delve into this topic more!

  8. Firefox Developer Tools and Firebug

    If you haven’t tried the Firefox Developer Tools in the last 6 months, you owe it to yourself to take another look. Grab the latest Aurora browser, and start up the tools from the Web Developer menu (a submenu of Tools on some platforms).

    The tools have improved a lot lately: black-boxing lets you treat sources as system libraries that won’t distract your debugging flow. Source maps let you debug source generated by transpilers or minimizers. The inspector has paint flashing, a new font panel, and a greatly improved style inspector with tab completion and pseudo-elements. The network monitor helps you debug your network activity. The list goes on, and you can read more about recent developments in our series of Aurora Posts.

    After getting to know the tools, start the App Manager. Install the Firefox OS Simulator to see how your app will behave on a device. If you have a Firefox OS device running the latest 1.2 nightlies, you can connect the tools directly to the phone.

    Why The Built-in Tools?

    The Web owes a lot to Firebug. For a long time, Firebug was the best game in town. It introduced the world to visual highlighting of the DOM, in-place editing of styles, and the console API.

    Before the release of Firefox 4 we decided that Firefox needed a set of high-quality built-in tools. Baking them into the browser let us take advantage of the existing Mozilla community and infrastructure, and building in mozilla-central makes a huge difference when working with Gecko and SpiderMonkey developers. We had ambitious platform changes planned: the JSD API that Firebug uses to debug JavaScript had aged badly, and we wanted to co-evolve the tools alongside a new SpiderMonkey Debugger API.

    We thought long and hard about including Firebug wholesale and considered several approaches to integrating it. An early prototype of the Inspector even included a significant portion of Firebug. Ultimately, integration proved to be too challenging and would have required rewrites that would have been equivalent to starting over.

    How is Firebug Doing?

    Firebug isn’t standing still. The Firebug Working Group continues to improve it, as you can see in their latest 1.12 release. Firebug is working hard to move from JSD to the new Debugger API, to reap the performance and stability benefits we added for the Firefox Developer Tools.

    After that? Jan Odvarko, the Firebug project leader, had this to say:

    Firebug has always been maintained rather as an independent project outside of existing processes and Firefox environment while DevTools is a Mozilla in-house project using standard procedures. Note that the Firebug goal has historically been to complement Firefox features and not compete with them (Firebug is an extension after all) and we want to keep this direction by making Firebug a unique tool.

    Everyone wants to figure out the best way for Firebug’s community of users, developers, and extension authors to shape and complement the Firefox Developer Tools. The Firebug team is actively discussing their strategy here, but hasn’t decided how they want to accomplish that.

    Follow the Firebug blog and @firebugnews account to get involved.

    What’s Next for Firefox Developer Tools?

    We have more exciting stuff coming down the pipe. Some of this will be new feature work, including great performance analysis and WebGL tools. Much of it will be responding to feedback, especially from developers giving the tools a first try.

    We also want to find out what you can add to the tools. Recently the Ember.js Chrome Extension was ported to Firefox Developer Tools as a Sunday hack, but we know there are more ideas out there. Like most Firefox code you can usually find a way to do what you want, but we’re also working to define and publish a Developer Tools API. We want to help developers build high quality, performant developer tools extensions. We’d love to hear from developers writing extensions for Firebug or Chrome Devtools about how to best go about that.

    Otherwise, keep following the Hacks blog to learn more about how the Firefox Developer Tools are evolving. Join in at the dev-developer-tools mailing list, the @FirefoxDevTools Twitter account, or the #devtools IRC channel.

  9. Who moved my geolocation?

    One of the questions we often get when talking about Firefox OS is: “What about the GPS on some devices?” You may have noticed that on some devices the GPS position is not quite accurate, or can take a long time to report even when you are outside. Let me start by explaining how geolocation works. After that, we’ll see what the issue is right now, and how we, as developers, can continue to work our magic and create awesome applications using geolocation.

    How the devices give you geolocation

    Most smartphones use two techniques to get the phone’s longitude and latitude: the GPS itself, plus something called A-GPS (Assisted GPS) servers. When you are outside, the GPS receiver picks up satellite signals and computes the coordinates of the device: latitude and longitude. It works well as long as the receiver can reach enough satellites, but it can take some time to achieve a fix, or to refine its accuracy.

    To help the device achieve its goal faster, often you’ll first get a location from an A-GPS server: this is why most of the time you’ll initially get a location accurate to within maybe 50 meters and, if you wait a little longer, something more precise. It’s also why dedicated GPS devices (like the ones used for hiking or geocaching) take longer: they rely on GPS alone and need to connect to more satellites, with no assisted-GPS connection.

    What about Firefox OS?

    Mozilla doesn’t provide any Firefox OS images; we provide source code to chip manufacturers and OEMs like Geeksphone. These parties customize various parts and create binary images for devices. The final Firefox OS image is mostly representative of what we have in the public source repositories, but with some modifications. This is an important distinction because the configuration of some parts (like the Linux config, device setup, etc.) is not in Mozilla’s hands.

    With that in mind, some devices have configuration problems for A-GPS. We are actively working with OEMs to solve that issue, but it’s not something we can fix by ourselves. Once it is fixed for specific devices with A-GPS problems, we’ll let you know about the procedure to fix your device on this blog.

    But I need geolocation for my application

    There are many ways to develop applications needing geolocation information even with this little A-GPS issue. First, you can use the simulator to test your application. There is a nice little option right in the simulator to let you emulate any coordinates you need.

    Screenshot of the Firefox OS Simulator

    Of course, while the simulator is perfect for development and the first series of tests, you’ll need to test your new masterpiece on a real device. If you are using Linux or OS X (I’m working on a solution for Windows users), our friend Doug Turner created a mock location provider which you can install on your (rooted) phone for testing. It hardcodes the latitude and longitude that Firefox OS returns to your applications. You can change those coordinates by editing the MockGeolocationProvider.js file in the components folder of the project. Of course, you could hardcode coordinates in your own code instead, but then you wouldn’t see how well your code handles what the device actually returns.

    Last but not least, you can also use a free IP geolocation service: a database that maps IP addresses to approximate locations. It’s not perfect, but it’s a good start for giving a more accurate location to the user, and a good fallback for any application. You never know when there will be a problem with A-GPS or GPS.

    Best practices for apps using GPS

    There are a couple of things to keep in mind when you are building an application that needs geolocation. First, think about the accuracy of the results you’ll receive. What you need to know is that getCurrentPosition tries to return a result as fast as possible: sometimes that means using Wi-Fi or the IP address to get it. When the GPS device is used, it may take minutes before it connects to satellites, so in that situation you have two choices:

    1. You can get the accuracy of the result, in meters, by reading the accuracy property of the coordinates returned by getCurrentPosition (see the code below);
    2. Alternatively, you can set the enableHighAccuracy option when you call getCurrentPosition (see the code below).
    var options = {
        enableHighAccuracy: true,
        timeout: 5000,
        maximumAge: 0
    };

    function success(pos) {
        var crd = pos.coords;
        console.log('Your current position is:');
        console.log('Latitude : ' + crd.latitude);
        console.log('Longitude: ' + crd.longitude);
        console.log('More or less ' + crd.accuracy + ' meters.');
    }

    function error(err) {
        console.warn('ERROR(' + err.code + '): ' + err.message);
    }

    navigator.geolocation.getCurrentPosition(success, error, options);

    You also need to think about the fact that the user may move, so you need to re-estimate the user’s coordinates every so often, depending on what you are trying to achieve. You can do this either manually or by using the watchPosition method of the geolocation API in Firefox OS.

    var watchID = navigator.geolocation.watchPosition(function(position) {
        do_something(position.coords.latitude, position.coords.longitude);
    });

    In that situation, if the position changes, either because the device moves or because more accurate geolocation information arrives, your function will be called, and you’ll be able to handle the new information.

    If you want more information about how to use geolocation in your application, you can always check the Mozilla Developer Network documentation on using geolocation. If you have any questions about using geolocation in your Firefox OS application, please leave a question in the comments section.

  10. WebRTC: Update and Workarounds

    As you’ve probably noticed, we’ve been making lots of progress on our WebRTC implementation, and we expect additional improvements over the next few releases.

    We have work in the pipeline to improve audio quality issues (yes, we know we still have some!) and to assist with troubleshooting NAT traversal issues (you can follow the progress in Bug 904622).

    Existing limitations

    But beyond these upcoming improvements, I’d like to take a moment to look at a couple of our existing limitations that you might have noticed, and offer some advice for writing apps that work within these limitations.

    The first issue, described in Bug 857115, is that mozRTCPeerConnection does not currently support renegotiation of an ongoing session. Once a session is set up, its parameters are fixed. In practical terms, this means that you can’t, for example, start an audio-only call and then add video to that same PeerConnection later in the session. We have a similar limitation in that we don’t currently support more than one audio stream and one video stream on a single PeerConnection (see Bug 784517 and Bug 907339).

    Solutions for now

    We’re going to fix these limitations as soon as we can, but it’s going to take a few months for our code changes to ride the Firefox train out into release. Until that happens, I want to give you a couple of workarounds so you can continue to use Firefox to make awesome things.

    Muting audio and video streams

    Media renegotiation has two main use cases: muting and unmuting media in the middle of a session; and adding/removing video in the middle of a session. For muting and unmuting, the trick is to make judicious use of the “enabled” attribute on the MediaStreamTrack object: simply set a track’s enabled to “false” when you want to mute it.

    var pc = new mozRTCPeerConnection();
    navigator.mozGetUserMedia({video: true},
      function (mediaStream) {
        // Create a new self-view video element
        var video = document.createElement("video");
        video.setAttribute("width", 640);
        video.setAttribute("height", 480);
        video.setAttribute("style", "transform: scaleX(-1)");
        video.src = window.URL.createObjectURL(mediaStream);
        document.body.appendChild(video);
        video.play();
        // Add a button to hold/unhold the video stream
        var button = document.createElement("button");
        button.appendChild(document.createTextNode("Toggle Hold"));
        button.onclick = function(){
          var track = mediaStream.getVideoTracks()[0];
          track.enabled = !track.enabled;
        };
        document.body.appendChild(button);
        // Add the mediaStream to the peer connection
        pc.addStream(mediaStream);
        // At this point, you're ready to start the call with
        // pc.setRemoteDescription() or pc.createOffer()
      },
      function (err) { alert(err); }
    );

    Note that setting a MediaStreamTrack’s “enabled” attribute to “false” will not stop media from flowing, but it will change the media that’s being encoded into a black square (for video) and silence (for audio), both of which compress very well. Depending on your application, it may also make sense to use browser-to-browser signaling (for example, WebSockets or DataChannels) to let the other browser know that it should hide or show the video window when the corresponding video is muted.
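    Such signaling can be as small as a single JSON message. Here is a hypothetical sketch (the message shape and function names are ours, not part of any WebRTC API) of something that could be carried over a WebSocket or a DataChannel alongside the media:

```javascript
// Build a tiny mute-notification message for browser-to-browser signaling.
function makeMuteMessage(kind, muted) {
  return JSON.stringify({ type: "mute", kind: kind, muted: muted });
}

// Dispatch an incoming signaling message to the supplied handlers.
function handleSignal(raw, handlers) {
  var msg = JSON.parse(raw);
  if (msg.type === "mute" && handlers.onMute) {
    // e.g. hide or show the remote <video> element accordingly
    handlers.onMute(msg.kind, msg.muted);
  }
}
```

    On the receiving side, onMute might simply toggle the visibility of the corresponding video window.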

    Adding video mid-call

    For adding video mid-call, the most user-friendly work-around is to destroy the audio-only PeerConnection, and create a new PeerConnection, with both audio and video. When you do this, it will prompt the user for both the camera and the microphone; but, since Firefox does this in a single dialog, the user experience is generally pretty good. Once video has been added, you can either remove it by performing this trick in reverse (thus releasing the camera), or you can simply perform the “mute video” trick I describe above (which will leave the camera going — this might upset some users).

    Send more than one audio or video stream

    To send more than one audio or video stream, you can use multiple simultaneous peer connections between the browsers: one for each audio/video pair you wish to send. You can also use this technique as an alternate approach for adding and removing video mid-session: set up an initial audio-only call; and, if the user later decides to add video, you can create a new PeerConnection and negotiate a separate video-only connection.
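    A small helper can keep those per-media connections organized. This is a hypothetical sketch (the pool and its names are ours), with the PeerConnection constructor injected so that nothing here depends on browser APIs:

```javascript
// Manage one PeerConnection per media kind, so a video-only connection
// can be created or torn down without disturbing the audio connection.
function createConnectionPool(createPeerConnection) {
  var connections = {};
  return {
    open: function (kind) {
      if (!connections[kind]) connections[kind] = createPeerConnection(kind);
      return connections[kind];
    },
    close: function (kind) {
      var pc = connections[kind];
      if (pc) { pc.close(); delete connections[kind]; }
    },
    has: function (kind) { return !!connections[kind]; }
  };
}
```

    In a browser, the injected factory would construct and negotiate a real mozRTCPeerConnection for the given media kind.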

    One subtle downside to using the first approach for adding video is that it restarts the audio connection when you add video, which may lead to some noticeable glitches in the audio stream. However, once we have audio and video synchronization landed, making sure that audio and video tracks are in the same MediaStream will ensure that they remain in sync. This synchronization isn’t guaranteed for multiple MediaStreams or multiple PeerConnections.

    Temporary workarounds and getting there

    We recognize that these workarounds aren’t ideal, and we’re working towards spec compliance as quickly as we can. In the meanwhile, we hope that this information proves helpful in building out applications today. The good news is that these techniques should continue to work even after we’ve addressed the limitations described above, so you can migrate to a final solution at your leisure.

    Finally, I would suggest that anyone interested in renegotiation and/or multiple media streams keep an eye on the bugs I mention above. Once we’ve implemented these features, they should appear in the released version of Firefox within about 18 weeks. After that happens, you’ll want to switch over to the “standard” way of doing things to ensure the best possible audio and video quality.

    Thanks for your patience. Go out and make great things!