

  1. SVG & colors in OpenType fonts

    Sample of a colorfont


    Until recently, having more than one color in a glyph of a vector font was technically impossible. Getting a polychrome letter required duplicating the content for every color. As with many other techniques, it took digital type some time to overcome the constraints of the old technology. When printing with wood or lead type, the limitation of one color per glyph is inherent (if you don’t count random gradients). More than one color per letter required separate fonts for the differently colored parts and a new print run for every color. This has been done beautifully, and pictures of some magnificent examples are available online. Using overprinting, the impression of three colors can be achieved with just two.

    Overprinting colors
    Simulation of two overprinting colors resulting in a third.

    Digital font formats kept the limitation of one ‘surface’ per glyph. A glyph can contain several outlines, but when the font is used to set type, the assigned color applies to all of them. As in letterpress, the content has to be duplicated and superimposed to get more than one color per glyph. Duplicating does not sound like an elegant solution, and it is a constant source of errors.

    It took the rise of emoji for the demand for multi-colored fonts to grow big enough to justify additional tables for storing this information within OpenType fonts. As of this writing there are several competing ways to implement this. Adam Twardoch compares all the proposed solutions in great detail on the FontLab blog.

    To me the Adobe/Mozilla way looks the most intriguing.

    Upon its proposal it was discussed by a W3C community group and published as a stable document. The basic idea is to store the colored glyphs as SVGs inside the OpenType font. It depends on the complexity of your typeface, but SVGs should usually result in a smaller file size than PNGs. With the spread of high-resolution screens, vectors also seem like a better solution than pixels. The possibility to animate the SVGs is an interesting addition and will surely be used in interesting (and very annoying) ways. BLING BLING.


    I am not a font technician or a web developer, just very curious about these new developments. There might be other ways, but this is how I managed to build colorful OpenType fonts.

    In order to make your own you will need a font editor. There are several options, like RoboFont and Glyphs (both Mac only), FontLab, and the free FontForge. RoboFont is my editor of choice, since it is highly customizable and you can build your own extensions with Python. In a new font I added as many new layers as there were colors in the final font. Either draw in the separate layers right away, or copy the outlines into the respective layer after you’ve drawn them in the foreground layer. With the very handy Layer Preview extension you can preview all layers overlapping. You can also just increase the size of the thumbnails in the font window; at some point they will show all layers. Adjust the colors to your liking in the Inspector, since they are used for the preview.

    RoboFont Inspector
    Define the colors you want to see in the Layer Preview

    A separated letter
    Layer preview
    The outlines of the separate layers and their combination

    When you are done drawing your outlines you will need to save a UFO for every layer / color. I used a little Python script to save them in the same place as the main file:

    f = CurrentFont()
    path = f.path
    for layer in f.layerOrder:
        # gather this layer's outlines in a new font
        newFont = RFont()
        for g in f:
            orig = g.getLayer(layer)
            newFont.newGlyph(g.name)
            newFont[g.name].appendGlyph(orig)
            newFont[g.name].width = orig.width
        # save next to the main file, e.g. MyFont_red.ufo
        newFont.save(path[:-4] + "_%s" % layer + ".ufo")
    print "Done Splitting"

    Once I had all my separate UFOs I loaded them into TransType from FontLab. Just drop your UFOs into the main window and select the ones you want to combine. In the Effect menu click ‘Overlay Fonts …’. You get a preview window where you can assign an RGBA value to each UFO; then hit OK. Select the newly added font in the collection and export it as OpenType (TTF). You will get a folder with all colorfont versions.

    The preview of your colorfont in TransType.


    In case you don’t want to use TransType you might have a look at the very powerful RoboFont extension by Jens Kutílek called RoboChrome. You will need a separate version of your base glyph for every color, which can also be done with a script if you have all of your outlines in layers.

    f = CurrentFont()
    selection = f.selection
    for l, layer in enumerate(f.layerOrder):
        for g in selection:
            # make a copy of the glyph for this layer, e.g. A.layer0
            name = g + ".layer%d" % l
            f.newGlyph(name)
            l_glyph = f[g].getLayer(layer)
            f[name].appendGlyph(l_glyph)
            f[name].width = f[g].width
            f[name].mark = (.2, .2, .2, .2)
    print "Done with the Division"

    For RoboChrome you will need to split your glyph into several.


    You can also modify the SVG table of a compiled font, or insert your own if it does not have one yet. To do so I used the very helpful fonttools by Just van Rossum. Just generate an OTF or TTF with the font editor of your choice. Open the Terminal and type ttx (assuming you are on Mac OS and have fonttools installed), drop the font file into the Terminal window, and hit return. Fonttools will convert your font into an XML file (YourFontName.ttx) in the same folder. This file can then be opened, modified, and recompiled into an OTF or TTF.

    This can be quite helpful to streamline the SVG compiled by a program and thereby reduce the file size. I rewrote the SVG of a 1.6 MB font to get it down to 980 KB; for a webfont that makes quite a difference. If you want to add your own SVG table to a font that does not have one yet, you should read a bit about the required header information. The endGlyphID and startGlyphID for the glyph you want to supply with SVG data can be found in the <GlyphOrder> table.
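    The relevant IDs can be read from the decompiled .ttx file; a GlyphOrder table looks roughly like this (the glyph names here are invented):

```xml
<GlyphOrder>
  <!-- The 'id' attribute is only for humans; fonttools ignores it on compile. -->
  <GlyphID id="18" name="A"/>
  <GlyphID id="19" name="B"/>
  <GlyphID id="20" name="C"/>
</GlyphOrder>
```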

    <svgDoc endGlyphID="18" startGlyphID="18">
        <!-- here goes your svg -->
    </svgDoc>
    <svgDoc endGlyphID="19" startGlyphID="19">...</svgDoc>
    <svgDoc endGlyphID="20" startGlyphID="20">...</svgDoc>

    One thing to keep in mind is the two different coordinate systems. Contrary to a digital font, SVG has a y-down axis. So you either have to draw in the negative space, or you draw reversed and then mirror everything with a transform.

    Y-axis comparison
    While typefaces usually have a y-up axis SVG uses y-down.
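    For the second approach, the flip can be done with an SVG transform. A sketch (the path data is invented for illustration):

```xml
<!-- glyph drawn with y-up coordinates, mirrored to match SVG's y-down axis -->
<g transform="scale(1, -1)">
  <path d="M0 0 L100 700 L200 0 Z" fill="#c00"/>
</g>
```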


    Now if you really want to pimp your fonts you should add some unnecessary animation to annoy everybody. Just insert it between the opening and closing tags of whatever you want to modify. Here is an example of a circle changing its fill-opacity from zero to 100% over a duration of 500ms in a loop.

    <animate attributeName="fill-opacity"
             values="0;1"
             dur="500ms"
             repeatCount="indefinite" />


    Technically these fonts should work in any application that handles OTFs or TTFs, but as of this writing only Firefox renders the SVG. If the rendering is not supported, the application will just use the regular glyph outlines as a fallback. So if you have your font(s) ready, it’s time to write some CSS and HTML to test and display them on a website.

    The @font-face

    @font-face {
      font-family: "Colors-Yes"; /* reference name */
      src: url('./fonts/Name_of_your_font.ttf');
      font-weight: 400; /* or whatever applies */
      font-style: normal; /* or whatever applies */
    }
    /* note: text-rendering is not a @font-face descriptor;
       set it on the styled element if you want it */

    The basic CSS

    .color_font { font-family: "Colors-Yes"; }

    The HTML

    <p class="color_font">Shiny polychromatic text</p>
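    Putting the three snippets together, a minimal test page could look like this (the font path is assumed from above):

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    @font-face {
      font-family: "Colors-Yes";
      src: url('./fonts/Name_of_your_font.ttf');
    }
    .color_font { font-family: "Colors-Yes"; }
  </style>
</head>
<body>
  <p class="color_font">Shiny polychromatic text</p>
</body>
</html>
```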


    As of this writing (October 2014) the format is supported by Firefox (26+) only. Since this was initiated by Adobe and Mozilla there might be a broader support in the future.

    While using SVG has the advantage of reasonably small files, and the content does not have to be duplicated, it brings one major drawback: since the colors are ‘hard-coded’ into the font, there is no way to access them with CSS. Hopefully this will change with the implementation of a COLR/CPAL table.

    There is a bug that keeps animations from being played in Firefox 32. While animations are rendered in the current version (33) this might change for obvious reasons.

    Depending on how you build your SVG table, it might blow up and result in fairly big files. Be aware of that in case you use them to render the most crucial content of your websites.


    Links, Credits & Thanks

    Thanks Erik, Frederik, Just and Tal for making great tools!

  2. The Visibility Monitor supported by Gaia

    With booming demand for ultra-low-price devices, we have to budget every resource of the device more carefully: CPU, RAM, and flash storage. Here I want to introduce the Visibility Monitor, which has existed in Gaia for a long time.


    The Visibility Monitor originated in Gaia’s Gallery app and first appeared in Bug 809782 (gallery crashes if too many images are available on sdcard). It solves the memory shortage caused by loading too many images in the Gallery app. Some time later, the Tag Visibility Monitor, the “brother” of the Visibility Monitor, was born. Their functionality is almost the same, except that the Tag Visibility Monitor uses pre-assigned tag names to filter the elements that need to be monitored. We are going to use the Tag Visibility Monitor as the example in the following sections; everything applies to the Visibility Monitor as well.

    For your information, the Visibility Monitor was written by JavaScript master David Flanagan, the author of JavaScript: The Definitive Guide, who works at Mozilla.

    Working Principle

    Basically, the Visibility Monitor removes the images that are outside of the visible screen from the DOM tree, so Gecko has the chance to release the image memory which is temporarily used by the image loader/decoder.

    You may ask: “This could be done in Gecko. Why do it in Gaia?” In fact, Gecko enables a visibility mechanism by default, but it only discards the decoded image buffers (the output of the image decoder); the original compressed images, fetched by the image loader from the Internet or the local file system, are still kept in memory. The Visibility Monitor in Gaia removes the images from the DOM tree entirely, so even the original data held by the image loader can be freed. This is extremely important for Tarako, the codename of the Firefox OS low-end device project, which has only 128 MB of memory.

    Taking the graphic above as an example, we can divide the whole area into:

    • display port
    • pre-rendered area
    • margin
    • all other areas

    When the display port moves up and down, the Visibility Monitor dynamically loads the pre-rendered area. Images outside of the pre-rendered area will not be loaded or decoded. The margin is a dynamically adjustable parameter of the Visibility Monitor.

    • The higher the margin, the bigger the area Gecko has to pre-render, which leads to more memory usage but smoother scrolling (higher FPS)
    • The lower the margin, the smaller the area Gecko has to pre-render, which leads to less memory usage but less smooth scrolling (lower FPS)

    Because of this working principle, we can adjust the parameters and image quality to match our demands.


    It’s impossible to “have your cake and eat it too”, and just as impossible to use the Visibility Monitor and stay out of its influence. The prerequisites for using the Visibility Monitor are listed below:

    The monitored HTML DOM elements must be arranged from top to bottom

    The default layout of the web flows from top to bottom, but we may reverse it with some CSS options, such as flex-flow. Supporting such layouts would make the Visibility Monitor more complex and lower the FPS (a result we do not like), so this kind of layout is not acceptable. When someone uses it, the Visibility Monitor shows nothing in the areas where it should display images and emits errors instead.

    The monitored HTML DOM Elements cannot be absolutely positioned

    The Visibility Monitor calculates the height of each HTML DOM element to decide whether to display it or not. When an element is fixed at a certain location, the calculation becomes more complex, which is unacceptable. When someone uses this kind of arrangement, the Visibility Monitor shows nothing in the area where it should display images and emits error messages.

    The monitored HTML DOM Elements should not dynamically change their position through JavaScript

    Similar to absolute positioning, dynamically changing an element’s location makes the calculations more complex; both are unacceptable. When someone uses this kind of arrangement, the Visibility Monitor shows nothing in the affected area.

    The monitored HTML DOM Elements cannot be resized or be hidden, but they can have different sizes

    The Visibility Monitor uses a MutationObserver to watch for the addition and removal of HTML DOM elements, but not for an element appearing, disappearing, or being resized. When someone uses this kind of arrangement, the Visibility Monitor again shows nothing.

    The container which runs monitoring cannot use position: static

    Because the Visibility Monitor uses offsetTop to calculate the location of the display port, the container cannot use position: static. We recommend using position: relative instead.

    The container which runs monitoring can only be resized by resizing the window

    The Visibility Monitor uses the window.onresize event to decide whether to re-calculate the pre-rendered area. So every change of the container’s size must come with a resize event.

    Tag Visibility Monitor API

    The Visibility Monitor API is very simple and has only one function:

    function monitorTagVisibility(
        container,           // element that scrolls
        tag,                 // name of the monitored elements
        scrollMargin,        // pre-render margin in pixels
        scrollDelta,         // pixels scrolled before recalculating
        onscreenCallback,    // called when an element enters the area
        offscreenCallback    // called when an element leaves the area
    )

    The parameters it accepts are defined as follows:

    1. container: a real HTML DOM element for users to scroll. It doesn’t necessarily have to be the direct parent of the monitored elements, but it has to be one of their ancestors
    2. tag: a string with the name of the elements to be monitored
    3. scrollMargin: a number defining the margin size outside the display port
    4. scrollDelta: a number defining how many pixels must be scrolled before a new pre-rendered area is calculated
    5. onscreenCallback: a callback function called after an HTML DOM element moves into the pre-rendered area
    6. offscreenCallback: a callback function called after an HTML DOM element moves out of the pre-rendered area

    Note: “move into” and “move out” above mean the following: as soon as a single pixel is inside the pre-rendered area, the element counts as moved in / remaining on screen; as soon as no pixel is inside the pre-rendered area, it counts as moved out / not on screen.
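    The single-pixel rule can be expressed as a small overlap test. The following is only an illustrative sketch of the idea, not Gaia’s actual implementation, and all names are invented:

```javascript
// Sketch: is an element inside the pre-rendered area?
// The area is the display port extended by `margin` pixels above and below.
function isInPrerenderedArea(elementTop, elementHeight, scrollTop, portHeight, margin) {
  var areaTop = scrollTop - margin;
  var areaBottom = scrollTop + portHeight + margin;
  // a single overlapping pixel counts as "moved in"
  return elementTop + elementHeight > areaTop && elementTop < areaBottom;
}
```

    For example, with a 480 px display port, a 360 px margin, and a scroll position of 100 px, the area spans -260 px to 940 px, so an element from 900 px to 1000 px still counts as on screen.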

    Example: Music App (1.3T branch)

    One of my tasks was to add the Visibility Monitor to the 1.3T Music app. Because I lacked understanding of the Music app’s structure, I asked a colleague for help finding the places to add it. There were three locations:

    • TilesView
    • ListView
    • SearchView

    Here we take only TilesView as the example to demonstrate how to add it. First, we use the App Manager to find the real HTML DOM element in TilesView that does the scrolling:

    With the App Manager we find that TilesView contains views-tile, views-tiles-search, views-tiles-anchor, and li.tile (which sits under all three of them). After testing, we can see that the scroll bar shows up at views-tile, while views-tiles-search automatically scrolls to an invisible location; each tile exists as an li.tile. Therefore, we set the container to views-tiles and the tag to li. The following code was used to call the Visibility Monitor:

        monitorTagVisibility(
            document.getElementById('views-tiles'),  // the container found above
            'li',                // the tag to monitor
            visibilityMargin,    // extra space top and bottom
            minimumScrollDelta,  // min scroll before we do work
            thumbnailOnscreen,   // set background image
            thumbnailOffscreen   // remove background image
        );

    In the code above, visibilityMargin is set to 360, which corresponds to 3/4 of the screen. minimumScrollDelta is set to 1, which means a recalculation happens for every scrolled pixel. thumbnailOnscreen and thumbnailOffscreen set the background image of a thumbnail or clean it up.

    The Effect

    We performed practical tests on the Tarako device. We launched the Music app and made it load nearly 200 MP3 files with cover images, totaling about 900 MB. Without the Visibility Monitor, the memory usage of the Music app for images was as follows:

    ├──23.48 MB (41.04%) -- images
    │   ├──23.48 MB (41.04%) -- content
    │   │   ├──23.48 MB (41.04%) -- used
    │   │   │   ├──17.27 MB (30.18%) ── uncompressed-nonheap
    │   │   │   ├───6.10 MB (10.66%) ── raw
    │   │   │   └───0.12 MB (00.20%) ── uncompressed-heap
    │   │   └───0.00 MB (00.00%) ++ unused
    │   └───0.00 MB (00.00%) ++ chrome

    With the Visibility Monitor, we measured the memory usage again:

    ├───6.75 MB (16.60%) -- images
    │   ├──6.75 MB (16.60%) -- content
    │   │  ├──5.77 MB (14.19%) -- used
    │   │  │  ├──3.77 MB (09.26%) ── uncompressed-nonheap
    │   │  │  ├──1.87 MB (04.59%) ── raw
    │   │  │  └──0.14 MB (00.34%) ── uncompressed-heap
    │   │  └──0.98 MB (02.41%) ++ unused
    │   └──0.00 MB (00.00%) ++ chrome

    To compare both of them:

    ├──-16.73 MB (101.12%) -- images/content
    │  ├──-17.71 MB (107.05%) -- used
    │  │  ├──-13.50 MB (81.60%) ── uncompressed-nonheap
    │  │  ├───-4.23 MB (25.58%) ── raw
    │  │  └────0.02 MB (-0.13%) ── uncompressed-heap
    │  └────0.98 MB (-5.93%) ── unused/raw

    To make sure the Visibility Monitor works properly, we added more MP3 files, bringing the total to about 400. The memory usage stayed at around 7 MB. That is great progress for a 128 MB device.


    Honestly, we wouldn’t have to use the Visibility Monitor if there weren’t so many images; since the Visibility Monitor always affects FPS, we could let Gecko deal with the situation. But for apps that use lots of images, we can control memory consumption with the Visibility Monitor: even if the number of images increases, the memory usage stays stable.

    The margin and delta parameters of the Visibility Monitor affect FPS and memory usage, which can be summarized as follows:

    • higher margin: more memory usage; FPS closer to Gecko’s native scrolling
    • lower margin: less memory usage; lower FPS
    • higher delta: slightly more memory usage; higher FPS; higher chance of seeing unloaded images
    • lower delta: slightly less memory usage; lower FPS; lower chance of seeing unloaded images
  3. New on MDN: Sign in with Github!

    MDN now gives users more options for signing in!

    Sign in with GitHub

    Signing in to MDN previously required a Mozilla Persona account. Getting a Persona account is free and easy, but MDN analytics showed a steep drop-off at the “Sign in with Persona” interface. For example, almost 90% of signed-out users who clicked “Edit” never signed in, which means they never got to edit. That’s a lot of missed opportunities!

    It should be easy to join and edit MDN. If you click “Edit,” we should make it easy for you to edit. Our analysis demonstrated that most potential editors stumbled at the Persona sign in. So, we looked for ways to improve sign in for potential contributors.

    Common sense suggests that many developers have a GitHub account, and analysis confirms it. Of the MDN users who list external accounts in their profiles, approximately 30% include a GitHub account. GitHub is the 2nd-most common external account listed, after Twitter.

    That got us thinking: If we integrated GitHub accounts with MDN profiles, we could one day share interesting GitHub activity with each other on MDN. We could one day use some of GitHub’s tools to create even more value for MDN users. Most immediately, we could offer “sign in with GitHub” to at least 30% (but probably more) of MDN’s existing users.

    And if we did that, we could also offer “sign in with GitHub” to over 3 million GitHub users.

    The entire engineering team and MDN community helped make it happen.

    Authentication Library

    Adding the ability to authenticate using GitHub accounts required us to extend the way MDN handles authentication, so that MDN users can add their GitHub accounts without effort. We reviewed the current code of kuma (the code base that runs MDN) and realized that it was deeply tied to how Mozilla Persona works technically.

    Since we’re constantly trying to remove technical debt, that meant revisiting some of the decisions we made years ago when the code responsible for authentication was written. After a review process we decided to replace our home-grown system, django-browserid, with a third-party library called django-allauth, a well-known system in the Django community that can use multiple authentication providers side by side; in our case, Mozilla Persona and GitHub.

    One challenge was making sure that our existing user database could be ported to the new system with minimal negative impact on our users. To our surprise this was not a big problem: it could be automated with a database migration, a special piece of code that converts the data into the new format. We implemented the new authentication library and migrated accounts to it several months ago; MDN has been using django-allauth for Mozilla Persona authentication since then.

    UX Challenges

    We wanted our users to experience a fast and easy sign-up process with the goal of having them edit MDN content at the end. Some things we did in the interface to support this:

    • Remember why the user is signing up and return them to that task when sign up is complete.
    • Pre-fill the username and email address fields with data from GitHub (including pre-checking if they are available).
    • Trust GitHub as a source of confirmed email address so we do not have to confirm the email address before the user can complete signing up.
    • Standardise our language (this is harder than it sounds). Users on MDN “sign in” to their “MDN profile” by connecting “accounts” on other “services”. See the discussion.

    One of our biggest UX challenges was allowing existing users to sign in with a new authentication provider. In this case, the user needs to “claim” an existing MDN profile after signing in with a new service, or needs to add a new sign-in service to their existing profile. We put a lot of work into making sure this was easy both from the user’s profile if they signed in with Persona first and from the sign-up flow if they signed in with GitHub first.

    We started with an ideal plan for the UX but expected to make changes once we had a better understanding of what allauth and GitHub’s API are capable of. It was much easier to smooth the kinks out of the flow once we were able to click around and try it ourselves. This was facilitated by the way MDN uses feature toggles for testing.

    Phased Testing & Release

    This project could potentially corrupt profile or sign-in data, and changes one of our most essential interfaces – sign up and sign in. So, we made a careful release plan with several waves of functional testing.

    We love to alpha- and beta-test changes on MDN with feature toggles. To toggle features we use the excellent django-waffle feature-flipper by James Socol – MDN Manager Emeritus.

    We deployed the new code to our MDN development environment every day behind a feature toggle. During this time MDN engineers exercised the new features heavily, finding and filing bugs under our master tracking bug.

    When the featureset was relatively complete, we created our beta test page and toggled the feature on our MDN staging environment for even more review. We did the end-to-end UX testing, invited internal Mozilla staff to help us beta test, filed a lot of UX bugs, and started to triage and prioritize launch blockers.

    Next, we started an open beta by posting a site-wide banner on the live site, inviting anyone to test and file bugs. 365 beta testers participated in this round of QA. We also asked Mozilla WebQA to help deep-dive into the feature on our stage server. We only received a handful of bugs, which gave us great confidence about a final release.


    It was a lot of work, but all the pieces finally came together and we launched. Thanks to our extensive testing and release plan, we had zero incidents with the launch: no downtime, no stack traces, no new bugs reported. We’re very excited to release this feature, to give more options and features to our incredible MDN users and contributors, and to invite each and every GitHub user to join the Mozilla Developer Network. Together we can make the web even more awesome. Sign in now.


    Now that we have worked out the infrastructure and UX challenges associated with multi-account authentication, we can look for other promising authentication services to integrate with. For example, Firefox Accounts (FxA) is the authentication service that powers Firefox Sync. FxA is integrated with Firefox and will soon be integrated with a variety of other Mozilla services. As more developers sign up for Firefox Accounts, we will look for opportunities to add it to our authentication options.

  4. Creating a mobile app from a simple HTML site

    This article is a simple tutorial designed to teach you some fundamental skills for creating cross platform web applications. You will build a sample School Plan app, which will provide a dynamic “app-like” experience across many different platforms and work offline. It will use Apache Cordova and Mozilla’s Brick web components.

    The story behind the app, written by Piotr

    I’ve got two kids and I’m always forgetting their school plan, as are they. Certainly I could copy the HTML to JSFiddle and load the plan as a Firefox app. Unfortunately this would not load offline, and currently would not work on iOS. Instead I would like to create an app that could be used by everyone in our family, regardless of the device they choose to use.

    We will build

    A mobile application which will:

    1. Display school plan(s)
    2. Work offline
    3. Work on many platforms

    Prerequisite knowledge

    • You should understand the basics of HTML, CSS and JavaScript before getting started.
    • Please also read the instructions on how to load any stage in this tutorial.
    • The Cordova documentation would also be a good thing to read, although we’ll explain the bits you need to know below.
    • You could also read up on Mozilla Brick components to find out what they do.


    Before building up the sample app, you need to prepare your environment.

    Installing Cordova

    We’ve decided to use Apache Cordova for this project as it’s currently the best free tool for delivering HTML apps to many different platforms. You can build up your app using web technologies and then get Cordova to automatically port the app over to the different native platforms. Let’s get it installed first.

    1. First install NodeJS: Cordova is a NodeJS package.
    2. Next, install Cordova globally using the npm package manager:
      npm install -g cordova

    Note: On Linux or OS X, you may need to have root access.

    Installing the latest Firefox

    If you haven’t updated Firefox for a while, you should install the latest version to make sure you have all the tools you need.

    Installing Brick

    Mozilla Brick is a tool built for app developers. It’s a set of ready-to-use web components that allow you to build up and use common UI components very quickly.

    1. To install Brick we will need to use the Bower package manager. Install this, again using npm:
      npm install -g bower
    2. You can install Brick for your current project using
      bower install mozbrick/brick

      but don’t do this right now — you need to put this inside your project, not just anywhere.

    Getting some sample HTML

    Now you should find some sample HTML to use in the project — copy your own children’s online school plans for this purpose, or use our sample if you don’t have any but still want to follow along. Save your markup in a safe place for now.

    Stage 1: Setting up the basic HTML project

    In this part of the tutorial we will set up the basic project, and display the school plans in plain HTML. See the stage 1 code on Github if you want to see what the code should look like at the end of this section.

    1. Start by setting up a plain Cordova project. On your command line, go to the directory in which you want to create your app project, and enter the following command:
      cordova create school-plan com.example.schoolplan SchoolPlan

      This will create a school-plan directory containing some files.

    2. Inside school-plan, open www/index.html in your text editor and remove everything from inside the <body> element.
    3. Copy the school plan HTML you saved earlier into separate elements. This can be structured however you want, but we’d recommend using an HTML <table> to hold each separate plan.
    4. Change the styling contained within www/css/index.css if you wish, to make the tables look how you want. We’ve chosen to use “zebra striping” for ease of reading.
      table {
        width: 100%;
        border-collapse: collapse;
        font-size: 10px;
      }
      th {
        font-size: 12px;
        font-weight: normal;
        color: #039;
        padding: 10px 8px;
      }
      td {
        color: #669;
        padding: 8px;
      }
      tbody tr:nth-child(odd) {
        background: #e8edff;
      }
    5. To test the app quickly and easily, add the firefoxos platform as a cordova target and prepare the application by entering the following two commands:
      cordova platform add firefoxos
      cordova prepare

      The last step is needed every time you want to check the changes.

    6. Open the App Manager in the Firefox browser. Press the [Add Packaged App] button and navigate to the prepared firefoxos app directory, which should be available in school-plan/platforms/firefoxos/www.

      Note: If you are running Firefox Aurora or Nightly, you can do these tasks using our new WebIDE tool, which has a similar but slightly different workflow to the App Manager.

    7. Press the [Start Simulator] button, then [Update], and you will see the app running in a Firefox OS simulator. You can inspect, debug and profile it using the App Manager — read Using the App Manager for more details. (Screenshot: App Manager buttons)
    8. Now let’s export the app as a native Android APK so we can see it working on that platform. Add the platform and get Cordova to build the apk file with the following two commands:
      cordova platform add android
      cordova build android
    9. The apk is build in school-plan/platforms/android/ant-build/SchoolPlan-debug.apk — read the Cordova Android Platform Guide for more details on how to test this.

    (Screenshot: Stage 1 result)

    Stage 2

    In Stage 2 of our app implementation, we will look at using Brick to improve the user experience of our app. Instead of having to potentially scroll through a lot of lesson plans to find the one you want, we’ll implement a Brick custom element that allows us to display different plans in the same place.

    You can see the finished Stage 2 code on Github.

    1. First, run the following command to install the entire Brick codebase into the app/bower_components directory.
      bower install mozbrick/brick
    2. We will be using the brick-deck component. This provides a “deck of cards” type interface that displays one brick-card while hiding the others. To make use of it, add the following code to the <head> of your index.html file, to import its HTML and JavaScript:
      <script src="app/bower_components/brick/dist/platform/platform.js"></script>
      <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
    3. Next, all the plans need to be wrapped inside a <brick-deck> custom element, and every individual plan should be wrapped inside a <brick-card> custom element — the structure should end up similar to this:
      <brick-deck id="plan-group" selected-index="0">
        <brick-card selected>
            <!-- school plan 1 -->
        </brick-card>
        <brick-card>
            <!-- school plan 2 -->
        </brick-card>
      </brick-deck>
    4. The brick-deck component requires that you set the height of the <html> and <body> elements to 100%. Add the following to the css/index.css file:
      html, body {height: 100%}
    5. When you run the application, the first card should be visible while the others remain hidden. To handle this we’ll now add some JavaScript to the mix. First, add some <script> elements to link the necessary JavaScript files into the HTML:
      <script type="text/javascript" src="cordova.js"></script>
      <script type="text/javascript" src="js/index.js"></script>
    6. cordova.js contains useful general Cordova-specific helper functions, while index.js will contain our app’s specific JavaScript. index.js already contains a definition of an app variable. The app starts after app.initialize() is called. It’s a good idea to call this when the window has loaded, so add the following:
      window.onload = function() {
          app.initialize();
      };
    7. Cordova adds a few events; one of which — deviceready — is fired after all Cordova code is loaded and initiated. Let’s put the main app action code inside this event’s callback — app.onDeviceReady.
      onDeviceReady: function() {
          // starts when device is ready
      },
    8. Brick adds a few functions and attributes to all its elements. In this case loop and nextCard are added to the <brick-deck> element. As it includes an id="plan-group" attribute, the appropriate way to get this element from the DOM is document.getElementById. We want the cards to switch when the touchstart event is fired; at this point nextCard will be called from the callback app.nextPlan.
      onDeviceReady: function() {
          app.planGroup = document.getElementById('plan-group');
          app.planGroup.loop = true;
          app.planGroup.addEventListener('touchstart', app.nextPlan);
      },
      nextPlan: function() {
          app.planGroup.nextCard();
      },

    Stage 2 Result Animation

    Stage 3

    In this section of the tutorial, we’ll add a menu bar with the name of the currently displayed plan, to provide an extra usability enhancement. See the finished Stage 3 code on GitHub.

    1. To implement the menu bar, we will use Brick’s brick-tabbar component. We first need to import the component. Add the following lines to the <head> of your HTML:
      <script src="app/bower_components/brick/dist/platform/platform.js"></script>
      <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
      <link rel="import" href="app/bower_components/brick-tabbar/dist/brick-tabbar.html">
    2. Next, add an id to all the cards and include them as the values of target attributes on brick-tabbar-tab elements like so:
      <brick-tabbar id="plan-group-menu" selected-index="0">
          <brick-tabbar-tab target="angelica">Angelica</brick-tabbar-tab>
          <brick-tabbar-tab target="andrew">Andrew</brick-tabbar-tab>
      </brick-tabbar>
      <brick-deck id="plan-group" selected-index="0">
          <brick-card selected id="angelica">
              <!-- school plan -->
          </brick-card>
      </brick-deck>
    3. The Deck’s nextCard method is called by Brick behind the scenes using the tab’s reveal event. The cards will change when the tabbar element is touched. The app got simpler, as we are now using built-in Brick functionality rather than our own custom code and Cordova functionality. If you wished to end the tutorial here you could safely remove the <script> elements that link to index.js and cordova.js from the index.html file.

    Stage 3 Result Animation

    Stage 4

    To further improve the user experience on touch devices, we’ll now add functionality to allow you to swipe left/right to navigate between cards. See the finished stage 4 code on GitHub.

    1. Switching cards is currently done using the tabbar component. To keep the selected tab in sync with the current card you need to link them back. This is done by listening to the show event of each card. For each tab stored in app.planGroupMenu.tabs:
      tab.targetElement.addEventListener('show', function() {
          // select the tab
      });
    2. Because of a race condition (app.planGroupMenu.tabs might not exist yet when the app is initialized), polling is used to wait for the right moment before assigning the events:
      function assignTabs() {
          if (!app.planGroupMenu.tabs) {
              return window.setTimeout(assignTabs, 100);
          }
          // proceed
      }

      The code for linking the tabs to their associated cards looks like so:

      onDeviceReady: function() {
          app.planGroupMenu = document.getElementById('plan-group-menu');
          function assignTabs() {
              if (!app.planGroupMenu.tabs) {
                  return window.setTimeout(assignTabs, 100);
              }
              for (var i = 0; i < app.planGroupMenu.tabs.length; i++) {
                  var tab = app.planGroupMenu.tabs[i];
                  tab.targetElement.tabElement = tab;
                  tab.targetElement.addEventListener('show', function() {
                      // select the tab associated with the shown card
                      this.tabElement.select();
                  });
              }
          }
          assignTabs();
          // continue below ...
    3. Detecting a one-finger swipe is pretty easy in a Firefox OS app. Two callbacks are needed to listen to the touchstart and touchend events and calculate the delta on the pageX parameter. Unfortunately Android and iOS do not fire the touchend event if the finger has moved. The obvious move would be to listen to the touchmove event, but that is fired only once as it’s intercepted by the scroll event. The best way forward is to prevent the default scrolling behaviour by calling preventDefault() in the touchmove callback. That way scrolling is switched off, and the functionality can work as expected:
      // ... continuation
      app.planGroup = document.getElementById('plan-group');
      var startX = null;
      var slideThreshold = 100;
      function touchStart(sX) {
          startX = sX;
      }
      function touchEnd(endX) {
          var deltaX;
          if (startX) {
              deltaX = endX - startX;
              if (Math.abs(deltaX) > slideThreshold) {
                  startX = null;
                  if (deltaX > 0) {
                      app.planGroup.previousCard();
                  } else {
                      app.planGroup.nextCard();
                  }
              }
          }
      }
      app.planGroup.addEventListener('touchstart', function(evt) {
          var touches = evt.changedTouches;
          if (touches.length === 1) {
              touchStart(touches[0].pageX);
          }
      });
      app.planGroup.addEventListener('touchend', function(evt) {
          touchEnd(evt.changedTouches[0].pageX);
      });
      app.planGroup.addEventListener('touchmove', function(evt) {
          evt.preventDefault();
      });

    You can add as many plans as you like — just make sure that their titles fit on the screen in the tabbar. Actions will be assigned automatically.
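    The delta logic described in step 3 can be isolated into a pure function. This sketch is my own illustration (the helper name and return values are not from the tutorial), showing how the threshold decides whether a gesture counts as a swipe:

    ```javascript
    // Classify a horizontal gesture from its start and end pageX values.
    // Returns 'right' or 'left' for a swipe past the threshold, null otherwise.
    function swipeDirection(startX, endX, threshold) {
        var deltaX = endX - startX;
        if (Math.abs(deltaX) <= threshold) {
            return null; // too small a movement to count as a swipe
        }
        return deltaX > 0 ? 'right' : 'left';
    }

    console.log(swipeDirection(300, 120, 100)); // 'left'
    console.log(swipeDirection(120, 300, 100)); // 'right'
    console.log(swipeDirection(100, 150, 100)); // null
    ```

    With the direction isolated like this, the touch callbacks only need to record pageX values and dispatch to previousCard() or nextCard().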

    Stage 4 Result Screenshot

    To be continued …

    We’re preparing the next part, in which this app will evolve into a marketplace app with downloadable plans. Stay tuned!

  5. Passwordless authentication: Secure, simple, and fast to deploy

    Passwordless is an authentication middleware for Node.js that improves security for your users while being fast and easy to deploy.

    The last few months have been very exciting for everyone interested in web security and privacy: fantastic articles, discussions, and talks, but also plenty of incidents that raised awareness.

    Most websites, however, are still stuck with the same authentication mechanism as in the earliest days of the web: username and password.

    While username and password have their place, we should be much more critical about whether they are the right solution for our projects. We know that most people use the same password on all the sites they visit. For projects without dedicated security experts, should we really expose our users to the risk that a breach of our site also compromises their Amazon account? Moreover, the classic mechanism has by default at least two attack vectors: the login page and the password recovery page. The latter especially is often implemented hurriedly and is hence inherently riskier.

    We’ve seen quite a few great ideas recently and I got particularly excited by one very straightforward and low-tech solution: one-time passwords. They are fast to implement, have a small attack surface, and require neither QR codes nor JavaScript. Whenever a user wants to log in or has her session invalidated, she receives a short-lived one-time link with a token via email or text message. If you want to give it a spin, feel free to test the demo on

    Unfortunately—depending on your technology stack—there are few to none ready-made solutions out there. Passwordless changes this for Node.js.

    Getting started with Node.js & Express

    Getting started with Passwordless is straightforward and you’ll be able to deploy a fully fledged and secure authentication solution for a small project within two hours:

    $ npm install passwordless --save

    gets you the basic framework. You’ll also want to install one of the existing storage interfaces, such as MongoStore, which stores the tokens securely:

    $ npm install passwordless-mongostore --save

    To deliver the tokens to the users, email would be the most common option (but text message is also feasible) and you’re free to pick any of the existing email frameworks such as:

    $ npm install emailjs --save

    Setting up the basics

    Let’s require all of the above mentioned modules in the same file that you use to initialise Express:

    var passwordless = require('passwordless');
    var MongoStore = require('passwordless-mongostore');
    var email   = require("emailjs");

    If you’ve chosen emailjs for delivery that would also be a great moment to connect it to your email account (e.g. a Gmail account):

    var smtpServer  = email.server.connect({
       user:     yourEmail,
       password: yourPwd,
       host:     yourSmtp,
       ssl:      true
    });

    The final preliminary step would be to tell Passwordless which storage interface you’ve chosen above and to initialise it:

    // Your MongoDB TokenStore
    var pathToMongoDb = 'mongodb://localhost/passwordless-simple-mail';
    passwordless.init(new MongoStore(pathToMongoDb));

    Delivering a token

    passwordless.addDelivery(deliver) adds a new delivery mechanism. deliver is called whenever a token has to be sent. By default, the mechanism you choose should provide the user with a link in the following format: http://www.example.com?token={TOKEN}&uid={UID}

    deliver will be called with all the needed details. Hence, the delivery of the token (in this case with emailjs) can be as easy as:

    passwordless.addDelivery(
        function(tokenToSend, uidToSend, recipient, callback) {
            var host = 'localhost:3000';
            smtpServer.send({
                text:    'Hello!\nAccess your account here: http://'
                    + host + '?token=' + tokenToSend + '&uid='
                    + encodeURIComponent(uidToSend),
                from:    yourEmail,
                to:      recipient,
                subject: 'Token for ' + host
            }, function(err, message) {
                if(err) {
                    console.log(err);
                }
                callback(err);
            });
        });

    Initialising the Express middleware

    app.use(passwordless.sessionSupport());
    app.use(passwordless.acceptToken({ successRedirect: '/'}));

    sessionSupport() makes the login persistent, so the user will stay logged in while browsing your site. Please make sure that you’ve already prepared your session middleware (such as express-session) beforehand.

    acceptToken() will intercept any incoming tokens, authenticate users, and redirect them to the correct page. While the option successRedirect is not strictly needed, it is strongly recommended to use it to avoid leaking valid tokens via the referrer header of outgoing HTTP links on your site.

    Routing & Authenticating

    The following assumes that you’ve already set up your router (var router = express.Router();) as explained in the Express docs.

    You will need at least two URLs to:

    • Display a page asking for the user’s email
    • Accept the form details (via POST)
    /* GET: login screen */
    router.get('/login', function(req, res) {
        res.render('login');
    });

    /* POST: login details */
    router.post('/sendtoken',
        function(req, res, next) {
            // TODO: Input validation
            next();
        },
        // Turn the email address into a user ID
        passwordless.requestToken(
            function(user, delivery, callback) {
                var email = user; // the posted value is the email address
                // E.g. if you have a User model:
                User.findUser(email, function(error, user) {
                    if(error) {
                        callback(error.toString());
                    } else if(user) {
                        // return the user ID to Passwordless
                        callback(null, user.id);
                    } else {
                        // If the user couldn’t be found: Create it!
                        // You can also implement a dedicated route
                        // to e.g. capture more user details
                        User.createUser(email, '', '',
                            function(error, user) {
                                if(error) {
                                    callback(error.toString());
                                } else {
                                    callback(null, user.id);
                                }
                            });
                    }
                });
            }),
        function(req, res) {
            // Success! Tell your users that their token is on its way
            res.render('sent');
        });

    What happens here? passwordless.requestToken(getUserId) has two tasks: Making sure the email address exists and transforming it into a unique user ID that can be sent out via email and can be used for identifying users later on. Usually, you’ll already have a model that is taking care of storing your user details and you can simply interact with it as shown in the example above.

    In some cases (think of a blog edited by just a couple of users) you can also skip the user model entirely and just hardwire valid email addresses with their respective IDs:

    var users = [
        { id: 1, email: '' },
        { id: 2, email: '' }
    ];

    /* POST: login details */
    router.post('/sendtoken',
        passwordless.requestToken(
            function(user, delivery, callback) {
                for (var i = users.length - 1; i >= 0; i--) {
                    if(users[i].email === user.toLowerCase()) {
                        return callback(null, users[i].id);
                    }
                }
                callback(null, null);
            }),
        // Same as above…

    HTML pages

    All it needs is a simple HTML form capturing the user’s email address. By default, Passwordless will look for an input field called user:

            <form action="/sendtoken" method="POST">
                <br /><input name="user" type="text">
                <br /><input type="submit" value="Login">
            </form>

    Protecting your pages

    Passwordless offers middleware to ensure only authenticated users get to see certain pages:

    /* Protect a single page */
    router.get('/restricted', passwordless.restricted(),
        function(req, res) {
            // render the secret page
        });

    /* Protect a path with all its children */
    router.use('/admin', passwordless.restricted());

    Who is logged in?

    By default, Passwordless makes the user ID available through the request object: req.user. To display the ID, or to use it to pull further details from the database, you can do the following:

    router.get('/admin', passwordless.restricted(),
        function(req, res) {
            res.render('admin', { user: req.user });
        });

    Or, more generally, you can add another middleware that pulls the whole user record from your model and makes it available to any route on your site:

    app.use(function(req, res, next) {
        if(req.user) {
            User.findById(req.user, function(error, user) {
                res.locals.user = user;
                next();
            });
        } else {
            next();
        }
    });
    That’s it!

    That’s all it takes to let your users authenticate securely and easily. For more details you should check out the deep dive which explains all the options and the example that will show you how to integrate all of the things above into a working solution.


    As mentioned earlier, all authentication systems have their tradeoffs and you should pick the right system for your needs. Token-based channels share one risk with the majority of other solutions, including the classic username/password scheme: if the user’s email account is compromised, or the channel between your SMTP server and the user’s is, the user’s account on your site will be compromised as well. Two default options help mitigate (but not entirely eliminate!) this risk: short-lived tokens and automatic invalidation of the tokens after they’ve been used once.
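    The two mitigations can be sketched in a few lines. The store below is a hypothetical illustration of the idea (it is not Passwordless’s actual TokenStore implementation, and the 15-minute TTL is an assumed value): tokens expire after a TTL and are invalidated on first use.

    ```javascript
    // Hypothetical single-use, expiring token store (illustration only).
    function TokenStore(ttlMs) {
        this.ttlMs = ttlMs;
        this.tokens = {}; // token -> { uid: ..., expires: ... }
    }

    TokenStore.prototype.store = function(token, uid, now) {
        this.tokens[token] = { uid: uid, expires: now + this.ttlMs };
    };

    TokenStore.prototype.authenticate = function(token, now) {
        var entry = this.tokens[token];
        if (!entry) {
            return null;            // unknown or already used
        }
        delete this.tokens[token];  // automatic invalidation after first use
        if (now > entry.expires) {
            return null;            // short-lived: token has expired
        }
        return entry.uid;
    };

    var store = new TokenStore(15 * 60 * 1000); // e.g. a 15-minute TTL
    store.store('a1b2c3', 42, Date.now());
    console.log(store.authenticate('a1b2c3', Date.now())); // 42
    console.log(store.authenticate('a1b2c3', Date.now())); // null (used once)
    ```

    Even if an emailed link leaks later, it is worthless once used or expired; a production store would additionally hash the tokens at rest.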

    For most sites token-based authentication represents a step up in security: users don’t have to think of new passwords (which are usually too simple) and there is no risk of users reusing passwords. For us as developers, Passwordless offers a solution that has only one (and simple!) path of authentication that is easier to understand and hence to protect. Also, we don’t have to touch any user passwords.

    Another point is usability. We should consider both the first-time use of your site and subsequent logins. For first-time users, token-based authentication couldn’t be more straightforward: they still have to validate their email address, as with classic login mechanisms, but in the best case no additional details are required. No creativity is needed to come up with a password that fulfils all restrictions, and there is nothing to memorise. When a user logs in again, the experience depends on the specific use case. Most websites have relatively long session timeouts, so logins are relatively rare. Or people’s visits to the website are actually so infrequent that they have difficulty recalling whether they already had an account and, if so, what the password might have been. In those cases Passwordless presents a clear advantage in terms of usability. Also, there are few steps to take, and those can be explained very clearly along the way. Websites that users visit frequently and/or that have conditioned people to log in several times a week (think of Amazon) might, however, benefit from a classic (or even better: two-factor) approach, as people will likely remember their passwords and there is more opportunity to convince users of the importance of good passwords.

    While Passwordless is considered stable, I would love your comments and contributions on GitHub or your questions on Twitter: @thesumofall

  6. Unity games in WebGL: Owlchemy Labs’ conversion of Aaaaa! to asm.js

    You may have seen the big news today, but for those who’ve been living in an Internet-less cave, starting today through October 28 you can check out the brand spankin’ new Humble Mozilla Bundle. The crew here at Owlchemy Labs were given the unique opportunity to work closely with Unity, maker of the leading cross-platform game engine, and Humble to attempt to bring one of our games, Aaaaa! for the Awesome, a collaboration with Dejobaan Games, to the web via technologies like WebGL and asm.js.

    I’ll attempt to enumerate some of the technical challenges we hit along the way as well as provide some tips for developers who might follow our path in the future.

    Unity WebGL exporter

    Working with pre-release alpha versions of the Unity WebGL exporter (now in beta) was a surprisingly smooth experience overall! Jonas Echterhoff, Ralph Hauwert and the rest of the team at Unity did an amazing job getting the core engine running with asm.js and playing Unity content in the browser at incredible speeds; it was pretty staggering. When you look at the scope of the problem and the technical magic needed to go all the way from C# scripting down to the final 1-million-plus-line .js file, the technology is mind boggling.

    Thankfully, Unity has allowed us content creators and game developers to stop worrying about getting our games to compile for this new build target by taking care of the heavy lifting under the hood. So did we just hit the big WebGL export button and sit back while Unity cranked out the HTML and JS? Well, it’s a bit more involved than that, but it’s certainly better than some of the prior early-stage ports we’ve done.

    For example, our experience with bringing a game through the now defunct Unity to Stage3D/Flash exporter during the Flash in a Flash contest in late 2011 was more like taking a machete to a jungle of code, hacking away core bits, working around inexplicably missing core functionality (no generic lists?!) and making a mess of our codebase. WebGL was a breeze comparatively!

    The porting process

    Our porting process began in early June of this year when we gained alpha access to the WIP WebGL exporter to prove whether a complex game like Aaaaa! for the Awesome was going to be portable within a relatively short time frame with such an early framework. After two days of mucking about with the exporter, we knew it would be doable (and had content actually running in-browser!) but as with all tech endeavors like this, we were walking in blind as to the scope of the entire port that was ahead of us.

    Would we hit one or two bugs? Hundreds? Could it be completed in the short timespan we were given? Thankfully we made it out alive and dozens of bug reports and fixes later, we have a working game! Devs jumping into this process now (October 2014 and onward) fortunately get all of these fixes built in from the start and can benefit from a much smoother pipeline from Unity to WebGL. The exporter has improved by a huge amount since June!

    Initial issues

    We came across some silly issues that were either caused by our project’s upgrade from Unity 4 to Unity 5 or simply by the exporter being in such “early days”. Fun little things such as all mouse cursor coordinates being inexplicably inverted caused some baffled faces, but this has of course been fixed at the time of writing. We also hit some physics-related bugs that turned out to have been caused by the Unity 4 to Unity 5 upgrade — this led to a hilarious bug where players wouldn’t smash through score plates and get points but instead slammed into score plates as if they were made of concrete, instantly crushing the skydiving player. A fun new feature!

    Additionally, we came across a very hard-to-track-down memory leak bug that only exhibited itself after playing the game for an extended session. With a hunch that the leak revolved around scene loading and unloading, we built a hands-off repro case that loaded and unloaded the same scene hundreds of times, causing the crash and helping the Unity team find and fix the leak! Huzzah!

    Bandwidth considerations

    The above examples are fun to talk about but have essentially been solved by this point. That leaves developers with two core development issues that they’ll need to keep in mind when bringing games to the Web: bandwidth considerations, and form factor / user experience changes.

    Aaaaa! is a great test case for a worst-case scenario when it comes to file size. We have a game with over 200 levels or zones, over 300 level assets that can be spawned at runtime in any level, 48 unique skyboxes (6 textures per sky!), and 38 full-length songs. Our standalone PC/Mac build weighs in at 388 MB uncompressed. Downloading almost 400 megabytes to get to the title screen of our game would be completely unacceptable!

    In our case, we were able to rely on Unity’s build process to efficiently strip and pack the build into a much smaller size, and we also took advantage of Unity’s AudioClip streaming solution to stream in our music at runtime on demand! The file-size savings from streaming music were huge, and this approach is highly recommended for all Unity games. To glean additional file-size savings, Asset Bundles can be used for loading levels on demand, but they are best used in simple games or when building games from the ground up with the web in mind.

    In the end, our final *compressed* WebGL build size, which includes all of our loaded assets as well as the Unity engine itself, ended up weighing in at 68.8 MB, compared to a *compressed* standalone size of 192 MB, almost 3x smaller than our PC build!

    Form factor/user experience changes

    User experience considerations are the other important factor to keep in mind when developing games for the Web or porting existing games to be fun, playable Web experiences. Respecting the form factor of the Web includes avoiding “sacred” key presses, such as Escape. Escape is used as pause in many games, but many browsers eat the Escape key and reserve it for exiting full-screen mode or releasing mouse lock. Mouse lock and full-screen are both important to creating fully-fledged gaming experiences on the web, so you’ll want to find a way to re-bind keys to avoid these special key presses that are off-limits in the browser.
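    One low-tech way to re-bind keys, sketched below under the assumption that the game reads key codes through a single input function, is a remapping table that routes reserved browser keys to safe in-game alternatives (the Escape-to-P mapping is just an example, not something the article prescribes):

    ```javascript
    // Hypothetical key-remapping table: route keys the browser reserves
    // (e.g. Escape exits full-screen / releases mouse lock) to safe codes.
    var KEY_ESCAPE = 27;
    var KEY_P = 80;

    var remapTable = {};
    remapTable[KEY_ESCAPE] = KEY_P; // pause on 'P' instead of Escape

    function remapKey(keyCode) {
        return remapTable.hasOwnProperty(keyCode) ? remapTable[keyCode] : keyCode;
    }

    console.log(remapKey(KEY_ESCAPE)); // 80
    console.log(remapKey(65));         // 65 ('A' passes through unchanged)
    ```

    Feeding every keydown code through such a function keeps the game logic unchanged while the reserved keys never reach it.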

    Secondly, you’ll want to remember that you’re working within a sandboxed environment on the Web so loading in custom music from the user’s hard drive or saving large files locally can be problematic due to this sandboxing. It might be worth evaluating which features in your game you might want to be modified to fit the Web experience vs. a desktop experience.

    Players also notice the little things that tip someone off that a game is a rushed port. For example, if you have a quit button on the title screen of your PC game, you should definitely remove it in your web build, as quitting is not a paradigm used on the Web. At any point the user can simply navigate away from the page, so watch out for elements in your game that don’t fit the current web ecosystem.

    Lastly you’ll want to think about ways to allow your data to persist across multiple browsers on different machines. Gamers don’t always sit at the same machine to play their games, which is why many services allow for cloud save functionality. The same goes for the Web, and if you can build such a system (like the one the wonderfully talented Edward Rudd created for the Humble Player), it will help the overall web experience for the player.

    Bringing games to the Web!

    So with all of that being said, the Web seems like a very viable place to be bringing Unity content as the WebGL exporter solidifies. You can expect Owlchemy Labs to bring more of their games to the Web in the near future, so keep an eye out for those! ;) With our content running at almost the same speed as native desktop builds, we definitely have a revolution on our hands when it comes to portability of content, empowering game developers with another outlet for their creative content, which is always a good thing.

    Thanks to Dejobaan Games, the team at Humble Bundle, and of course the team at Unity for making all of this possible!

  7. Blend4Web: the Open Source Solution for Online 3D

    Half a year ago Blend4Web was first released publicly. In this article I’ll show what Blend4Web is, how it has evolved and how it can be used for web development.

    What Is Blend4Web?

    In short, Blend4Web is an open source framework for creating 3D web applications. It uses Blender – the popular open source 3D modeling suite – as the primary authoring tool. 3D graphics is rendered by means of WebGL which is also an open standard technology. The two main keywords here – Blender and Web(GL) – explain the purpose of this engine perfectly.

    The full source code of Blend4Web together with some usage examples is available under GPLv3 on GitHub (there is also a commercial licensing option).

    The 3D Web

    On June 2nd Apple presented their new operating systems – OS X Yosemite and iOS 8 – both featuring WebGL support in their Safari browser. That marked the end of a five-year cycle during which the WebGL technology evolved, starting with the first unstable browser builds (does anybody remember Firefox 3.7 alpha?). Now, all the major browsers on all desktop and mobile systems support this open standard for rendering 3D graphics, everywhere, without any plugins.

    That was a long and difficult road, along which Blend4Web development shadowed WebGL development: broken rendering, tab crashes, security “warnings” from some big guys, unavailability in public browser builds, all sorts of fears, uncertainty and doubts. None of this mattered, because we had the opportunity to do 3D graphics (and sound) in browsers!


    The first Blender 2.5x builds appeared in summer 2010. At the time we, the programming geeks, were pushed to learn the basics of 3D modeling by the beautiful Sintel from the open source movie of the same name. By choosing Blender, we could be as independent as possible, with a fully open source pipeline organized on a Linux platform. Blender gave us the power to make our own 3D scenes, and later helped to attract talented artists from its wonderful community to join us.

    Blend4Web Evolution in Demos

    Our demo scenes matured together with the development of Blend4Web. The first one was a quite low-poly and almost non-interactive demo called The Island. It was created in 2011 and polished a bit before the public release. In this demo we introduced our Blender-based pipeline, in which all the assets are stored in separate files and are linked into the main file for level design and further exporting (for this reason some Blend4Web users call it a “free Unity Pro”).

    In Fashion Show we developed cloth animation techniques. Some post-processing effects, dynamic reflection and particle systems were added later. After Blend4Web went public we summarized these cloth-related tricks in one of our tutorials.

    The Farm is a huge scene (by the standards of a browser): over 25 hectares of land, buildings, animated animals and foliage. We added some gamedev elements to it, including first-person walking, interacting with objects, and driving a vehicle. The demo features spatial audio (via Web Audio) and physics (via Bullet and asm.js). The Freedesktop folks tried it as a benchmark while testing the Mesa drivers (and got “massive crashes” :).

    We also tried some visualization and created Nature Morte. In this scene we used carefully crafted textures and materials, as well as post-processing effects, to improve realism. However, the technology used for this demo was quite simple and old-school, as we had no support for visual shader editing yet.

    Things changed when Blender’s node materials became available to our artists. They created over 40 different materials for the Sports Car model: chromed metal, painted metal, glass, rubber, leather etc.

    In our latest release we went even further by adding support for user-controlled animation. Now interactivity can be implemented without any coding. To demonstrate the new possibilities we presented an interactive infographic of a light helicopter.

    Among the other possible applications of this simple yet effective tool (called NLA Script) we can list the following: interactive 3D web design, product promotions, learning materials, cartoons with the ability to choose between different story lines, point-and-click games and any other applications previously created with Flash.

    Using Blend4Web

    It is very easy to start using Blend4Web – just download and install the Blender addon as shown in this video tutorial:

    The most wonderful thing is that your Blender scene can be exported into a self-contained HTML file that can be emailed, uploaded to your own website or to a cloud – in short, shared however you like. This freedom is a fundamental difference from numerous 3D web publishing services, as we don’t lock our users into our technology by any means.

    For those who want to create highly interactive 3D web apps we offer the SDK. Some notable examples of what is possible with the Blend4Web API are demonstrated in our programming tutorials, ranging from web design to games.

    Programming 3D web apps with Blend4Web is not much harder than building average RIAs. Unlike some other WebGL frameworks in the wild we tried to offload all graphics, animation and audio tasks to respective professionals. The programmer just loads the scene…

    var m_data = require("data");
    m_data.load("example.json", load_cb);

    …and then writes the logic which triggers the 3D scene changes that are “hard-coded” by the artists, e.g. plays the animation for the user-selected object:

    var m_scenes = require("scenes");
    var m_anim = require("animation");
    var myobj = m_scenes.pick_object(event.clientX, event.clientY);
    // then play the object's baked animation ("animation" module names here
    // are assumptions; check the current Blend4Web docs for exact signatures)
    m_anim.apply_def(myobj);
    m_anim.play(myobj);

    As you can see, the APIs are structured in a CommonJS way, which we believe is important for creating compact and fast web apps.

    The Future

    There are many possible directions in which the Internet and IT may go, but there is no doubt that the strong and steady development of the 3D Web is already underway. We expect that more and more users will change their expectations about how web content should look and feel. We’re going to help web developers meet these demands, with plans to improve usability and performance and to implement new interesting graphics effects.

    We also follow the development of WebGL 2.0 (thanks, Mozilla, for your work) and expect to create even more nice things on top of it.

    Stay Tuned

    Read our blog, join us on Twitter, Google+, Facebook and Reddit, watch the demos and tutorials on our YouTube channel, and fork Blend4Web on GitHub.

  8. The Missing SDK For Hybrid App Development

    Hybrid vs. native: The debate has gone on, and will go on, for ages. Each form of app development has its pros and cons, and an examination of the key differences between the two methods reveals that a flat comparison is like comparing apples to oranges. Many hybrid app developers understand that they’re not starting on a level playing field with native developers, but it’s important for them to understand exactly how that field fails to be level. We should analyze the differences at the app development framework level, instead of simply comparing hybrid to native.

    Native App Development

    Native software development kits (SDKs) come prepackaged with the required tools to compile and sign apps – all that boring stuff that “just works.” Their canned templates and the common code they provide determine “how” an app works, which are hugely beneficial features hiding in plain sight.

    For example, Apple’s own development environment, Xcode, is a suite of tools that make it easier to develop software for Apple platforms. As soon as you launch Xcode, it provides a list of templates for a new project. Let’s create an iOS tabbed application and see just how many tools native app developers have available to them from the start.

    After plugging in a few settings, I quickly have a working tabbed app that can be run within a minute of Xcode’s launch. Awesome! The easy part is done, and now it’s time to sport the hoodie and headphones and focus on my goal of creating a best-selling iOS app.

    Let’s drill into this a little further and take a look at what Xcode built for me, without me even having to give it a second thought. The app fills 100% of the device’s display; the tabs are aligned to the base with a gray background; one tab is highlighted blue, the other one gray, with icons on top of the tab’s title. Event listeners are added to the tab buttons, and when I click the second tab, it then turns blue, and the other one turns gray, along with switching out the content for the second view.

    After a little more development, my app will have a separate navigation stack for each tab; the back button will show when needed; the title of each view will switch out in the header; and smooth transitions will happen as I navigate between different views. Fonts, colors, gradients, and styles will all follow what an app should look like on that platform. By expanding folders in Xcode’s project navigator, I notice the ridiculous amount of files that were placed there for me. I have no clue what they do and honestly don’t really care, but they’re actually extremely important: They’re the files that made the development of my app so seamless.

    Hybrid App Development

    Honestly, though, I’m a web developer. My organization isn’t filled with Objective-C and Java developers ready to build multiple apps with the ultimate goal of just trying to be the same app (but built entirely differently, from the ground up). I have existing HTML/CSS/JS skills, and I want my app to hit not only Apple’s App Store, but Google’s Play Store and all the others. Instead of locking myself into each platform and having to learn and maintain multiple proprietary languages, hybrid app development makes more sense, given my skill set and my time constraints.

    Don’t let hybrid app development scare you. In its simplest form, a hybrid app is still just a web browser, but without the URL bar and back button. This means you can build a full-fledged app using HTML/CSS/JS but still have it work like a native app, with all the same superpowers, like accessing Bluetooth, GPS, camera, files, device motion, etc. Hybrid apps offer the best of both worlds.

    Let’s Build A Hybrid App

    For my use-case, I’ve decided a hybrid app is perfect, so let’s fire up… wait, there is no Xcode for hybrid apps! Where do I get all those files that make my app do app things? Where are the built-in UI, cool animations, and smooth transitions? The disadvantage of hybrid app development is that those starting with the web platform have to start from scratch. It’s the Wild West, and each developer is on his or her own.

    I absolutely love the web platform. It provides a core set of tools to not only share information but consume it. However, the web platform offers only the basic building blocks for structuring content; styling and interacting with that content; and navigating between documents. The core of the web platform does not provide pre-packaged libraries like those in iOS and Android. At the lowest level, a web browser represents one “view” and one navigation/history stack. This is another area in which native apps have an advantage, and hybrid devs burn up time generating code to give browsers the ability to handle multiple views with multiple stacks.
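
    The bookkeeping described above can be sketched in a few lines. Here is a minimal illustration in plain JavaScript (all names are invented for this sketch, not any framework’s actual API) of per-tab navigation stacks – the kind of plumbing a native tab controller supplies for free:

    ```javascript
    // Toy per-tab navigation stacks; hybrid devs end up hand-writing
    // something like this, while iOS's UITabBarController ships with it.
    function NavManager(tabNames) {
      this.stacks = {};               // one history stack per tab
      this.activeTab = tabNames[0];
      var self = this;
      tabNames.forEach(function (name) { self.stacks[name] = []; });
    }

    // Navigate forward within the currently active tab.
    NavManager.prototype.push = function (view) {
      this.stacks[this.activeTab].push(view);
    };

    // Go back one view, but never pop past the tab's root view.
    NavManager.prototype.back = function () {
      var stack = this.stacks[this.activeTab];
      return stack.length > 1 ? stack.pop() : null;
    };

    // Switching tabs preserves each tab's own stack, like native tab bars.
    NavManager.prototype.switchTab = function (name) {
      this.activeTab = name;
    };

    // The current view is the top of the active tab's stack.
    NavManager.prototype.current = function () {
      var stack = this.stacks[this.activeTab];
      return stack[stack.length - 1];
    };
    ```

    Even this sketch ignores transitions, the Android back button and history restoration – each of which native SDKs also handle out of the box.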

    Hybrid app developers quickly find themselves dealing with countless issues just to get their heads above water: figuring out the CSS to fill 100% of the viewport, sticking the tabs to the bottom and hovering content, adding event listeners, switching active states for icons and views, keeping track of the navigation stack of each tab, implementing touch gestures, generating a common way to navigate and transition between views, using hardware-accelerated animations for the transitions, removing 300ms delays, knowing when to show or hide the back button, updating and animating the header title, interacting with Android’s back button, smooth scrolling through thousands of items, etc. All of these things come with native SDKs. And as an app grows, it’ll seem as though recreating natural, native interactions is a constant uphill battle – all of which native developers rarely think about, because for them it was baked in from the start.

    I believe the solution is to use a framework/library that levels the playing field, giving hybrid developers a toolkit similar to that of native developers. Most developers have a sense of pride in writing everything themselves, and there are certainly plenty of use-cases in which frameworks and/or libraries are not required. But when it comes to hybrid app development, there is a whole world of common code being recreated by each developer, for each app, and that’s a lot of work and time to invest in something that native developers already get.

    Cordova (the guts of PhoneGap) is the predominant open-source software for morphing the web platform into a native app. But just like the web platform, it offers only the basic building blocks, rather than a built-in framework to start with.

    Frameworks and libraries amplify app development and allow devs to focus on how their apps work, not the logic of how apps work.

    Hybrid Development Frameworks: Hybrid’s Missing SDK

    I define frameworks and libraries as any software package that increases productivity. In my experience, hybrid developers’ main frustration lies not in creating their app, but rather in getting their app to work like an app, which is a subtle, yet significant difference.

    With the advent of development frameworks focused on hybrid app development, developers finally have that missing SDK that levels the playing field with native. Cross-platform frameworks that embrace HTML/CSS/JS allow web developers to bring their skills to app development. To increase productivity, hybrid app development frameworks should not only stick to the standards, with pretty CSS and markup; they should also provide an MVC for serious, large-scale app development by a team.

    Sworkit, a hybrid app built using the Ionic Framework, featured in the iOS App Store.

    Frameworks like the Ionic Framework provide a common set of logic used to create apps, which isn’t already baked into the web platform (full disclosure: I am a co-creator of Ionic and a core contributor to AngularJS Material Design). Rather than inventing yet another MVC, Ionic chose to build on top of AngularJS, which has proven itself to be a leader among popular front-end frameworks in the last few years. Ionic is backed by a large and active community, solid documentation, and its own command-line tool to increase productivity (the CLI isn’t required but is highly recommended).

    Ionic’s use of AngularJS directives to create the user-interface allows for fast development, but most importantly, it’s highly customizable, through its use of Sass and simple markup. If you’re unfamiliar with AngularJS, Ionic offers a great way to learn it quickly, via numerous examples and templates.

    Google’s Polymer project and Mozilla Brick are built on top of the cutting-edge Web Components specification. As with AngularJS directives, Web Components offer a great way to abstract away complex code with simple HTML tags.

    Other proven hybrid development frameworks that have been around longer and offer their own powerful MVC/MVVM include Sencha Touch and Kendo UI. Both frameworks come with a large list of built-in components and widgets that enable developers to build powerful apps that work on iOS, Android, BlackBerry, Windows Phone, and more.

    I do not consider front-end frameworks, such as AngularJS, Ember, and Backbone, to be hybrid app development frameworks. While I highly recommend front-end frameworks for large-scale app development, they’re focused on the logic, rather than the user interface.

    Frameworks like Twitter Bootstrap and Foundation offer a great user interface and responsive-design capabilities. However, CSS frameworks focus on the UI and a responsive design pattern; a UI alone still requires each developer to recreate the way a native app “works.”

    App frameworks abstract away the complexity of building an app and offer developers both the core logic and the UI, similar to what iOS and Android provide. I believe it is important to embrace the web standards and the web platform at the core, so it can be easily run from Cordova and built for multiple platforms (including a standard web browser).

    Software Engineering: Still Required

    Hybrid app development frameworks have come a long way in the last few years, and they have a bright future ahead of them. However, developing hybrid apps still requires actual software engineering, just as developing native apps does. As in any form of web development, if only a few hours are put into a project, the result is rarely a highly polished and production-ready piece of work.

    Hybrid fans sometimes offer the impression that frameworks create a unicorn and rainbow filled wonderland during development, which may mislead new developers. Just because you can write once and deploy everywhere doesn’t mean you’re exempt from the rules of software development. Hybrid offers many benefits, but as with so many things in life, cutting corners does not produce quality results, so please don’t claim all is lost after minimal effort.

    Making An Accurate Comparison

    The comparison of a hybrid app without a framework to a native app isn’t a fair one. To accurately examine hybrid vs. native development, we need to analyze each method at the framework level (hybrid with a framework vs. native, like Ionic Framework vs. iOS or Polymer vs. Android). As for hybrid vs. native: It’s a matter of studying your use-case first, rather than choosing a definitive level of abstraction over everything else. It all ends up as 1s and 0s anyway, so jumping up another level of abstraction (HTML/CSS/JS) may be the right fit for your use-case and organization.

    I often feel that opinions regarding hybrid app development were formed well before hybrid was ready for prime time. The world has quickly been transitioning to faster devices and browsers, as well as improved web standards, and there are no signs of either one slowing down. There is no better time than now to leverage your existing web development skills and build quality apps by standing on the shoulders of hybrid SDKs.


  9. Matchstick Brings Firefox OS to Your HDTV: Be the First to get a Developer Stick

    The first HDMI streaming stick powered by Firefox OS has arrived. It’s called Matchstick and we’re looking for your help to create apps for this new device.


    Matchstick stems from a group of coders that spent way too much time mired in the guts of platforms such as Boot to Gecko, XBMC, and Boxee. When Google introduced Chromecast we were excited about the possibilities but ultimately were disappointed when they pulled back on the device’s ultimate promise – any content on any HD screen, anywhere, anytime.

    We decided to make something better and more open, and to accomplish this we had to choose an operating system that would become the bedrock for the adaptable and open-sourced platform that is Matchstick. That platform is Firefox OS, which allows us to build the first streaming stick free of any walled garden ecosystem.

    Matchstick and Firefox OS combined offer a totally open platform (both software and hardware) that lets developers explore content and applications, from video to games, and bring them right into the living room. That’s right, developers! An open SDK means you can build out your own personalized streaming and interactive experiences without the need for approval or review.

    Apps for Matchsticks

    We have opened up a full developer site with access to everything you need to begin working with Matchstick. Support for Firefox OS will be available at launch, and we look forward to adding TV applications to the Firefox OS Marketplace. For now, we have included a full API library, sender apps with support for Android and iOS, and receiver apps that are compatible with the Matchstick Receiver.

    When we say Apps for Matchsticks, we mean both sender and receiver apps. You can use the sender APIs to enable your Firefox OS, Android or iOS device to discover a Matchstick device, then communicate with your receiver app. It’s not difficult to embed the sender APIs into existing apps or create new sender apps; please refer to the sample code we have included with the SDK.

    Matchstick sender apps typically follow this execution flow:

    1. Scan for Matchstick
      The sender app searches for Matchstick devices on the same Wi-Fi network as the sender device. The scan reveals a friendly display name, model and manufacturer, icon, and the device’s IP address. Presented with a list, a user may then select a target device from all those discovered.
    2. Connect to Matchstick
      We support both TLS and NON-TLS communication between sender and receiver.
    3. Launch Receiver App
      The sender initiates a negotiation with the target device, launching a receiver app with either the URL of an HTML5 receiver app or even a Chromecast App ID.
    4. Establish Message Channels
      With the receiver app now launched, Matchstick establishes message channels between the sender and receiver. In addition to a media control channel common to all Matchstick and Chromecast apps, you may establish any number of application-specific channels to convey whatever customized data that your app might require.

    The receiver app is a combination of HTML5, CSS and JavaScript, loaded into a “receiver container,” which is a certified Firefox OS app. To use the Matchstick receiver APIs, you need only include fling_receiver.js in your app.

    Here is an example of a simple video receiver app:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Example simplest receiver</title>
        <meta charset="UTF-8">
        <!--(need) include matchstick receiver sdk-->
        <script src="//"></script>
      </head>
      <body>
        <!--(need) Give a video tag-->
        <video id="media"></video>
        <script>
          // (need) Get video element
          window.mediaElement = document.getElementById("media");
          // (need) For media applications, override/provide any event listeners on the MediaManager.
          window.mediaManager = new fling.receiver.MediaManager(window.mediaElement);
          // (need) Get and hold the provided instance of the FlingReceiverManager.
          // Override/provide any event listeners on the FlingReceiverManager.
          window.flingReceiverManager = fling.receiver.FlingReceiverManager.getInstance();
          // (need) Call start on the FlingReceiverManager to indicate the receiver application is ready to receive messages.
          window.flingReceiverManager.start();
        </script>
      </body>
    </html>

    Matchsticks for Apps

    Rather than relying on emulators, we want to be sure developers can get their hands on Matchstick prototypes and start coding without delay. As a result, we are inviting app developers who will commit to building and porting apps for Firefox OS on Matchstick to apply for a free developer-preview device through our Matchsticks for Apps program.

    Similar to the phones-for-apps program launched by Mozilla, our Matchsticks for Apps program is aimed at developers who have built apps for Firefox OS, Chrome, Android, iOS… even for Chromecast! Let’s bring those visions to the big screen.


    Developer (pre-production) version of Matchstick (pic by Christian Heilmann)

    We are now looking for good ideas, video content, new channels, as well as games, tools, utilities, pictures, even skins for the UI. If you have a plan to build apps or do something for Matchstick, please share your plan and we’ll send you a stick so you can start coding ASAP.

    Who Should Apply:

    • Those interested in building apps on a big screen TV
    • Those with existing Web or mobile apps, who would like to expand to the big screen
    • HDMI dongle developers, who want to build their own Matchstick
    • Chromecast developers, who want to port their apps to an open platform
    • You!

    Matchstick workshop in November

    On Tuesday, November 18th, we plan to host an invitation-only Firefox OS App Workshop for Matchstick in the Mozilla office in San Francisco. The enrollment form is open and we are accepting applications from qualified developers.

    Apply now for the San Francisco workshop!

    If you don’t live near San Francisco, don’t worry! We plan to offer several other app workshops in the near future and we’ll announce them here, of course!

    Happy Hacking!

  10. Generational Garbage Collection in Firefox

    Generational garbage collection (GGC) has now been enabled in the SpiderMonkey JavaScript engine in Firefox 32. GGC is a performance optimization only, and should have no observable effects on script behavior.

    So what is it? What does it do?

    GGC is a way for the JavaScript engine to collect short-lived objects faster. Say you have code similar to:

    function add(point1, point2) {
        return [ point1[0] + point2[0], point1[1] + point2[1] ];
    }

    Without GGC, you will have high overhead for garbage collection (from here on, just “GC”). Each call to add() creates a new Array, and it is likely that the old arrays that you passed in are now garbage. Before too long, enough garbage will pile up that the GC will need to kick in. That means the entire JavaScript heap (the set of all objects ever created) needs to be scanned to find the stuff that is still needed (“live”) so that everything else can be thrown away and the space reused for new objects.

    If your script does not keep very many total objects live, this is totally fine. Sure, you’ll be creating tons of garbage and collecting it constantly, but the scan of the live objects will be fast (since not much is live). However, if your script does create a large number of objects and keep them alive, then the full GC scans will be slow, and the performance of your script will be largely determined by the rate at which it produces temporary objects — even when the older objects aren’t changing, and you’re just re-scanning them over and over again to discover what you already knew. (“Are you dead?” “No.” “Are you dead?” “No.” “Are you dead?”…)

    Generational collector, Nursery & Tenured

    With a generational collector, the penalty for temporary objects is much lower. Most objects will be allocated into a separate memory region called the Nursery. When the Nursery fills up, only the Nursery will be scanned for live objects. The majority of the short-lived temporary objects will be dead, so this scan will be fast. The survivors will be promoted to the Tenured region.

    The Tenured heap will also accumulate garbage, but usually at a far lower rate than the Nursery. It will take much longer to fill up. Eventually, we will still need to do a full GC, but under typical allocation patterns these should be much less common than Nursery GCs. To distinguish the two cases, we refer to Nursery collections as minor GCs and full heap scans as major GCs. Thus, with a generational collector, we split our GCs into two types: mostly fast minor GCs, and fewer slower major GCs.

    GGC Overhead

    While it might seem like we should have always been doing this, it turns out to require quite a bit of infrastructure that we previously did not have, and it also incurs some overhead during normal operation. Consider the question of how to figure out whether some Nursery object is live. It might be pointed to by a live Tenured object — for example, if you create an object and store it into a property of a live Tenured object.

    How do you know which Nursery objects are being kept alive by Tenured objects? One alternative would be to scan the entire Tenured heap to find pointers into the Nursery, but this would defeat the whole point of GGC. So we need a way of answering the question more cheaply.

    Note that these Tenured ⇒ Nursery edges in the heap graph won’t last very long, because the next minor GC will promote all survivors in the Nursery to the Tenured heap. So we only care about the Tenured objects that have been modified since the last minor (or major) GC. That won’t be a huge number of objects, so we make the code that writes into Tenured objects check whether it is writing any Nursery pointers, and if so, record the cross-generational edges in a store buffer.

    In technical terms, this is known as a write barrier. Then, at minor GC time, we walk through the store buffer and mark every target Nursery object as being live. (We actually use the source of the edge at the same time, since we relocate the Nursery object into the Tenured area while marking it live, and thus the Tenured pointer into the Nursery needs to be updated.)
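
    As a toy model (plain JavaScript, every name invented for illustration – SpiderMonkey’s real barriers live in C++ and are far more subtle), the write barrier and store buffer behave roughly like this:

    ```javascript
    // Toy model of a generational write barrier and store buffer.
    var nursery = new Set();      // objects allocated since the last minor GC
    var storeBuffer = [];         // recorded Tenured -> Nursery edges

    function allocate(obj) {
      nursery.add(obj);           // new objects start life in the Nursery
      return obj;
    }

    // All writes into objects go through this barrier.
    function writeProperty(target, key, value) {
      target[key] = value;
      // Barrier: a tenured object now points at a nursery object, so record
      // the edge. The next minor GC walks only the store buffer, not the
      // whole tenured heap, to find nursery objects kept alive this way.
      if (!nursery.has(target) && nursery.has(value)) {
        storeBuffer.push({ source: target, key: key });
      }
    }

    // Minor GC: nursery objects reachable via the store buffer survive and
    // are promoted to the tenured generation; everything else is discarded.
    function minorGC() {
      var survivors = storeBuffer.map(function (edge) {
        return edge.source[edge.key];
      });
      nursery = new Set();        // the rest of the Nursery is garbage
      storeBuffer = [];
      return survivors;           // now tenured
    }
    ```

    The sketch also omits roots such as stack references, which the real collector of course scans as well.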

    With a store buffer, the time for a minor GC is dependent on the number of newly-created edges from the Tenured area to the Nursery, not just the number of live objects in the Nursery. Also, keeping track of the store buffer records (or even just the checks to see whether a store buffer record needs to be created) does slow down normal heap access a little, so some code patterns may actually run slower with GGC.

    Allocation Performance

    On the flip side, GGC can speed up object allocation. The pre-GGC heap needs to be fully general. It must track in-use and free areas and avoid fragmentation. The GC needs to be able to iterate over everything in the heap to find live objects. Allocating an object in a general heap like this is surprisingly complex. (GGC’s Tenured heap has pretty much the same set of constraints, and in fact reuses the pre-GGC heap implementation.)

    The Nursery, on the other hand, just grows until it is full. You never need to delete anything, at least until you free up the whole Nursery during a minor GC, so there is no need to track free regions. Consequently, the Nursery is perfect for bump allocation: to allocate N bytes you just check whether there is space available, then increment the current end-of-heap pointer by N bytes and return the previous pointer.
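
    A bump allocator is only a few lines. Here is an illustrative model in JavaScript (the engine’s real allocator is C++; this just mirrors the idea), carving allocations out of a preallocated buffer:

    ```javascript
    // Toy bump allocator over a fixed-size buffer, mimicking how the Nursery
    // hands out memory: no free lists, just a moving end-of-heap pointer.
    function Nursery(sizeInBytes) {
      this.buffer = new ArrayBuffer(sizeInBytes);
      this.offset = 0;                      // current end-of-heap pointer
    }

    // Allocate n bytes: check for space, bump the pointer, return the old one.
    Nursery.prototype.alloc = function (n) {
      if (this.offset + n > this.buffer.byteLength)
        return null;                        // full: time for a minor GC
      var ptr = this.offset;
      this.offset += n;                     // the "bump"
      return ptr;
    };

    // A minor GC empties the whole Nursery at once: just reset the pointer.
    Nursery.prototype.reset = function () {
      this.offset = 0;
    };
    ```

    Note how freeing is free: the reset is a single store, no matter how many dead objects the Nursery held.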

    There are even tricks to optimize away the “space available” check in many cases. As a result, objects with a short lifespan never go through the slower Tenured heap allocation code at all.


    I wrote a simple benchmark to demonstrate the various possible gains of GGC. The benchmark is sort of a “vector Fibonacci” calculation, where it computes a Fibonacci sequence for both the x and y components of a two dimensional vector. The script allocates a temporary object on every iteration. It first times the loop with the (Tenured) heap nearly empty, then it constructs a large object graph, intended to be placed into the Tenured portion of the heap, and times the loop again.
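
    The benchmark itself isn’t reproduced in this article; a minimal reconstruction of the idea (my sketch, not the original code) looks something like this:

    ```javascript
    // Sketch of a "vector Fibonacci" loop: each iteration allocates a
    // short-lived {x, y} object, exactly the allocation pattern a
    // generational collector handles well.
    function vectorFib(n) {
      var prev = { x: 1, y: 1 };            // fib(1) in both components
      var curr = { x: 1, y: 1 };            // fib(2) in both components
      for (var i = 3; i <= n; i++) {
        var next = { x: curr.x + prev.x, y: curr.y + prev.y }; // temporary
        prev = curr;
        curr = next;    // the object from two iterations ago is now garbage
      }
      return curr;
    }

    // Time the loop; with GGC the dead temporaries die cheaply in the Nursery.
    var t0 = Date.now();
    var result = vectorFib(30);
    var elapsed = Date.now() - t0;
    ```

    Timing this once with an empty heap and once after building a large long-lived object graph shows the effect described below.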

    On my laptop, the benchmark shows huge wins from GGC. The average time for an iteration through the loop drops from 15 nanoseconds (ns) to 6ns with an empty heap, demonstrating the faster Nursery allocation. It also shows the independence from the Tenured heap size: without GGC, populating the long-lived heap slows down the mean time from 15ns to 27ns. With GGC, the speed stays flat at 6ns per iteration; the Tenured heap simply doesn’t matter.

    Note that this benchmark is intended to highlight the improvements possible with GGC. The actual benefit depends heavily on the details of a given script. In some scripts, the time to initialize an object is significant and may exceed the time required to allocate the memory. A higher percentage of Nursery objects may get tenured. When running inside the browser, we force enough major GCs (e.g., after a redraw) that the benefits of GGC are less noticeable.

    Also, the description above implies that we will pause long enough to collect the entire heap, which is not the case — our incremental garbage collector dramatically reduces pause times on many Web workloads already. (The incremental and generational collectors complement each other — each attacks a different part of the problem.)