

  1. Distributed On-the-Fly Image Processing and Open Source at Vimeo

    When you think of Vimeo, you probably think of video — after all, it’s what we do. However, we also have to handle the creation and distribution of a lot of images: thumbnails, user portraits, channel headers, and all the various awesome graphics around Vimeo, to name a few.

    For a very long time, all of this content was static and served from the CDN as-is. If we wanted to introduce a new video resolution, we would have to run batch jobs to generate new, higher-resolution thumbnails from all of the videos on the site, where possible. It also meant that if we ever wanted to tweak the quality of said images, we would be out of luck. And on mobile, or on a high-DPI screen, we had to serve the same size images as on our main site, unless we wanted to store higher- and/or lower-resolution versions of the same images.

    Enter ViewMaster.

    About two years ago, during one of our code jams, one of our mobile site developers brought the issue to us, the transcode team, in search of a backend solution. ViewMaster was born that night, but sat idle for a long time after, due to heavy workloads, before being picked up again a few months ago.

    We’ll go into more detail below, but a quick summary of what ViewMaster is and does:

    • Written in Go and C.
    • Resizes, filters, crops, and encodes (with optimizations such as pngout) to different formats on-the-fly, and entirely in memory.
    • Can be scaled out; each server is ‘dumb’.
    • Reworked thumbnailing that picks one ‘good’ thumbnail per video, during upload, and stores a master for use with the on-the-fly processing later on.
    • Migrates our existing N versions of each image to one high-quality master image to be stored.

    This allows us to:

    • Serve smaller or larger images to different screen types, depending on DPI and OS.
    • Serve optimized images for each browser; e.g. WebP for Chrome, and JPEG-XR for IE11+.
    • Easily introduce new video resolutions and player sizes.
    • Scale thumbnail images to the size of an embed, for display.
    • Introduce new optimizations such as mozjpeg instantly, and without any significant migration problems.

    Now for the slightly more technical bits.

    General Architectural Overview and Migration

    ViewMaster Flow

    A general look at the process is given in the diagram above. If you’d like a more detailed look at the infrastructure and migration strategy (and a higher res diagram), including what some of those funny names mean, head over to the Making Vimeo Blog to check it out!

    Open Source

    The actual image processing happens entirely in memory — the disk is never hit. The main image processing service is written in Go, making somewhat liberal use of its C FFI to call several libraries and a few tiny C routines, open source or otherwise. Calling C functions from Go has a known overhead, but in practice this has been negligible compared to the time taken by the much more intensive operations inside the libraries, such as decoding, encoding, and resizing.

    The process is rather straightforward. For video, the frame is seeked to, decoded, and converted to RGB (yes, JPEG is YCbCr, but it made more sense to us for the master to be stored as RGB); for still images, the source is simply decoded. Various calculations then account for things like non-square pixels, cropping, resizing, and aspect ratios. The image is resized, encoded, and optimized. All of this is done in memory using buffered I/O in Go (via bufio), and, where libraries are not available, piped to an external process and back to the service, as is the case with Gifsicle and pngout.

    Plenty of tricks are used to speed things up, such as detecting the image type and resolution based on the MIME type, libmagic, and the libraries listed below, so we don’t need to call avformat_find_stream_info, which does a full decode of the image to get this information.
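    The header-sniffing trick can be illustrated with a short sketch (hypothetical code, not Vimeo’s actual implementation, and in C++ rather than Go for brevity): the container is identified from its magic bytes, and for PNG the dimensions are read directly out of the IHDR chunk, with no pixel data ever decoded.

```cpp
#include <cstdint>
#include <cstddef>
#include <string>

enum class ImgType { Unknown, Jpeg, Png, Webp };

// Read a big-endian 32-bit integer, as PNG stores its chunk fields.
static uint32_t be32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Identify the container from its magic bytes alone.
ImgType sniff(const uint8_t* buf, size_t n) {
    if (n >= 3 && buf[0] == 0xFF && buf[1] == 0xD8 && buf[2] == 0xFF)
        return ImgType::Jpeg;
    if (n >= 8 && std::string(reinterpret_cast<const char*>(buf), 4) == "\x89PNG")
        return ImgType::Png;
    if (n >= 12 && std::string(reinterpret_cast<const char*>(buf), 4) == "RIFF" &&
        std::string(reinterpret_cast<const char*>(buf) + 8, 4) == "WEBP")
        return ImgType::Webp;
    return ImgType::Unknown;
}

// PNG's IHDR chunk must come first: 8-byte signature, 4-byte length,
// the tag "IHDR", then big-endian width and height.
bool png_size(const uint8_t* buf, size_t n, uint32_t* w, uint32_t* h) {
    if (n < 24 || sniff(buf, n) != ImgType::Png) return false;
    *w = be32(buf + 16);
    *h = be32(buf + 20);
    return true;
}
```

    A streaming parser along these lines (as in go-imgparse) only ever needs the first few dozen bytes of the file, which is why it beats a full probe of the stream.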

    A few of the notable open source libraries we leverage (and contribute to!) include:

    • FFmpeg & Libav – Base image decoding libraries (libavcodec), resizing (swscale), remote image access. Now supports CMYK JPEGs too!
    • FFMS2 – Frame accurate seeking using the above libraries.
    • libwebp – WebP encoding.
    • LCMS2 – ICC profile handling.

    On top of those, we’ve written several Go packages to aid in this as well, some of which we have just open sourced:

    • go-util – General utility functions for Go.
    • go-iccjpeg – ICC profile extraction from a generic io.Reader.
    • go-magic – Idiomatic Go bindings for the libmagic C API using io.Reader.
    • go-imgparse – Resolution extraction from JPEG, PNG, and WebP images optimized for I/O and convenience, again using a standard io.Reader.
    • go-taglog – Extended logging package compatible with the standard Go logging package.
    • go-mediainfo – Very basic binding for MediaInfo.


    Although we are already optimizing quite well for PNG and WebP, there is still lots to be done. To that end, we have been involved with and contributing to a number of open source projects to create a better and faster web experience. A few are discussed below. It may not have been obvious, though, since we tend to use our standard email accounts to contribute, rather than our corporate ones… Sneaky!

    mozjpeg – Very promising already, having added features such as scan optimization, trellis quantization, DHT/DQT table merging, and deringing via overshoot clipping, with future features such as optimized quantization tables for high-DPI displays and globally optimal edge extension. We plan to roll this out after the plan for ABI compatibility is implemented in 3.0, and we then plan to add support to ImageMagick to benefit the greater community, if someone else has not already.

    jxrlib – Awesome of Microsoft to open source this, but it needs a bit of work API-wise (that is, an actual API). Until fairly recently, it could not even be built as a library.

    jpeg-recompress – Alongside mozjpeg, something akin to this is very desirable for JPEG generation. Uses the excellent IQA with mozjpeg, along with some other metrics (one implemented, poorly, by me!).

    Open Source PNG optimization library – This was a bit of a sticking point with us. The current open source PNG optimization utils do not support any in-memory API at all, or in fact, even piping via stdin/stdout. pngout is the only tool which even supports piping. Long term, we’d like to be able to ditch the closed source tool and contribute an API to one of these projects.

    Photoshop, GIMP, etc. plugins – I plan to implement these using the above-mentioned libraries, so designers can more easily reap the benefits of better image compression.

  2. Porting to Emscripten

    Emscripten is an open-source compiler that compiles C/C++ source code into the highly optimizable asm.js subset of JavaScript. This enables running programs originally written for desktop environments in a web browser.

    Porting your game to Emscripten offers several benefits. Most importantly, it enables reaching a far wider potential user base: Emscripten games work on any modern web browser, with no need for installers or setups – the user just opens a web page. Local storage of game data in the browser cache means the game only needs to be re-downloaded after updates. If you implement a cloud-based user data storage system, users can continue their gameplay seamlessly on any computer with a browser.


    While Emscripten support for portable C/C++ code is very good, there are some things that need to be taken into consideration. We will take a look at those in this article.

    Part 1: Preparation

    Is porting my game to Emscripten even feasible? If it is, how easy will it be? First consider the following restrictions imposed by Emscripten:

    • No closed-source third-party libraries
    • No threads

    Then, already having some of the following:

    • Using SDL2 and OpenGL ES 2.0 for graphics
    • Using SDL2 or OpenAL for audio
    • Existing multiplatform support

    will make the porting task easier. We’ll next look into each of these points more closely.

    First things to check

    If you’re using any third-party libraries for which you don’t have the source code, you’re pretty much out of luck. You’ll have to rewrite your code not to use them.

    Heavy use of threads is also going to be a problem since Emscripten doesn’t currently support them. There are web workers but they’re not the same thing as threads on other platforms since there’s no shared memory. So you’ll have to disable multithreading.


    Before even touching Emscripten there are things you can do in your normal development environment. First of all you should use SDL2. SDL is a library which takes care of platform-specific things like creating windows and handling input. An incomplete port of SDL 1.3 ships with Emscripten and there’s a port of full SDL2 in the works. It will be merged to upstream soon.

    Space combat in FTL.

    OpenGL ES 2.0

    The second thing is to use OpenGL ES 2.0. If your game is using the SDL2 render interface, this has already been done for you. If you use Direct3D, you’ll first have to create an OpenGL version of your game. This is why multiplatform support from the beginning is such a good idea.

    Once you have a desktop OpenGL version you then need to create an OpenGL ES version. ES is a subset of full OpenGL where some features are not available and there are some additional restrictions. At least the NVidia driver and probably also AMD support creating ES contexts on desktop. This has the advantage that you can use your existing environment and debugging tools.

    You should avoid the deprecated OpenGL fixed-function pipeline if possible. While Emscripten has some support for this it might not work very well.

    There are certain problems you can run into at this stage. The first is lack of extension support. Shaders might also need rewriting for Emscripten. If you are using NVidia, add a #version line to trigger stricter shader validation.

    GLSL ES requires precision qualifiers for floating-point and integer variables. NVidia accepts these on desktop, but most other GL implementations do not, so you might end up with two different sets of shaders.
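    One common way to keep a single shader source is to guard the qualifiers with the GL_ES preprocessor define, which GLSL ES implementations set automatically. A minimal, illustrative sketch (the shader itself is hypothetical):

```cpp
#include <string>

// A fragment shader shared between desktop GL and GLSL ES builds.
// The precision qualifier is mandatory in GLSL ES; wrapping it in
// #ifdef GL_ES keeps the same source legal in both dialects.
const std::string kFragmentShader = R"(
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_uv;
uniform sampler2D u_tex;
void main() {
    gl_FragColor = texture2D(u_tex, v_uv);
}
)";
```
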

    OpenGL entry point names are different between GL ES and desktop. GL ES does not require a loader such as GLEW but you still might have to check GL extensions manually if you are using any. Also note that OpenGL ES on desktop is more lenient than WebGL. For example WebGL is more strict about glTexImage parameters and glTexParameter sampling modes.

    Multiple render targets might not be supported on GL ES. If you are using a stencil buffer you must also have a depth buffer. You must use vertex buffer objects, not user-mode arrays. Also you cannot mix index and vertex buffers into the same buffer object.

    For audio you should use SDL2 or OpenAL. One potential issue is that the Emscripten OpenAL implementation might require more and larger sound buffers than desktop to avoid choppy sounds.

    Multiplatform support

    It’s good if your project has multiplatform support, especially for mobile platforms (Android, iOS). There are two reasons for this. First, WebGL is essentially OpenGL ES instead of desktop OpenGL so most of your OpenGL work is already done. Second, since mobile platforms use ARM architecture most of the processor-specific problems have already been fixed. Particularly important is memory alignment since Emscripten doesn’t support unaligned loads from memory.

    After you have your OpenGL sorted out (or even concurrently with it, if you have multiple people), you should port your game to Linux and/or OS X. Again, there are several reasons. The first is that Emscripten is based on LLVM and Clang: if your code was written and tested with MSVC, it probably contains non-standard constructs which MSVC will accept but other compilers won’t. Also, a different optimizer might expose bugs, which will be much easier to debug on desktop than in a browser.

    FTL Emscripten version main menu. Notice the missing “Quit” button. The UI is similar to that of the iPad version.

    A good overview of porting a Windows game to Linux is provided in Ryan Gordon’s Steam Dev Days talk.

    If you are using Windows you could also compile with MinGW.

    Useful debugging tools


    The second reason for porting to Linux is to gain access to several useful tools. First among these is the undefined behavior sanitizer (UBSan). It’s a Clang compiler feature which adds runtime checks to catch C/C++ undefined behavior in your code. The most useful of these is the unaligned load check. The C/C++ standard specifies that a pointer must be properly aligned when accessed. Unfortunately, x86-based processors will happily perform unaligned loads, so most existing code has never been checked for this. ARM-based processors will usually crash your program when this happens, which is another reason a mobile port is good. On Emscripten an unaligned load will not crash but instead silently give you incorrect results.

    UBSan is also available in GCC starting with 4.9 but unfortunately the unaligned load sanitizer is only included in the upcoming 5.0 release.


    The second useful tool in Clang (and GCC) is AddressSanitizer. This is a runtime checker which validates your memory accesses. Reading or writing outside allocated buffers can lead to crashes on any platform, but the problem is somewhat worse on Emscripten. Native binaries have a large address space which contains lots of empty space, so an invalid read, especially one that is only slightly off, might hit a valid address and not crash immediately, or at all. On Emscripten the address space is much “denser”, so any invalid access is likely to hit something critical or even fall outside the allocated address space entirely. This will trigger an unspectacular crash that might be very hard to debug.


    The third tool is Valgrind. It is a runtime tool which runs uninstrumented binaries and checks them for various properties. For our purposes the most useful are memcheck and massif. Memcheck is a memory validator like AddressSanitizer but it catches a slightly different set of problems. It can also be used to pinpoint memory leaks. Massif is a memory profiler which can answer the question “why am I using so much memory?” This is useful since Emscripten is also a much more memory-constrained platform than desktop or even mobile and has no built in tools for memory profiling.

    Valgrind also has some other checkers like DRD and Helgrind which check for multithreading issues but since Emscripten doesn’t support threads we won’t discuss them here. They are very useful though so if you do multithreading on desktop you really should be using them.

    Valgrind is not available on Windows and probably will never be. That alone should be a reason to port your games to other platforms.

    Third-party libraries

    Most games use a number of third-party libraries. Hopefully you’ve already gotten rid of any closed-source ones. But even open-source ones are usually shipped as already-compiled libraries. Most of these are not readily available on Emscripten so you will have to compile them yourself. Also the Emscripten object format is based on LLVM bytecode which is not guaranteed to be stable. Any precompiled libraries might no longer work in future versions of Emscripten.

    While Emscripten has some support for dynamic linking it is not complete or well supported and should be avoided.

    The best way around these issues is to build your libraries as part of your standard build process and statically link them. While bundling your libraries into archives and including those in the link step works, you might run into unexpected problems. Also, changing your compiler options becomes easier if all sources are part of your build system.

    Once all that is done, you should actually try to compile with Emscripten. If you’re using MS Visual Studio 2010, there’s an integration module which you can try. If you’re using CMake, Emscripten ships with a wrapper (emcmake) which should automatically configure your build.

    If you’re using some other build system it’s up to you to set it up. Generally CC=emcc and CXX=em++ should do the trick. You might also have to remove platform-specific options like SSE and such.

    Part 2: Emscripten itself

    So now it links, but when you load it up in your browser it just hangs, and after a while the browser will tell you the script has hung and kill it.

    What went wrong?

    On desktop, games have an event loop which polls input, simulates state, and draws the scene, running until terminated. In a browser, there is instead a callback which does these things and is called by the browser. So to get your game to work, you have to refactor your loop into a callback. In Emscripten this is set with the function emscripten_set_main_loop. Fortunately, in most cases this is pretty simple: refactor the body of your loop into a helper function, then call it in a loop in your desktop version and set it as your callback in the browser. Or, if you’re using C++11, you can use a lambda, store it in a std::function, and add a small wrapper which calls it.
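    A minimal sketch of that refactor might look like the following (the game state and quit condition are illustrative stand-ins, not code from any real engine):

```cpp
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#endif

// Hypothetical per-frame step: in a real port this body would poll
// input, advance the simulation, and draw the scene.
struct Game { int frame = 0; bool running = true; };
static Game g_game;

void main_loop_iteration() {
    ++g_game.frame;
    if (g_game.frame >= 3)      // stand-in for a quit event
        g_game.running = false;
}

// The same callback drives both builds: the browser calls it via
// emscripten_set_main_loop, the desktop build calls it from a plain loop.
void run_game() {
#ifdef __EMSCRIPTEN__
    // fps = 0 lets the browser pick the frame rate (requestAnimationFrame);
    // the final 1 simulates an infinite loop, so this call never returns.
    emscripten_set_main_loop(main_loop_iteration, 0, 1);
#else
    while (g_game.running)
        main_loop_iteration();
#endif
}
```
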

    Problems appear if you have multiple separate loops, for example loading screens. In that case you need to either refactor them into a single loop or call them one after another, setting a new one and canceling the previous one with emscripten_cancel_main_loop. Both of these are pretty complex and depend heavily on your code.
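    One possible shape for merging separate loops is a small state machine dispatched from the single callback; the following is an illustrative sketch, with made-up screens and counters standing in for real loading and gameplay logic:

```cpp
// Each enumerator below corresponds to what used to be its own
// blocking loop: loading screen, menu, and gameplay.
enum class Screen { Loading, Menu, Playing, Done };

struct App {
    Screen screen = Screen::Loading;
    int loaded = 0, ticks = 0;
};

// Called once per frame by the browser (or the desktop loop); the
// switch replaces the separate while-loops of the desktop version.
void app_step(App& app) {
    switch (app.screen) {
    case Screen::Loading:
        // Formerly: while (!all_loaded()) { load_next(); draw_bar(); }
        if (++app.loaded >= 2) app.screen = Screen::Menu;
        break;
    case Screen::Menu:
        // Formerly its own event loop; here one tick starts the game.
        app.screen = Screen::Playing;
        break;
    case Screen::Playing:
        if (++app.ticks >= 5) app.screen = Screen::Done;
        break;
    case Screen::Done:
        break;
    }
}
```
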

    So, now the game runs but you get a bunch of error messages that your assets can’t be found. The next step is to add your assets to the package. The simple way is to preload them. Adding the switch --preload-file <filename> to link flags will cause Emscripten to add the specified files to a .data file which will then be preloaded before main is called. These files can then be accessed with standard C/C++ IO calls. Emscripten will take care of the necessary magic.

    However this approach becomes problematic when you have lots of assets. The whole package needs to be loaded before the program starts which can lead to excessive loading times. To fix this you can stream in some assets like music or video.

    If you already have async loading in your desktop code you can reuse that. Emscripten has the function emscripten_async_wget_data for loading data asynchronously. One difference to keep in mind is that Emscripten async calls only know the asset size after loading has completed, whereas on desktop it is generally known as soon as the file has been opened. For optimal results you should refactor your code to something like “load this file, then here’s an operation to do after you have it”. C++11 lambdas can be useful here. In any case you really should have matching code in the desktop version, because debugging is so much easier there.
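    The “load this file, then do this” shape might be sketched like so. Everything here is hypothetical: the in-memory backing store stands in for real file or network access, and the Emscripten path is indicated only as a comment, so both builds share the same callback-driven call site:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Callback invoked once the asset bytes are available; only at that
// point is the size known, matching the Emscripten behavior.
using LoadCallback = std::function<void(const std::vector<char>&)>;

struct AssetLoader {
    // Fake backing store standing in for disk or network.
    std::map<std::string, std::vector<char>> files;

    void load_async(const std::string& name, LoadCallback on_done) {
#ifdef __EMSCRIPTEN__
        // In a web build this would forward to emscripten_async_wget_data
        // and invoke on_done from its completion callback.
#endif
        // Desktop stand-in: "complete" immediately, but through the same
        // callback, so both builds exercise identical game code.
        on_done(files[name]);
    }
};
```
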

    You should add a call at the end of your main loop which handles async loads. You should not load too much stuff asynchronously as it can be slow, especially if you’re loading multiple small files.

    So now it runs for a while, but crashes with a message about an exceeded memory limit. Since Emscripten emulates memory with JavaScript arrays, the size of those arrays is crucial. By default they are pretty small and can’t grow. You can enable growing them by linking with -s ALLOW_MEMORY_GROWTH=1, but this is slow and might disable asm.js optimizations. It’s mostly useful in the debugging phase; for the final release you should find out a memory limit that works and use -s TOTAL_MEMORY=<number>.

    As described above, Emscripten doesn’t have a memory profiler. Use Valgrind massif tool on Linux to find out where the memory is spent.

    If your game is still crashing, you can try using the JavaScript debugger and source maps, but they don’t necessarily work very well. This is why sanitizers are important. printf or other logging is a good way to debug too. Also, -s SAFE_HEAP=1 in the link stage can find some memory bugs.

    Osmos test version on Emscripten test html page.

    Saves and preferences

    Saving stuff is not as simple as on desktop. The first thing you should do is find all the places where you’re saving or loading user-generated data. All of it should be in one place or go through one wrapper. If it doesn’t you should refactor it on desktop before continuing.

    The simplest thing is to set up local storage. Emscripten already has the necessary code to do it and emulates a standard C-like filesystem interface, so you don’t have to change anything.

    You should add something like this either to preRun in the HTML or as the first thing in your main:

    FS.createFolder('/', 'user_data', true, true);
    FS.mount(IDBFS, {}, '/user_data');
    FS.syncfs(true, function(err) {
        if (err) console.log('ERROR!', err);
        console.log('finished syncing..');
    });

    Then after you’ve written a file you need to tell the browser to sync it. Add a new method which contains something like this:

    static void userdata_sync()
    {
        EM_ASM(
            FS.syncfs(function(error) {
                if (error) {
                    console.log("Error while syncing", error);
                }
            });
        );
    }

    and call it after closing the file.

    While this works, it has the problem that the files are stored locally. For desktop games this is not a problem, since users understand that saves are stored on their computer. For web-based games, users expect their saves to be there on all computers. For the Mozilla Bundle, Humble Bundle built a CLOUDFS library which works just like Emscripten’s IDBFS and has a pluggable backend. You need to build your own using the Emscripten GET and POST APIs.

    Osmos demo at the Humble Mozilla Bundle page.

    Making it fast

    So now your game runs but not very fast. How to make it faster?

    On Firefox, the first thing to check is that asm.js is enabled. Open the web console and look for the message “Successfully compiled asm.js”. If it’s not there, the error message should tell you what’s going wrong.

    The next thing to check is your optimization level. Emscripten requires a proper -O option both when compiling and linking. It’s easy to forget -O at the link stage, since desktop doesn’t usually require it. Test the different optimization levels and read the Emscripten documentation about other build flags. In particular, OUTLINING_LIMIT and PRECISE_F32 might affect code speed.

    You can also enable link-time optimization by adding --llvm-lto <n> option. But beware that this has known bugs which might cause incorrect code generation and will only be fixed when Emscripten is upgraded to a newer LLVM sometime in the future. You might also run into bugs in the normal optimizer since Emscripten is still somewhat work-in-progress. So test your code carefully and if you run into any bugs report them to Emscripten developers.

    One strange feature of Emscripten is that any preloaded resources will be parsed by the browser. We usually don’t want this since we’re not using the browser to display them. Disable this by adding the following code as --pre-js:

    var Module;
    if (!Module) Module = (typeof Module !== 'undefined' ? Module : null) || {};
    // Disable image and audio decoding
    Module.noImageDecoding = true;
    Module.noAudioDecoding = true;

    Next thing: don’t guess where the time is being spent, profile! Compile your code with the --profiling option (both compile and link stage) so the compiler will emit named symbols. Then use the browser’s built-in JavaScript profiler to see which parts are slow. Beware that some versions of Firefox can’t profile asm.js code, so you will either have to upgrade your browser or temporarily disable asm.js by manually removing the ‘use asm’ statement from the generated JavaScript. You should also profile with both Firefox and Chrome, since they have different performance characteristics and their profilers work slightly differently. In particular, Firefox might not account for slow OpenGL functions.

    Things like glGetError and glCheckFramebuffer which are slow on desktop can be catastrophic in a browser. Also calling glBufferData or glBufferSubData too many times can be very slow. You should refactor your code to avoid them or do as much with one call as possible.

    Another thing to note is that scripting languages used by your game can be very slow. There’s really no easy way around this one. If your language provides profiling facilities you can use those to try to speed it up. The other option is to replace your scripts with native code which will get compiled to asm.js.

    If you’re doing physics simulation or something else that can take advantage of SSE optimizations you should be aware that currently asm.js doesn’t support it but it should be coming sometime soon.

    To save some space in the final build, you should also go through your code and third-party libraries and disable all features you don’t actually use. In particular, libraries like SDL2 and FreeType contain lots of stuff which most programs don’t use. Check the libraries’ documentation on how to disable unused features. Emscripten doesn’t currently have a way to find out which parts of the code are the largest, but if you have a Linux build (again, you should) you can use

    nm -S --size-sort game.bin

    to see this. Just be aware that what’s large on Emscripten and what’s large on native might not be the same thing, though in general they agree pretty well.

    Sweeping autumn leaves in Dustforce.

    In conclusion

    To sum up, porting an existing game to Emscripten consists of removing any closed-source third-party libraries and threading, using SDL2 for window management and input, OpenGL ES for graphics, and OpenAL or SDL2 for audio. You should also first port your game to other platforms, such as OS X and mobile, or at least to Linux. This makes finding potential issues easier and gives access to several useful debugging tools. The Emscripten port itself minimally requires changes to the main loop, asset file handling, and user data storage. You also need to pay special attention to optimizing your code to run in a browser.

  3. Massive: The asm.js Benchmark

    asm.js is a subset of JavaScript that is very easy to optimize. Most often it is generated by a compiler, such as Emscripten, from C or C++ code. The result can run at very high speeds, close to that of the same code compiled natively. For that reason, Emscripten and asm.js are useful for things like 3D game engines, which are usually large and complex C++ codebases that need to be fast, and indeed top companies in the game industry have adopted this approach, for example Unity and Epic, and you can see it in action in the Humble Mozilla Bundle, which recently ran.

    As asm.js code becomes more common, it is important to be able to measure performance on it. There are of course plenty of existing benchmarks, including Octane which contains one asm.js test, and JetStream which contains several. However, even those do not contain very large code samples, and massive codebases are challenging in particular ways. For example, just loading a page with such a script can take significant time while the browser parses it, causing a pause that is annoying to the user.

    A recent benchmark from Unity measures the performance of their game engine, which (when ported to the web) is a large asm.js codebase. Given the high popularity of the Unity engine among developers, this is an excellent benchmark for game performance in browsers, as real-world as it can get, and also it tests large-scale asm.js. It does however focus on game performance as a whole, taking into account both WebGL and JavaScript execution speed. For games, that overall result is often what you care about, but it is also interesting to measure asm.js on its own.

    Benchmarking asm.js specifically

    Massive is a benchmark that measures asm.js performance specifically. It contains several large, real-world codebases: Poppler, SQLite, Lua and Box2D; see the FAQ on the massive site for more details on each of those.

    Massive reports an overall score, summarizing its individual measurements. This score can help browser vendors track their performance over time and point to areas where improvements are needed, and for developers it can provide a simple way to get an idea of how fast asm.js execution is on a particular device and browser.

    Importantly, Massive does not only test throughput. As already mentioned, large codebases can affect startup time, and they can also affect responsiveness and other important aspects of the user experience. Massive therefore tests, in addition to throughput, how long it takes the browser to load a large codebase, and how responsive it is while doing so. It also tests how consistent performance is. Once again, see the FAQ for more details on each of those.

    Massive has been developed openly on github from day one, and we’ve solicited and received feedback from many relevant parties. Over the last few months Massive development has been in beta while we received comments, and there are currently no substantial outstanding issues, so we are ready to announce the first stable version, Massive 1.0.

    Massive tests multiple aspects of performance, in new ways, so it is possible something is not being measured in an optimal manner, and of course bugs always exist in software. However, by developing Massive in the open and thereby giving everyone the chance to inspect it and report issues, and by having a lengthy beta period, we believe we have the best possible chance of a reliable result. Of course, if you do find something wrong, please file an issue! General feedback is of course always welcome as well.

    Massive performance over time

    Massive is brand-new, but it is still interesting to look at how it performs on older browsers (“retroactively”), because if it measures something useful, and if browsers are moving in the right direction, then we should see Massive improve over time, even on browser versions that were released long before Massive existed. The graph below shows Firefox performance from version 14 (released 2012-07-17, over 2 years ago) to version 32 (which became the stable version in September 2014):

    Higher numbers are better, so we can indeed see that Massive scores do follow the expected pattern of improvement, with Firefox’s Massive score rising to around 6x its starting point 2 years ago. Note that the Massive score is not “linear” in the sense that 6x the score means 6x the performance, as it is calculated using the geometric mean (like Octane), however, the individual scores it averages are mostly linear. A 6x improvement therefore does represent a very large and significant speedup.
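    For concreteness, a geometric mean (the averaging used by both Massive and Octane) is the nth root of the product of the individual scores, conventionally computed via logarithms for numerical stability. A small sketch:

```cpp
#include <cmath>
#include <vector>

// Geometric mean of benchmark scores: exp(mean(log(s))).
// Because scores are combined this way, a 6x change in the overall
// score does not imply a 6x change on every individual test.
double geometric_mean(const std::vector<double>& scores) {
    double log_sum = 0.0;
    for (double s : scores) log_sum += std::log(s);
    return std::exp(log_sum / static_cast<double>(scores.size()));
}
```
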

    Looking more closely at the changes over time, we can see which features landed in each of the Firefox versions that show a significant improvement:

    There are three big jumps in Firefox’s Massive score, each annotated:

    • Firefox 22 introduced OdinMonkey, an optimization module for asm.js code. By specifically optimizing asm.js content, it almost doubled Firefox’s Massive score. (At the time, of course, Massive didn’t exist; but we measured speedups on other benchmarks.)
    • Firefox 26 parses async scripts off of the main thread. This avoids the browser or page becoming nonresponsive while the script loads. For asm.js content, not only parsing but also compilation happens in the background, making the user experience even smoother. Also in Firefox 26 are general optimizations for float32 operations, which appear in one of the Massive tests.
    • Firefox 29 caches asm.js code: The second time you visit the same site, previously-compiled asm.js code will just be loaded from disk, avoiding any compilation pause at all. Another speedup in this version is that the previous float32 optimizations are fully optimized in asm.js code as well.

    Large codebases, and why we need a new benchmark

    Each of those features is expected to improve asm.js performance, so it makes sense to see large speedups there. So far, everything looks pretty much as we would expect. However, a fourth milestone is noted on that graph, and it doesn’t cause any speedup. That feature is IonMonkey, which landed in Firefox 18. IonMonkey was a new optimizing compiler for Firefox, and it provided very large speedups on most common browser benchmarks. Why, then, doesn’t it show any benefit in Massive?

    IonMonkey does help very significantly on small asm.js codebases. But in its original release in Firefox 18 (see more details in the P.S. below), IonMonkey did not do well on very large ones – as a complex optimizing compiler, compilation time is not necessarily linear, which means that large scripts can take very large amounts of time to compile. IonMonkey therefore included a script size limit – over a certain size, IonMonkey simply never kicks in. This explains why Massive does not improve on Firefox 18, when IonMonkey landed – Massive contains very large codebases, and IonMonkey at the time could not actually run on them.

    That shows exactly why a benchmark like Massive is necessary, as other benchmarks did show speedups upon IonMonkey’s launch. In other words, Massive is measuring something that other benchmarks do not. And that thing – large asm.js codebases – is becoming more and more important.

    (P.S. IonMonkey’s script size limit prevented large codebases from being optimized when IonMonkey originally launched, but that limit has been relaxed over time, and practically does not exist today. This is possible through compilation on a background thread, interruptible compilation, and just straightforward improvements to compilation speed, all of which make it feasible to compile larger and larger functions. Exciting general improvements to JavaScript engines are constantly happening across the board!)

  4. Introducing SIMD.js

    SIMD stands for Single Instruction Multiple Data, and is the name for performing operations on multiple data elements together. For example, a SIMD add instruction can add multiple values, in parallel. SIMD is a very popular technique for accelerating computations in graphics, audio, codecs, physics simulation, cryptography, and many other domains.

    In addition to delivering performance, SIMD also reduces power usage, as it uses fewer instructions to do the same amount of work.


    SIMD.js is a new API being developed by Intel, Google, and Mozilla for JavaScript which introduces several new types and functions for doing SIMD computations. For example, the Float32x4 type represents 4 float32 values packed up together. The API contains functions to operate on those values together, including all the basic arithmetic operations, and operations to rearrange, load, and store such values. The intent is for browsers to implement this API directly, and provide optimized implementations that make use of SIMD instructions in the underlying hardware.
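    To make the lane-wise semantics concrete, here is a plain-JS emulation sketch. It only mimics what a Float32x4 addition computes; the helper names are mine, not the API’s, and of course none of the hardware acceleration is present:

    ```javascript
    // Emulation sketch of Float32x4 semantics (helper names are made up;
    // the real API exposes SIMD.Float32x4 and functions such as
    // SIMD.Float32x4.add, backed by hardware SIMD instructions).
    function float32x4(x, y, z, w) {
      // Each lane is rounded to float32 precision, as the real type requires.
      return [Math.fround(x), Math.fround(y), Math.fround(z), Math.fround(w)];
    }

    function float32x4Add(a, b) {
      // Conceptually a single instruction: all four lanes add in parallel.
      return float32x4(a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]);
    }

    const sum = float32x4Add(float32x4(1, 2, 3, 4), float32x4(5, 6, 7, 8));
    console.log(sum); // [6, 8, 10, 12]
    ```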

    The focus is currently on supporting both x86 platforms with SSE and ARM platforms with NEON. We’re also interested in the possibility of supporting other platforms, potentially including MIPS, Power, and others.

    SIMD.js was originally derived from the Dart SIMD specification, and it is rapidly evolving into a more general API that covers additional use cases, such as those requiring narrower integer types (including Int8x16 and Int16x8) and saturating operations.

    SIMD.js is a fairly low-level API, and it is expected that libraries will be written on top of it to expose higher-level functionality such as matrix operations, transcendental functions, and more.

    In addition to being usable in regular JS, work is also underway to add SIMD.js to asm.js, so that it can be used from asm.js programs such as those produced by Emscripten. In Emscripten, SIMD can be achieved through the built-in autovectorization, the generic SIMD extensions, or the new (and still growing) Emscripten-specific API. Emscripten will also implement subsets of popular headers such as <xmmintrin.h> as wrappers around the SIMD.js APIs, as an additional way to ease porting SIMD code in some situations.

    SIMD.js Today

    The SIMD.js API itself is in active development. The ecmascript_simd GitHub repository currently serves as a provisional specification and provides a polyfill that implements the functionality (though of course not the accelerated performance) of the SIMD API on existing browsers. It also includes some benchmarks, which double as examples of basic SIMD.js usage.

    To see SIMD.js in action, check out the demo page accompanying the IDF2014 talk on SIMD.js.

    The API has been presented to TC-39, which has approved it for stage 1 (Proposal). Work is proceeding in preparation for subsequent stages, which will involve proposing something closer to a finalized API.

    SIMD.js implementation in Firefox Nightly is in active development. Internet Explorer has listed SIMD.js as “under consideration”. There is also a prototype implementation in a branch of Chromium.

    Short SIMD and Long SIMD

    One of the uses of SIMD is to accelerate processing of large arrays of data. If you have an array of N elements, and you want to do roughly the same thing to every element in the array, you can divide N by whatever SIMD size the platform makes available and run that many instances of your SIMD subroutine. Since N can be very large, I call these kinds of problems long SIMD problems.
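    The long-SIMD pattern just described can be sketched in plain JS. The inner scalar loop stands in for one SIMD operation; a real implementation would replace it with a 4-wide vector multiply:

    ```javascript
    // Long-SIMD sketch: walk an N-element array in WIDTH-sized chunks
    // (4 matches a 128-bit float32 vector), then handle the N % WIDTH
    // remainder with plain scalar code.
    const WIDTH = 4;

    function scaleArray(data, factor) {
      const out = new Float32Array(data.length);
      let i = 0;
      // Main loop: each iteration stands in for one SIMD multiply
      // applied to WIDTH elements at once.
      for (; i + WIDTH <= data.length; i += WIDTH) {
        for (let lane = 0; lane < WIDTH; lane += 1) {
          out[i + lane] = data[i + lane] * factor;
        }
      }
      // Scalar tail for the leftover elements.
      for (; i < data.length; i += 1) {
        out[i] = data[i] * factor;
      }
      return out;
    }
    ```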

    Another use of SIMD is to accelerate processing of clusters of data. RGB or RGBA pixels, XYZW coordinates, or 4×4 matrices are all examples of such clusters, and I call problems which are expressed in these kinds of types short SIMD problems.

    SIMD is a broad domain, and the boundary between short and long SIMD isn’t always clear, but at a high level, the two styles are quite different. Even the terminology used to describe them features a split: In the short SIMD world, the operation which copies a scalar value into every element of a vector value is called a “splat”, while in the long vector world the analogous operation is called a “broadcast”.

    SIMD.js is primarily a “short” style API, and is well suited for short SIMD problems. SIMD.js can also be used for long SIMD problems, and it will still deliver significant speedups over plain scalar code. However, its fixed-length types won’t achieve maximum performance on some of today’s CPUs, so there is still room for another solution that takes advantage of that available performance.

    Portability and Performance

    There is a natural tension in many parts of SIMD.js between the desire to have an API which runs consistently across all important platforms, and the desire to have the API run as fast as possible on each individual platform.

    Fortunately, there is a core set of operations which are very consistent across a wide variety of platforms. These operations include most of the basic arithmetic operations, and they form the core of SIMD.js. For this set, little to no overhead is incurred because most of the corresponding SIMD API operations map directly to individual instructions.

    But there are also many operations that perform well on one platform and poorly on others. These can lead to surprising performance cliffs. The current approach of the SIMD.js API is to focus on the things that can be done well with as few performance cliffs as possible. It is also focused on providing portable behavior. In combination, the aim is to ensure that a program which runs well on one platform will likely run, and run well, on another.

    In future iterations of SIMD.js, we expect to expand the scope and include more capabilities as well as mechanisms for querying capabilities of the underlying platform. Similar to WebGL, this will allow programs to determine what capabilities are available to them so they can decide whether to fall back to more conservative code, or disable optional functionality.
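    A sketch of the query-and-fall-back pattern described above, in the spirit of WebGL capability checks. The detection spelling is an assumption on my part, since the feature-query mechanism is still being designed:

    ```javascript
    // Capability-query sketch: use the accelerated path if the SIMD
    // global exists, otherwise fall back to conservative scalar code.
    // The detection spelling here is an assumption, not a finalized API.
    function sumSquares(values) {
      const hasSimd = typeof SIMD !== 'undefined' && !!SIMD.Float32x4;
      if (hasSimd) {
        // An accelerated Float32x4 path would go here; this sketch
        // deliberately falls through to the portable code below.
      }
      let total = 0;
      for (let i = 0; i < values.length; i += 1) {
        total += values[i] * values[i];
      }
      return total;
    }
    ```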

    The overall vision

    SIMD.js will accelerate a wide range of demanding web applications today, including games, video and audio manipulation, scientific simulations, and more. Applications will be able to use the SIMD.js API directly, libraries will be able to use SIMD.js to expose higher-level interfaces that applications can use, and Emscripten will compile C++ with popular SIMD idioms into optimized SIMD.js code.

    Looking forward, SIMD.js will continue to grow, to provide broader functionality. We hope to eventually accompany SIMD.js with a long-SIMD-style API as well, in which the two APIs can cooperate in a manner very similar to the way that OpenCL combines explicit vector types with the implicit long-vector parallelism of the underlying programming model.

  5. SVG & colors in OpenType fonts

    Sample of a colorfont


    Until recently, having more than one color in a glyph of a vector font was technically impossible: getting a polychrome letter required multiplying the content for every color. As with many other techniques before, it took some time for digital type to overcome the constraints of the old technique. When printing with wood or lead type, the limitation of one color per glyph is inherent (if you don’t count random gradients). More than one color per letter required separate fonts for the differently colored parts and a new print run for every color. This has been done beautifully, and pictures of some magnificent examples are available online. Using overprinting, the impression of three colors can be achieved with just two.

    Overprinting colors
    Simulation of two overprinting colors resulting in a third.

    Digital font formats kept the limitation of one ‘surface’ per glyph. A glyph can contain several outlines, but when the font is used to set type, the assigned color applies to all of them. Analogous to letterpress, the content needs to be duplicated and superimposed to get more than one color per glyph. Multiplying content is not an elegant solution, and it is a constant source of errors.

    It took the rise of emoji for the demand for multi-colored fonts to grow big enough to develop additional tables that store this information within OpenType fonts. As of this writing there are several different ways to implement this; Adam Twardoch compares all proposed solutions in great detail on the FontLab blog.

    To me the Adobe/Mozilla way looks the most intriguing.

    Upon its proposal it was discussed by a W3C community group and published as a stable document. The basic idea is to store the colored glyphs as SVGs in the OpenType font. Depending on the complexity of your typeface, SVGs should usually result in a smaller file size than PNGs. With the spread of high-resolution screens, vectors also seem to be a better solution than pixels. The possibility to animate the SVGs is an interesting addition and will surely be used in interesting (and very annoying) ways. BLING BLING.


    I am not a font technician or a web developer, just very curious about these new developments. There might be other ways, but this is how I managed to build colorful OpenType fonts.

    In order to make your own, you will need a font editor. There are several options, like RoboFont and Glyphs (both Mac only), FontLab, and the free FontForge. RoboFont is the editor of my choice, since it is highly customizable and you can build your own extensions with Python. In a new font, I added as many new layers as the number of colors I wanted in the final font. Either draw in the separate layers right away, or copy the outlines into the respective layer after drawing them in the foreground layer. With the very handy Layer Preview extension you can preview all layers overlapping. You can also just increase the size of the thumbnails in the font window; at some point they will show all layers. Adjust the colors to your liking in the Inspector, since they are used for the preview.

    RoboFont Inspector
    Define the colors you want to see in the Layer Preview

    A separated letter
    Layer preview
    The outlines of the separate layers and their combination

    When you are done drawing your outlines, you will need to save a UFO for every layer / color. I used a little Python script to save them in the same place as the main file:

    f = CurrentFont()
    path = f.path
    for layer in f.layerOrder:
        # build a new single-layer font for this color layer
        newFont = RFont(showUI=False)
        for g in f:
            orig = g.getLayer(layer)
            newFont.newGlyph(g.name)
            newFont[g.name].appendGlyph(orig)
            newFont[g.name].width = orig.width
        # save next to the main file, e.g. MyFont_layername.ufo
        newFont.save(path[:-4] + "_%s" % layer + ".ufo")
    print "Done Splitting"

    Once I had all my separate ufos I loaded them into TransType from FontLab. Just drop your ufos in the main window and select the ones you want to combine. In the Effect menu click ‘Overlay Fonts …’. You get a preview window where you can assign a rgba value for each ufo and then hit OK. Select the newly added font in the collection and export it as OpenType (ttf). You will get a folder with all colorfont versions.

    The preview of your colorfont in TransType.


    In case you don’t want to use TransType, you might have a look at the very powerful RoboFont extension by Jens Kutílek called RoboChrome. You will need a separate version of your base glyph for every color, which can also be done with a script if you have all of your outlines in layers.

    f = CurrentFont()
    selection = f.selection
    for l, layer in enumerate(f.layerOrder):
        for g in selection:
            name = g + ".layer%d" % l
            # make a numbered copy of the glyph for this layer's color
            f.newGlyph(name)
            l_glyph = f[g].getLayer(layer)
            f[name].appendGlyph(l_glyph)
            f[name].width = f[g].width
            f[name].mark = (.2, .2, .2, .2)
    print "Done with the Division"

    For RoboChrome you will need to split your glyph into several.


    You can also modify the SVG table of a compiled font, or insert your own if it does not have one yet. To do so I used the very helpful fonttools by Just van Rossum. Generate an OTF or TTF with the font editor of your choice. Open the Terminal and type ttx (on Mac OS, with fonttools installed). Drop the font file into the Terminal window and hit return. Fonttools will convert your font into an XML file (YourFontName.ttx) in the same folder. This file can then be opened, modified, and recompiled into an OTF or TTF.

    This can be quite helpful to streamline the SVG compiled by a program and thereby reduce the file size. I rewrote the SVG of a 1.6 MB font to get it down to 980 KB; for a webfont that makes quite a difference. If you want to add your own SVG table to a font that does not have one yet, you might read a bit about the required header information. The endGlyphID and startGlyphID for the glyph you want to supply with SVG data can be found in the <GlyphOrder> table.

    <svgDoc endGlyphID="18" startGlyphID="18">
        <!-- here goes your svg -->
    </svgDoc>
    <svgDoc endGlyphID="19" startGlyphID="19">...</svgDoc>
    <svgDoc endGlyphID="20" startGlyphID="20">...</svgDoc>

    One thing to keep in mind are the two different coordinate systems: contrary to a digital font, SVG has a y-down axis. So you either have to draw in the negative space, or you draw reversed and then mirror everything with a transform such as scale(1, -1).

    Y-axis comparison
    While typefaces usually have a y-up axis SVG uses y-down.


    Now if you really want to pimp your fonts you should add some unnecessary animation to annoy everybody. Just insert it between the opening and closing tags of whatever you want to modify. Here is an example of a circle changing its fill-opacity from zero to 100% over a duration of 500ms in a loop.

    <animate attributeName="fill-opacity"
             values="0; 1"
             dur="500ms"
             repeatCount="indefinite" />


    Technically these fonts should work in any application that handles OTFs or TTFs, but as of this writing only Firefox renders the SVG. If the rendering is not supported, the application will just use the regular glyph outlines as a fallback. So if you have your font(s) ready, it’s time to write some CSS and HTML to test and display them on a website.

    The @font-face

    @font-face {
        font-family: "Colors-Yes"; /* reference name */
        src: url('./fonts/Name_of_your_font.ttf');
        font-weight: 400; /* or whatever applies */
        font-style: normal; /* or whatever applies */
        text-rendering: optimizeLegibility; /* maybe */
    }

    The basic css

    .color_font { font-family: "Colors-Yes"; }

    The HTML

    <p class="color_font">Shiny polychromatic text</p>


    As of this writing (October 2014) the format is supported by Firefox (26+) only. Since this was initiated by Adobe and Mozilla there might be a broader support in the future.

    While using SVG has the advantages of reasonably small files and content that does not have to be multiplied, it brings one major drawback: since the colors are ‘hard-coded’ into the font, there is no way to access them with CSS. Hopefully this might change with the implementation of a <COLR/CPAL> table.

    There is a bug that keeps animations from being played in Firefox 32. While animations are rendered in the current version (33) this might change for obvious reasons.

    Depending on how you establish your SVG table, it might blow up and result in fairly big files. Be aware of that in case you use them to render the most crucial content of your websites.


    Links, Credits & Thanks

    Thanks Erik, Frederik, Just and Tal for making great tools!

  6. The Visibility Monitor supported by Gaia

    With booming demand for ultra-low-price devices, we have to budget every resource of the device more carefully: CPU, RAM, and Flash. Here I want to introduce the Visibility Monitor, which has existed in Gaia for a long time.


    The Visibility Monitor originated in Gaia’s Gallery app and first appeared in Bug 809782 (gallery crashes if too many images are available on sdcard). It solves the memory shortage caused by loading too many images in the Gallery app. Some time later, the Tag Visibility Monitor, the “brother” of the Visibility Monitor, was born. Their functionality is almost the same, except that the Tag Visibility Monitor filters the elements to monitor by pre-assigned tag names. We are going to use the Tag Visibility Monitor as the example in the following sections; of course, everything also applies to the Visibility Monitor.

    For your information, the Visibility Monitor was written by JavaScript master David Flanagan, the author of JavaScript: The Definitive Guide, who works at Mozilla.

    Working Principle

    Basically, the Visibility Monitor removes the images that are outside of the visible screen from the DOM tree, so Gecko has the chance to release the image memory which is temporarily used by the image loader/decoder.

    You may ask: “This can be done in Gecko. Why do it in Gaia?” In fact, Gecko enables a similar mechanism by default; however, it only discards the decoded image buffers (the output of the image decoder), while the original compressed images, fetched by the image loader from the Internet or the local file system, remain in memory. The Visibility Monitor in Gaia removes off-screen images from the DOM tree entirely, so even the original data temporarily held by the image loader can be released. This is extremely important for Tarako, the codename of the Firefox OS low-end device project, which has only 128 MB of memory.

    Taking the graphic above as an example, we can divide the whole image into:

    • display port
    • pre-rendered area
    • margin
    • all other area

    When the display port moves up and down, the Visibility Monitor dynamically loads the pre-rendered area. Images outside the pre-rendered area are neither loaded nor decompressed. The Visibility Monitor treats the margin area as a dynamically adjustable parameter:

    • The higher the margin value, the bigger the part of the image Gecko has to pre-render, which leads to more memory usage but smoother scrolling (higher FPS).
    • Vice versa: the lower the margin, the smaller the part Gecko has to pre-render, which leads to less memory usage but less smooth scrolling (lower FPS).

    Because of this working principle, we can adjust the parameters and image quality to match our demands.


    It’s impossible to “have your cake and eat it too”, and likewise impossible to use the Visibility Monitor without being affected by its constraints. The prerequisites for using the Visibility Monitor are listed below:

    The monitored HTML DOM Elements are arranged from top to bottom

    The original layout of the Web is from top to bottom, but we may change the layout from bottom to top with some CSS options, such as flex-flow. Supporting such layouts would make the Visibility Monitor more complex and lower the FPS (a result we do not like), so this kind of layout is not acceptable for the Visibility Monitor. When someone uses such a layout, the Visibility Monitor shows nothing in the areas where it should display images and reports errors instead.

    The monitored HTML DOM Elements cannot be absolutely positioned

    The Visibility Monitor calculates the height of each HTML DOM Element to decide whether to display it or not. When an element is fixed at a certain location, the calculation becomes more complex, which is unacceptable. With this kind of arrangement, the Visibility Monitor shows nothing in the area where it should display images and sends an error message.

    The monitored HTML DOM Elements should not dynamically change their position through JavaScript

    Similar to absolute positioning, dynamically changing an HTML DOM Element’s location makes the calculation more complex; both are unacceptable. With this kind of arrangement, the Visibility Monitor shows nothing in the area.

    The monitored HTML DOM Elements cannot be resized or be hidden, but they can have different sizes

    The Visibility Monitor uses a MutationObserver to monitor the addition and removal of HTML DOM Elements, but not the appearance, disappearance, or resizing of an element. With this kind of arrangement, the Visibility Monitor again shows nothing.

    The container which runs monitoring cannot use position: static

    Because the Visibility Monitor uses offsetTop to calculate the location of the display port, the container cannot use position: static. We recommend using position: relative instead.

    The container which runs monitoring can only be resized by the resizing window

    The Visibility Monitor uses the window.onresize event to decide whether to recalculate the pre-rendered area. So every change of the container’s size must be accompanied by a resize event.

    Tag Visibility Monitor API

    The Visibility Monitor API is very simple and has only one function:

    function monitorTagVisibility(
        container,
        tag,
        scrollMargin,
        scrollDelta,
        onscreenCallback,
        offscreenCallback
    )

    The parameters it accepts are defined as follows:

    1. container: a real HTML DOM Element that the user scrolls. It doesn’t necessarily have to be the direct parent of the monitored elements, but it has to be one of their ancestors
    2. tag: a string to represent the element name which is going to be monitored
    3. scrollMargin: a number to define the margin size out of the display port
    4. scrollDelta: a number defining how many pixels must be scrolled before a new pre-rendered area is calculated
    5. onscreenCallback: a callback function that will be called after an HTML DOM Element moves into the pre-rendered area
    6. offscreenCallback: a callback function that will be called after an HTML DOM Element moves out of the pre-rendered area

    Note: the “move into” and “move out” mentioned above mean the following: as soon as at least one pixel is in the pre-rendered area, the element has moved into (or remains on) the screen; as soon as no pixel is in the pre-rendered area, it has moved out of (or does not exist on) the screen.
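    The “one pixel counts” rule in the note above amounts to a simple interval-overlap test. The helper below is my own illustration, not part of the Gaia code:

    ```javascript
    // Illustration of the note above (not Gaia code): an element counts
    // as on screen if at least one pixel overlaps the pre-rendered area.
    // All values are page-relative pixel offsets.
    function overlapsPrerenderedArea(elemTop, elemBottom, areaTop, areaBottom) {
      // Overlap exists if the element starts above the area's bottom
      // and ends below the area's top.
      return elemTop < areaBottom && elemBottom > areaTop;
    }

    console.log(overlapsPrerenderedArea(90, 110, 100, 500)); // true: 10px inside
    console.log(overlapsPrerenderedArea(0, 100, 100, 500));  // false: zero pixels inside
    ```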

    Example: Music App (1.3T branch)

    One of my tasks was to add the Visibility Monitor to the 1.3T Music app. Because I lacked understanding of the Music app’s structure, I asked a colleague for help finding where to add it. There were three locations:

    • TilesView
    • ListView
    • SearchView

    Here we only take TilesView as the example and demonstrate how to add it. First, we use the App Manager to find the real HTML DOM Element in TilesView that does the scrolling:

    With the App Manager, we find that TilesView has views-tile, views-tiles-search, views-tiles-anchor, and li.tile (which is under all three of them). After testing, we can see that the scroll bar shows at views-tile; views-tiles-search is then automatically scrolled to an invisible location. Each tile exists as li.tile. Therefore, we set the container to views-tiles and tag to li. The following code calls the Visibility Monitor:

    monitorTagVisibility(
        document.getElementById('views-tiles'),
        'li',
        visibilityMargin,    // extra space top and bottom
        minimumScrollDelta,  // min scroll before we do work
        thumbnailOnscreen,   // set background image
        thumbnailOffscreen   // remove background image
    );

    In the code above, visibilityMargin is set to 360, which means 3/4 of the screen. minimumScrollDelta is set to 1, which means a recalculation happens on every pixel scrolled. thumbnailOnscreen and thumbnailOffscreen set the background image of the thumbnail or clean it up.

    The Effect

    We performed practical tests on the Tarako device. We launched the Music app and had it load nearly 200 MP3 files with cover images, about 900 MB in total. Without the Visibility Monitor, the memory usage of the Music app for images was as follows:

    ├──23.48 MB (41.04%) -- images
    │  ├──23.48 MB (41.04%) -- content
    │  │  ├──23.48 MB (41.04%) -- used
    │  │  │  ├──17.27 MB (30.18%) ── uncompressed-nonheap
    │  │  │  ├───6.10 MB (10.66%) ── raw
    │  │  │  └───0.12 MB (00.20%) ── uncompressed-heap
    │  │  └───0.00 MB (00.00%) ++ unused
    │  └───0.00 MB (00.00%) ++ chrome

    With the Visibility Monitor, we measured the memory usage again:

    ├───6.75 MB (16.60%) -- images
    │   ├──6.75 MB (16.60%) -- content
    │   │  ├──5.77 MB (14.19%) -- used
    │   │  │  ├──3.77 MB (09.26%) ── uncompressed-nonheap
    │   │  │  ├──1.87 MB (04.59%) ── raw
    │   │  │  └──0.14 MB (00.34%) ── uncompressed-heap
    │   │  └──0.98 MB (02.41%) ++ unused
    │   └──0.00 MB (00.00%) ++ chrome

    To compare both of them:

    ├──-16.73 MB (101.12%) -- images/content
    │  ├──-17.71 MB (107.05%) -- used
    │  │  ├──-13.50 MB (81.60%) ── uncompressed-nonheap
    │  │  ├───-4.23 MB (25.58%) ── raw
    │  │  └────0.02 MB (-0.13%) ── uncompressed-heap
    │  └────0.98 MB (-5.93%) ── unused/raw

    To make sure the Visibility Monitor works properly, we added more MP3 files, reaching about 400 files in total. The memory usage stayed around 7 MB. That’s really great progress for a 128 MB device.


    Honestly, we wouldn’t have to use the Visibility Monitor if there weren’t so many images: the Visibility Monitor always affects FPS, so in that case we can let Gecko deal with the situation. For apps which use lots of images, however, we can control memory usage through the Visibility Monitor: even if we increase the number of images, the memory usage stays stable.

    The margin and delta parameters of the Visibility Monitor will affect the FPS and memory usage, which can be concluded as follows:

    • higher margin: more memory usage; FPS closer to Gecko’s native scrolling
    • lower margin: less memory usage; lower FPS
    • higher delta: slightly more memory usage; higher FPS; higher chance of seeing unloaded images
    • lower delta: slightly less memory usage; lower FPS; lower chance of seeing unloaded images
  7. New on MDN: Sign in with Github!

    MDN now gives users more options for signing in!

    Sign in with GitHub

    Signing in to MDN previously required a Mozilla Persona account. Getting a Persona account is free and easy, but MDN analytics showed a steep drop-off at the “Sign in with Persona” interface. For example, almost 90% of signed-out users who clicked “Edit” never signed in, which means they never got to edit. That’s a lot of missed opportunities!

    It should be easy to join and edit MDN. If you click “Edit,” we should make it easy for you to edit. Our analysis demonstrated that most potential editors stumbled at the Persona sign in. So, we looked for ways to improve sign in for potential contributors.

    Common sense suggests that many developers have a GitHub account, and analysis confirms it. Of the MDN users who list external accounts in their profiles, approximately 30% include a GitHub account. GitHub is the 2nd-most common external account listed, after Twitter.

    That got us thinking: If we integrated GitHub accounts with MDN profiles, we could one day share interesting GitHub activity with each other on MDN. We could one day use some of GitHub’s tools to create even more value for MDN users. Most immediately, we could offer “sign in with GitHub” to at least 30% (but probably more) of MDN’s existing users.

    And if we did that, we could also offer “sign in with GitHub” to over 3 million GitHub users.

    The entire engineering team and MDN community helped make it happen.

    Authentication Library

    Adding the ability to authenticate with GitHub accounts required us to extend the way MDN handles authentication so that MDN users can add their GitHub accounts without effort. We reviewed the current code of kuma (the code base that runs MDN) and realized that it was deeply tied to how Mozilla Persona works technically.

    As we’re constantly trying to remove technical debt, that meant revisiting some decisions made years ago when the authentication code was written. After a review process we decided to replace our home-grown system, django-browserid, with a third-party library called django-allauth, a well-known system in the Django community that can use multiple authentication providers side-by-side – Mozilla Persona and GitHub in our case.
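    For context on what “multiple providers side-by-side” looks like, here is a minimal, hypothetical django-allauth settings sketch. This is not kuma’s actual configuration, which is more involved:

    ```python
    # Hypothetical django-allauth settings sketch, not kuma's real config.
    # allauth discovers authentication providers through INSTALLED_APPS.
    INSTALLED_APPS = [
        # ... Django contrib and site apps would go here ...
        "django.contrib.sites",
        "allauth",
        "allauth.account",
        "allauth.socialaccount",
        "allauth.socialaccount.providers.github",   # GitHub sign-in
        "allauth.socialaccount.providers.persona",  # Mozilla Persona, side by side
    ]

    # Keep Django's model backend alongside allauth's backend.
    AUTHENTICATION_BACKENDS = (
        "django.contrib.auth.backends.ModelBackend",
        "allauth.account.auth_backends.AuthenticationBackend",
    )

    SITE_ID = 1  # required by django.contrib.sites
    ```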

    One challenge was making sure that our existing user database could be ported over to the new system to reduce the negative impact on our users. To our surprise this was not a big problem and could be automated with a database migration, a special piece of code that converts the data into the new format. We implemented the new authentication library and migrated accounts to it several months ago; MDN has been using django-allauth for Mozilla Persona authentication since then.

    UX Challenges

    We wanted our users to experience a fast and easy sign-up process with the goal of having them edit MDN content at the end. Some things we did in the interface to support this:

    • Remember why the user is signing up and return them to that task when sign up is complete.
    • Pre-fill the username and email address fields with data from GitHub (including pre-checking if they are available).
    • Trust GitHub as a source of confirmed email address so we do not have to confirm the email address before the user can complete signing up.
    • Standardise our language (this is harder than it sounds). Users on MDN “sign in” to their “MDN profile” by connecting “accounts” on other “services”. See the discussion.

    One of our biggest UX challenges was allowing existing users to sign in with a new authentication provider. In this case, the user needs to “claim” an existing MDN profile after signing in with a new service, or needs to add a new sign-in service to their existing profile. We put a lot of work into making sure this was easy both from the user’s profile if they signed in with Persona first and from the sign-up flow if they signed in with GitHub first.

    We started with an ideal plan for the UX but expected to make changes once we had a better understanding of what allauth and GitHub’s API are capable of. It was much easier to smooth the kinks out of the flow once we were able to click around and try it ourselves. This was facilitated by the way MDN uses feature toggles for testing.

    Phased Testing & Release

    This project could potentially corrupt profile or sign-in data, and changes one of our most essential interfaces – sign up and sign in. So, we made a careful release plan with several waves of functional testing.

    We love to alpha- and beta-test changes on MDN with feature toggles. To toggle features we use the excellent django-waffle feature-flipper by James Socol – MDN Manager Emeritus.

    We deployed the new code to our MDN development environment every day behind a feature toggle. During this time MDN engineers exercised the new features heavily, finding and filing bugs under our master tracking bug.

    When the feature set was relatively complete, we created our beta test page and toggled the feature on our MDN staging environment for even more review. We did end-to-end UX testing, invited internal Mozilla staff to help us beta test, filed a lot of UX bugs, and started to triage and prioritize launch blockers.

    Next, we started an open beta by posting a site-wide banner on the live site, inviting anyone to test and file bugs. 365 beta testers participated in this round of QA. We also asked Mozilla WebQA to help deep-dive into the feature on our stage server. We only received a handful of bugs, which gave us great confidence about a final release.


    It was a lot of work, but all the pieces finally came together and we launched. Because of our extensive testing and release plan, we had zero incidents at launch: no downtime, no stack traces, no new bugs reported. We’re excited to release this feature, to give more options and features to our incredible MDN users and contributors, and to invite each and every GitHub user to join the Mozilla Developer Network. Together we can make the web even more awesome. Sign in now.


    Now that we have worked out the infrastructure and UX challenges associated with multi-account authentication, we can look for other promising authentication services to integrate with. For example, Firefox Accounts (FxA) is the authentication service that powers Firefox Sync. FxA is integrated with Firefox and will soon be integrated with a variety of other Mozilla services. As more developers sign up for Firefox Accounts, we will look for opportunities to add it to our authentication options.

  8. Creating a mobile app from a simple HTML site

    This article is a simple tutorial designed to teach you some fundamental skills for creating cross platform web applications. You will build a sample School Plan app, which will provide a dynamic “app-like” experience across many different platforms and work offline. It will use Apache Cordova and Mozilla’s Brick web components.

    The story behind the app, written by Piotr

    I’ve got two kids and I’m always forgetting their school plan, as are they. Certainly I could copy the HTML to JSFiddle and load the plan as a Firefox app. Unfortunately this would not load offline, and currently would not work on iOS. Instead I would like to create an app that could be used by everyone in our family, regardless of the device they choose to use.

    We will build

    A mobile application which will:

    1. Display school plan(s)
    2. Work offline
    3. Work on many platforms

    Prerequisite knowledge

    • You should understand the basics of HTML, CSS and JavaScript before getting started.
    • Please also read the instructions on how to load any stage in this tutorial.
    • The Cordova documentation would also be a good thing to read, although we’ll explain the bits you need to know below.
    • You could also read up on Mozilla Brick components to find out what they do.


    Before building up the sample app, you need to prepare your environment.

    Installing Cordova

    We’ve decided to use Apache Cordova for this project as it’s currently the best free tool for delivering HTML apps to many different platforms. You can build up your app using web technologies and then get Cordova to automatically port the app over to the different native platforms. Let’s get it installed first.

    1. First install NodeJS: Cordova is a NodeJS package.
    2. Next, install Cordova globally using the npm package manager:
      npm install -g cordova

    Note: On Linux or OS X, you may need to have root access.

    Installing the latest Firefox

    If you haven’t updated Firefox for a while, you should install the latest version to make sure you have all the tools you need.

    Installing Brick

    Mozilla Brick is a tool built for app developers. It’s a set of ready-to-use web components that allow you to build up and use common UI components very quickly.

    1. To install Brick we will need to use the Bower package manager. Install this, again using npm:
      npm install -g bower
    2. You can install Brick for your current project using
      bower install mozbrick/brick

      but don’t do this right now — you need to put this inside your project, not just anywhere.

    Getting some sample HTML

    Now you should find some sample HTML to use in the project — copy your own children’s online school plans for this purpose, or use our sample if you don’t have any but still want to follow along. Save your markup in a safe place for now.
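    If you don’t have a school plan handy, markup along these lines is the kind of thing the later stages assume — one heading plus one table per plan (the names and lessons here are invented purely for illustration):

```html
<h2>Angelica</h2>
<table>
  <thead>
    <tr><th>Hour</th><th>Monday</th><th>Tuesday</th></tr>
  </thead>
  <tbody>
    <tr><td>8:00</td><td>Maths</td><td>Biology</td></tr>
    <tr><td>9:00</td><td>English</td><td>Maths</td></tr>
  </tbody>
</table>
```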

    Stage 1: Setting up the basic HTML project

    In this part of the tutorial we will set up the basic project, and display the school plans in plain HTML. See the stage 1 code on GitHub if you want to see what the code should look like at the end of this section.

    1. Start by setting up a plain Cordova project. On your command line, go to the directory in which you want to create your app project, and enter the following command:
      cordova create school-plan com.example.schoolplan SchoolPlan

      This will create a school-plan directory containing some files.

    2. Inside school-plan, open www/index.html in your text editor and remove everything from inside the <body> element.
    3. Copy the school plan HTML you saved earlier into separate elements. This can be structured however you want, but we’d recommend using HTML <table>s for holding each separate plan.
    4. Change the styling contained within www/css/index.css if you wish, to make the tables look how you want. We’ve chosen to use “zebra striping” for ease of reading.
      table {
        width: 100%;
        border-collapse: collapse;
        font-size: 10px;
      }
      th {
        font-size: 12px;
        font-weight: normal;
        color: #039;
        padding: 10px 8px;
      }
      td {
        color: #669;
        padding: 8px;
      }
      tbody tr:nth-child(odd) {
        background: #e8edff;
      }
    5. To test the app quickly and easily, add the firefoxos platform as a cordova target and prepare the application by entering the following two commands:
      cordova platform add firefoxos
      cordova prepare

      The last step is needed every time you want to check the changes.

    6. Open the App Manager in the Firefox browser. Press the [Add Packaged App] button and navigate to the prepared firefoxos app directory, which should be available in school-plan/platforms/firefoxos/www.

      Note: If you are running Firefox Aurora or Nightly, you can do these tasks using our new WebIDE tool, which has a similar but slightly different workflow to the App Manager.

    7. Press the [Start Simulator] button then [Update] and you will see the app running in a Firefox OS simulator. You can inspect, debug and profile it using the App Manager — read Using the App Manager for more details. (Screenshot: App Manager buttons)
    8. Now let’s export the app as a native Android APK so we can see it working on that platform. Add the platform and get Cordova to build the apk file with the following two commands:
      cordova platform add android
      cordova build android
    9. The APK is built in school-plan/platforms/android/ant-build/SchoolPlan-debug.apk — read the Cordova Android Platform Guide for more details on how to test this.

    (Screenshot: Stage 1 result)

    Stage 2

    In Stage 2 of our app implementation, we will look at using Brick to improve the user experience of our app. Instead of having to potentially scroll through a lot of lesson plans to find the one you want, we’ll implement a Brick custom element that allows us to display different plans in the same place.

    You can see the finished Stage 2 code on Github.

    1. First, run the following command to install the entire Brick codebase into the app/bower_components directory.
      bower install mozbrick/brick
    2. We will be using the brick-deck component. This provides a “deck of cards” type interface that displays one brick-card while hiding the others. To make use of it, add the following code to the <head> of your index.html file, to import its HTML and JavaScript:
      <script src="app/bower_components/brick/dist/platform/platform.js"></script>
      <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
    3. Next, all the plans need to be wrapped inside a <brick-deck> custom element, and every individual plan should be wrapped inside a <brick-card> custom element — the structure should end up similar to this:
      <brick-deck id="plan-group" selected-index="0">
        <brick-card selected>
            <!-- school plan 1 -->
        </brick-card>
        <brick-card>
            <!-- school plan 2 -->
        </brick-card>
      </brick-deck>
    4. The brick-deck component requires that you set the height of the <html> and <body> elements to 100%. Add the following to the css/index.css file:
      html, body {height: 100%}
    5. When you run the application, the first card should be visible while the others remain hidden. To handle this we’ll now add some JavaScript to the mix. First, add some <script> elements to link the necessary JavaScript files to the HTML:
      <script type="text/javascript" src="cordova.js"></script>
      <script type="text/javascript" src="js/index.js"></script>
    6. cordova.js contains useful general Cordova-specific helper functions, while index.js will contain our app’s specific JavaScript. index.js already contains a definition of an app variable, and the app starts running once app.initialize() is called. It’s a good idea to call this when the window is loaded, so add the following:
      window.onload = function() { app.initialize(); };
    7. Cordova adds a few events; one of which — deviceready — is fired after all Cordova code is loaded and initiated. Let’s put the main app action code inside this event’s callback — app.onDeviceReady.
      onDeviceReady: function() {
          // starts when the device is ready
      },
    8. Brick adds a few functions and attributes to all its elements. In this case loop and nextCard are added to the <brick-deck> element. As it includes an id="plan-group" attribute, the appropriate way to get this element from the DOM is document.getElementById. We want the cards to switch when the touchstart event is fired; at this point nextCard will be called from the callback app.nextPlan.
      onDeviceReady: function() {
          app.planGroup = document.getElementById('plan-group');
          app.planGroup.loop = true;
          app.planGroup.addEventListener('touchstart', app.nextPlan);
      },
      nextPlan: function() {
          app.planGroup.nextCard();
      },

    (Animation: Stage 2 result)

    Stage 3

    In this section of the tutorial, we’ll add a menu bar with the name of the currently displayed plan, to provide an extra usability enhancement. See the finished Stage 3 code on GitHub.

    1. To implement the menu bar, we will use Brick’s brick-tabbar component. We first need to import the component. Add the following lines to the <head> of your HTML:
      <script src="app/bower_components/brick/dist/platform/platform.js"></script>
      <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
      <link rel="import" href="app/bower_components/brick-tabbar/dist/brick-tabbar.html">
    2. Next, add an id to all the cards and include them as the values of target attributes on brick-tabbar-tab elements like so:
      <brick-tabbar id="plan-group-menu" selected-index="0">
          <brick-tabbar-tab target="angelica">Angelica</brick-tabbar-tab>
          <brick-tabbar-tab target="andrew">Andrew</brick-tabbar-tab>
      </brick-tabbar>
      <brick-deck id="plan-group" selected-index="0">
          <brick-card selected id="angelica">
              <!-- Angelica’s plan -->
          </brick-card>
          <!-- further cards ... -->
      </brick-deck>
    3. The Deck’s nextCard method is called by Brick behind the scenes using the tab’s reveal event. The cards will change when the tabbar element is touched. The app got simpler, as we are now using built-in Brick functionality rather than our own custom code and Cordova functionality. If you wished to end the tutorial here, you could safely remove the <script> elements that link to index.js and cordova.js from the index.html file.

    (Animation: Stage 3 result)

    Stage 4

    To further improve the user experience on touch devices, we’ll now add functionality to allow you to swipe left/right to navigate between cards. See the finished stage 4 code on GitHub.

    1. Switching cards is currently done using the tabbar component. To keep the selected tab in sync with the current card, you need to link them back; this is done by listening to the show event of each card. For each tab stored in app.planGroupMenu.tabs:
      tab.targetElement.addEventListener('show', function() {
          // select the tab
          this.tabElement.select();
      });
    2. Because of a race condition (planGroupMenu.tabs might not exist when the app is initialized), polling is used to wait until the right moment before trying to assign the events:
      function assignTabs() {
          if (!app.planGroupMenu.tabs) {
              return window.setTimeout(assignTabs, 100);
          }
          // proceed
      }

      The code for linking the tabs to their associated cards looks like so:

      onDeviceReady: function() {
          app.planGroupMenu = document.getElementById('plan-group-menu');
          function assignTabs() {
              if (!app.planGroupMenu.tabs) {
                  return window.setTimeout(assignTabs, 100);
              }
              for (var i = 0; i < app.planGroupMenu.tabs.length; i++) {
                  var tab = app.planGroupMenu.tabs[i];
                  tab.targetElement.tabElement = tab;
                  tab.targetElement.addEventListener('show', function() {
                      this.tabElement.select();
                  });
              }
          }
          assignTabs();
          // continue below ...
    3. Detecting a one finger swipe is pretty easy in a Firefox OS app. Two callbacks are needed to listen to the touchstart and touchend events and calculate the delta on the pageX parameter. Unfortunately Android and iOS do not fire the touchend event if the finger has moved. The obvious move would be to listen to the touchmove event, but that is fired only once, as it’s intercepted by the scroll event. The best way forward is to prevent the default scrolling behaviour by calling preventDefault() in the touchmove callback. That way scrolling is switched off, and the functionality can work as expected:
      // ... continuation
      app.planGroup = document.getElementById('plan-group');
      var startX = null;
      var slideThreshold = 100;
      function touchStart(sX) {
          startX = sX;
      }
      function touchEnd(endX) {
          var deltaX;
          if (startX) {
              deltaX = endX - startX;
              if (Math.abs(deltaX) > slideThreshold) {
                  startX = null;
                  if (deltaX > 0) {
                      app.planGroup.previousCard();
                  } else {
                      app.planGroup.nextCard();
                  }
              }
          }
      }
      app.planGroup.addEventListener('touchstart', function(evt) {
          var touches = evt.changedTouches;
          if (touches.length === 1) {
              touchStart(touches[0].pageX);
          }
      });
      app.planGroup.addEventListener('touchmove', function(evt) {
          evt.preventDefault();
          touchEnd(evt.changedTouches[0].pageX);
      });

    You can add as many plans as you like — just make sure that their titles fit on the screen in the tabbar. Actions will be assigned automatically.
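    Extending the app with a further plan is then just one new tab plus one new card. A hypothetical third plan could be added like this (the “...” stands for the existing tabs and cards):

```html
<brick-tabbar id="plan-group-menu" selected-index="0">
    ...
    <brick-tabbar-tab target="piotr">Piotr</brick-tabbar-tab>
</brick-tabbar>
<brick-deck id="plan-group" selected-index="0">
    ...
    <brick-card id="piotr">
        <!-- Piotr's plan -->
    </brick-card>
</brick-deck>
```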

    (Screenshot: Stage 4 result)

    To be continued …

    We’re preparing the next part, in which this app will evolve into a marketplace app with downloadable plans. Stay tuned!

  9. Passwordless authentication: Secure, simple, and fast to deploy

    Passwordless is an authentication middleware for Node.js that improves security for your users while being fast and easy to deploy.

    The last months were very exciting for everyone interested in web security and privacy: Fantastic articles, discussions, and talks but also plenty of incidents that raised awareness.

    Most websites are, however, still stuck with the same authentication mechanism as from the earliest days of the web: username and password.

    While username and password have their place, we should be much more critical about whether they are the right solution for our projects. We know that most people use the same password on all the sites they visit. For projects without dedicated security experts, should we really open up our users to the risk that a breach of our site also compromises their Amazon account? Also, the classic mechanism has by default at least two attack vectors: the login page and the password recovery page. The latter especially is often implemented hurriedly and is hence inherently more risky.

    We’ve seen quite a few great ideas recently and I got particularly excited by one very straightforward and low-tech solution: one-time passwords. They are fast to implement, have a small attack surface, and require neither QR codes nor JavaScript. Whenever a user wants to log in or has her session invalidated, she receives a short-lived one-time link with a token via email or text message. If you want to give it a spin, feel free to test the demo on

    Unfortunately, depending on your technology stack, there are few to no ready-made solutions out there. Passwordless changes this for Node.js.

    Getting started with Node.js & Express

    Getting started with Passwordless is straight-forward and you’ll be able to deploy a fully fledged and secure authentication solution for a small project within two hours:

    $ npm install passwordless --save

    gets you the basic framework. You’ll also want to install one of the existing storage interfaces, such as MongoStore, which stores the tokens securely:

    $ npm install passwordless-mongostore --save

    To deliver the tokens to the users, email would be the most common option (but text message is also feasible) and you’re free to pick any of the existing email frameworks such as:

    $ npm install emailjs --save

    Setting up the basics

    Let’s require all of the above mentioned modules in the same file that you use to initialise Express:

    var passwordless = require('passwordless');
    var MongoStore = require('passwordless-mongostore');
    var email   = require("emailjs");

    If you’ve chosen emailjs for delivery that would also be a great moment to connect it to your email account (e.g. a Gmail account):

    var smtpServer  = email.server.connect({
       user:    yourEmail,
       password: yourPwd,
       host:    yourSmtp,
       ssl:     true
    });
    The final preliminary step would be to tell Passwordless which storage interface you’ve chosen above and to initialise it:

    // Your MongoDB TokenStore
    var pathToMongoDb = 'mongodb://localhost/passwordless-simple-mail';
    passwordless.init(new MongoStore(pathToMongoDb));

    Delivering a token

    passwordless.addDelivery(deliver) adds a new delivery mechanism. deliver is called whenever a token has to be sent. By default, the mechanism you choose should provide the user with a link in the following format: http://www.example.com?token={TOKEN}&uid={UID}

    deliver will be called with all the needed details. Hence, the delivery of the token (in this case with emailjs) can be as easy as:

    passwordless.addDelivery(
        function(tokenToSend, uidToSend, recipient, callback) {
            var host = 'localhost:3000';
            smtpServer.send({
                text:    'Hello!\nAccess your account here: http://'
                + host + '?token=' + tokenToSend + '&uid='
                + encodeURIComponent(uidToSend),
                from:    yourEmail,
                to:      recipient,
                subject: 'Token for ' + host
            }, function(err, message) {
                if(err) {
                    console.log(err);
                }
                callback(err);
            });
        });

    Initialising the Express middleware

    app.use(passwordless.sessionSupport());
    app.use(passwordless.acceptToken({ successRedirect: '/'}));

    sessionSupport() makes the login persistent, so the user will stay logged in while browsing your site. Please make sure that you’ve already prepared your session middleware (such as express-session) beforehand.

    acceptToken() will intercept any incoming tokens, authenticate users, and redirect them to the correct page. While the option successRedirect is not strictly needed, it is strongly recommended to use it to avoid leaking valid tokens via the referrer header of outgoing HTTP links on your site.

    Routing & Authenticating

    The following takes for granted that you’ve already set up your router (var router = express.Router();) as explained in the Express docs.

    You will need at least two URLs to:

    • Display a page asking for the user’s email
    • Accept the form details (via POST)
    /* GET: login screen */
    router.get('/login', function(req, res) {
        res.render('login');
    });

    /* POST: login details */
    router.post('/sendtoken',
        function(req, res, next) {
            // TODO: Input validation
            next();
        },
        // Turn the email address into a user ID
        passwordless.requestToken(
            function(user, delivery, callback) {
                // E.g. if you have a User model:
                User.findUser(email, function(error, user) {
                    if(error) {
                        callback(error.toString());
                    } else if(user) {
                        // return the user ID to Passwordless
                        callback(null, user.id);
                    } else {
                        // If the user couldn’t be found: Create it!
                        // You can also implement a dedicated route
                        // to e.g. capture more user details
                        User.createUser(email, '', '',
                            function(error, user) {
                                if(error) {
                                    callback(error.toString());
                                } else {
                                    callback(null, user.id);
                                }
                            });
                    }
                });
            }),
        function(req, res) {
            // Success! Tell your users that their token is on its way
        });

    What happens here? passwordless.requestToken(getUserId) has two tasks: Making sure the email address exists and transforming it into a unique user ID that can be sent out via email and can be used for identifying users later on. Usually, you’ll already have a model that is taking care of storing your user details and you can simply interact with it as shown in the example above.

    In some cases (think of a blog edited by just a couple of users) you can also skip the user model entirely and just hardwire valid email addresses with their respective IDs:

    var users = [
        { id: 1, email: '' },
        { id: 2, email: '' }
    ];

    /* POST: login details */
    router.post('/sendtoken',
        passwordless.requestToken(
            function(user, delivery, callback) {
                for (var i = users.length - 1; i >= 0; i--) {
                    if(users[i].email === user.toLowerCase()) {
                        return callback(null, users[i].id);
                    }
                }
                callback(null, null);
            }),
        // Same as above…

    HTML pages

    All it needs is a simple HTML form capturing the user’s email address. By default, Passwordless will look for an input field called user:

            <form action="/sendtoken" method="POST">
                <br /><input name="user" type="text">
                <br /><input type="submit" value="Login">
            </form>

    Protecting your pages

    Passwordless offers middleware to ensure only authenticated users get to see certain pages:

    /* Protect a single page */
    router.get('/restricted', passwordless.restricted(),
     function(req, res) {
      // render the secret page
    });

    /* Protect a path with all its children */
    router.use('/admin', passwordless.restricted());

    Who is logged in?

    By default, Passwordless makes the user ID available through the request object: req.user. To display it, or to use it to pull further details from the database, you can do the following:

    router.get('/admin', passwordless.restricted(),
        function(req, res) {
            res.render('admin', { user: req.user });
        });

    Or, more generally, you can add another middleware that pulls the whole user record from your model and makes it available to any route on your site:

    app.use(function(req, res, next) {
        if(req.user) {
            User.findById(req.user, function(error, user) {
                res.locals.user = user;
                next();
            });
        } else {
            next();
        }
    });

    That’s it!

    That’s all it takes to let your users authenticate securely and easily. For more details you should check out the deep dive which explains all the options and the example that will show you how to integrate all of the things above into a working solution.


    As mentioned earlier, all authentication systems have their tradeoffs and you should pick the right system for your needs. Token-based channels share one risk with the majority of other solutions, including the classic username/password scheme: if the user’s email account, or the channel between your SMTP server and the user’s, is compromised, the user’s account on your site will be compromised as well. Two default options help mitigate (but not entirely eliminate!) this risk: short-lived tokens and automatic invalidation of the tokens after they’ve been used once.
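    Both defaults can be illustrated with a tiny in-memory sketch. All names here are invented for illustration; real TokenStores such as MongoStore also hash tokens before storing them:

```javascript
// Sketch of the two default mitigations: a token that expires after a
// TTL, and that is deleted the moment it is successfully used.
function TokenStore(ttlMs) {
    this.ttlMs = ttlMs;
    this.tokens = {}; // uid -> { token, expires }
}

TokenStore.prototype.store = function(uid, token) {
    this.tokens[uid] = { token: token, expires: Date.now() + this.ttlMs };
};

// Returns true at most once per stored token, and never after the TTL.
TokenStore.prototype.authenticate = function(uid, token) {
    var entry = this.tokens[uid];
    if (!entry || entry.token !== token || Date.now() > entry.expires) {
        return false;
    }
    delete this.tokens[uid]; // invalidate after first use
    return true;
};

var store = new TokenStore(15 * 60 * 1000);
store.store('user1', 'abc123');
console.log(store.authenticate('user1', 'abc123')); // true
console.log(store.authenticate('user1', 'abc123')); // false: single-use
```

    A leaked link is therefore only dangerous for a short window, and only if the legitimate user hasn’t already clicked it.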

    For most sites token-based authentication represents a step up in security: users don’t have to think of new passwords (which are usually too simple) and there is no risk of users reusing passwords. For us as developers, Passwordless offers a solution that has only one (and simple!) path of authentication that is easier to understand and hence to protect. Also, we don’t have to touch any user passwords.

    Another point is usability. We should consider both the first-time usage of your site and subsequent logins. For first-time users, token-based authentication couldn’t be more straightforward: they will still have to validate their email address as they would with classic login mechanisms, but in the best-case scenario there will be no additional details required. No creativity needed to come up with a password that fulfils all restrictions, and nothing to memorise. If the user logs in again, the experience depends on the specific use case. Most websites have relatively long session timeouts and logins are relatively rare. Or, people’s visits to the website are actually so infrequent that they will have difficulty recalling whether they already had an account and, if so, what the password could have been. In those cases Passwordless presents a clear advantage in terms of usability. Also, there are only a few steps to take and those can be explained very clearly along the way. Websites that users visit frequently and/or that have conditioned people to log in several times a week (think of Amazon) might, however, benefit from a classic (or even better: two-factor) approach, as people will likely be aware of their passwords and there might be more opportunity to convince users of the importance of good passwords.

    While Passwordless is considered stable, I would love your comments and contributions on GitHub or your questions on Twitter: @thesumofall

  10. Unity games in WebGL: Owlchemy Labs’ conversion of Aaaaa! to asm.js

    You may have seen the big news today, but for those who’ve been living in an Internet-less cave, starting today through October 28 you can check out the brand spankin’ new Humble Mozilla Bundle. The crew here at Owlchemy Labs were given the unique opportunity to work closely with Unity, maker of the leading cross-platform game engine, and Humble to attempt to bring one of our games, Aaaaa! for the Awesome, a collaboration with Dejobaan Games, to the web via technologies like WebGL and asm.js.

    I’ll attempt to enumerate some of the technical challenges we hit along the way as well as provide some tips for developers who might follow our path in the future.

    Unity WebGL exporter

    Working with pre-release alpha versions of the Unity WebGL exporter (now in beta) was a surprisingly smooth experience overall! Jonas Echterhoff, Ralph Hauwert and the rest of the team at Unity did an amazing job getting the core engine running with asm.js and playing Unity content in the browser at incredible speeds; it was pretty staggering. When you look at the scope of the problem and the technical magic needed to go all the way from C# scripting down to the final 1-million-plus-line .js file, the technology is mind boggling.

    Thankfully, Unity has allowed us as content creators and game developers to focus our worries away from the problem of getting our games to compile on this new build target, by taking care of the heavy lifting under the hood. So did we just hit the big WebGL export button and sit back while Unity cranked out the HTML and JS? Well, it’s a bit more involved than that, but it’s certainly better than some of the prior early-stage ports we’ve done.

    For example, our experience with bringing a game through the now defunct Unity to Stage3D/Flash exporter during the Flash in a Flash contest in late 2011 was more like taking a machete to a jungle of code, hacking away core bits, working around inexplicably missing core functionality (no generic lists?!) and making a mess of our codebase. WebGL was a breeze comparatively!

    The porting process

    Our porting process began in early June of this year when we gained alpha access to the WIP WebGL exporter to prove whether a complex game like Aaaaa! for the Awesome was going to be portable within a relatively short time frame with such an early framework. After two days of mucking about with the exporter, we knew it would be doable (and had content actually running in-browser!) but as with all tech endeavors like this, we were walking in blind as to the scope of the entire port that was ahead of us.

    Would we hit one or two bugs? Hundreds? Could it be completed in the short timespan we were given? Thankfully we made it out alive and dozens of bug reports and fixes later, we have a working game! Devs jumping into this process now (October 2014 and onward) fortunately get all of these fixes built in from the start and can benefit from a much smoother pipeline from Unity to WebGL. The exporter has improved by a huge amount since June!

    Initial issues

    We came across some silly issues that were caused either by our project’s upgrade from Unity 4 to Unity 5 or simply by the exporter being in such “early days”. Fun little things such as all mouse cursor coordinates being inexplicably inverted caused some baffled faces, but had of course been fixed at the time of writing. We also hit some physics-related bugs that turned out to have been caused by the Unity 4 to Unity 5 upgrade — this led to a hilarious bug where players wouldn’t smash through score plates and get points, but instead slammed into score plates as if they were made of concrete, instantly crushing the skydiving player. A fun new feature!

    Additionally, we came across a very hard-to-track-down memory leak bug that only exhibited itself after playing the game for an extended session. With a hunch that the leak revolved around scene loading and unloading, we built a hands-off repro case that loaded and unloaded the same scene hundreds of times, causing the crash and helping the Unity team find and fix the leak! Huzzah!

    Bandwidth considerations

    The above examples are fun to talk about but have essentially been solved by this point. That leaves developers with two core development issues that they’ll need to keep in mind when bringing games to the Web: bandwidth considerations, and form factor / user experience changes.

    Aaaaa! is a great test case for a worst-case scenario when it comes to file size. We have a game with over 200 levels or zones, over 300 level assets that can be spawned at runtime in any level, 48 unique skyboxes (6 textures per sky!), and 38 full-length songs. Our standalone PC/Mac build weighs in at 388 MB uncompressed. Downloading almost 400 megabytes to get to the title screen of our game would be completely unacceptable!

    In our case, we were able to rely on Unity’s build process to efficiently strip and pack the build into a much smaller size, but we also took advantage of Unity’s AudioClip streaming solution to stream in our music at runtime, on demand! The file size savings from streaming music were huge, and we highly recommend it for all Unity games. To glean additional file size savings, Asset Bundles can be used for loading levels on demand, but they are best used in simple games or when building games from the ground up with the Web in mind.
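    For the curious, on-demand level loading with Asset Bundles looked roughly like this in 2014-era Unity. This is a sketch, not our shipping code — the URL, version number, and level name are placeholders:

    ```csharp
    using UnityEngine;
    using System.Collections;

    // Hedged sketch of on-demand level loading with Asset Bundles.
    public class LevelStreamer : MonoBehaviour
    {
        IEnumerator LoadLevelBundle(string url, int version)
        {
            // LoadFromCacheOrDownload caches the bundle locally, so repeat
            // visits skip the download entirely.
            using (WWW www = WWW.LoadFromCacheOrDownload(url, version))
            {
                yield return www;
                if (!string.IsNullOrEmpty(www.error))
                {
                    Debug.LogError(www.error);
                    yield break;
                }
                AssetBundle bundle = www.assetBundle;
                // Once the scene bundle is in memory, load the level by name.
                Application.LoadLevelAdditive("SomeLevel"); // placeholder name
                bundle.Unload(false); // keep loaded assets, free the bundle
            }
        }
    }
    ```

    The trade-off is exactly the one mentioned above: bundles add pipeline complexity, so they pay off most when the game is structured around them from the start.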

    In the end, our final *compressed* WebGL build, which includes all of our loaded assets as well as the Unity engine itself, ended up weighing in at 68.8 MB, compared to a *compressed* standalone size of 192 MB: almost 3x smaller than our PC build!

    Form factor/user experience changes

    User experience considerations are the other important factor to keep in mind when developing games for the Web, or when porting existing games to be fun, playable Web experiences. Examples of respecting the form factor of the Web include avoiding “sacred” key presses, such as Escape. Escape serves as the pause key in many games, but many browsers eat the Escape key and reserve it for exiting full-screen mode or releasing mouse lock. Mouse lock and full-screen are both important to creating fully fledged gaming experiences on the Web, so you’ll want to find a way to re-bind keys to avoid those special key presses that are off-limits in the browser.
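    The fix is as simple as it sounds: route pause through a rebindable key rather than hard-coding Escape. A minimal sketch, where the default of P and the component name are our placeholder choices:

    ```csharp
    using UnityEngine;

    // Sketch of avoiding browser-reserved keys: pause on a rebindable key
    // instead of Escape.
    public class PauseInput : MonoBehaviour
    {
        public KeyCode pauseKey = KeyCode.P;

        void Update()
        {
            // In the browser, Escape is reserved for exiting full-screen
            // and releasing mouse lock, so never rely on it for pause.
            if (Input.GetKeyDown(pauseKey))
                Time.timeScale = (Time.timeScale == 0f) ? 1f : 0f;
        }
    }
    ```

    Exposing the binding in an options menu also covers keyboard layouts where your chosen default is awkward.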

    Secondly, you’ll want to remember that you’re working within a sandboxed environment on the Web, so loading custom music from the user’s hard drive or saving large files locally can be problematic. It’s worth evaluating which features of your game should be modified to fit the Web experience vs. the desktop experience.
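    For small amounts of save data, you can stay comfortably inside the sandbox. A hedged sketch, assuming PlayerPrefs (which Unity backs with browser-side storage on the WebGL target) and with the key names invented for illustration:

    ```csharp
    using UnityEngine;

    // Sketch: small saves via PlayerPrefs keep you inside the browser
    // sandbox instead of touching the local file system.
    public class SaveData : MonoBehaviour
    {
        public void SaveProgress(int highestLevel, float bestScore)
        {
            PlayerPrefs.SetInt("highestLevel", highestLevel); // placeholder key
            PlayerPrefs.SetFloat("bestScore", bestScore);     // placeholder key
            PlayerPrefs.Save(); // flush to backing storage
        }

        public int LoadHighestLevel()
        {
            // Second argument is the default when no save exists yet.
            return PlayerPrefs.GetInt("highestLevel", 0);
        }
    }
    ```

    Large binary saves and arbitrary file access are where the sandbox bites; key/value progress data like this generally isn’t.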

    Players also notice the little things that clue someone in to a game being a rushed port. For example, if you have a quit button on the title screen of your PC game, you should definitely remove it in your web build, as quitting is not a paradigm used on the Web. At any point the user can simply navigate away from the page, so watch out for elements in your game that don’t fit the current web ecosystem.

    Lastly, you’ll want to think about ways to let your data persist across multiple browsers on different machines. Gamers don’t always sit at the same machine to play their games, which is why many services offer cloud save functionality. The same goes for the Web: if you can build such a system (like the one the wonderfully talented Edward Rudd created for the Humble Player), it will improve the overall web experience for the player.

    Bringing games to the Web!

    So with all of that being said, the Web seems like a very viable place to bring Unity content as the WebGL exporter solidifies. You can expect Owlchemy Labs to bring more of their games to the Web in the near future, so keep an eye out for those! ;) With our content running at almost the same speed as native desktop builds, we definitely have a revolution on our hands when it comes to portability of content: game developers are empowered with another outlet for their creative work, which is always a good thing.

    Thanks to Dejobaan Games, the team at Humble Bundle, and of course the team at Unity for making all of this possible!