Canvas Articles


  1. Canvas 2D: New docs, Path2D objects, hit regions

    Over the last year, a couple of new HTML Canvas 2D features were implemented in Firefox and other recent browsers, with the help of the Adobe Web Platform team. Over on MDN, the documentation for Canvas 2D got a major update to reflect the current canvas standard and browser implementation status. Let’s have a look at what is new and how you can use it to enhance your canvas graphics and gaming content.

    Path2D objects

    The new Path2D API (available from Firefox 31+) lets you store paths, which simplifies your canvas drawing code and makes it run faster. The constructor provides three ways to create a Path2D object:

    new Path2D();     // empty path object
    new Path2D(path); // copy from another path
    new Path2D(d);    // path from SVG path data

    The third version, which takes SVG path data to construct, is especially handy. You can now re-use your SVG paths to draw the same shapes directly on a canvas as well:

    var p = new Path2D("M10 10 h 80 v 80 h -80 Z");

    When constructing an empty path object, you can use the usual path methods, which might be familiar to you from using them directly on the CanvasRenderingContext2D context.

    // create a circle
    var circle = new Path2D();
    circle.arc(50, 50, 50, 0, 2 * Math.PI);
    // stroke the circle onto the context ctx
    ctx.stroke(circle);

    To actually draw the path onto the canvas, several methods of the context have been updated to take an optional Path2D path, among them fill(), stroke(), clip(), isPointInPath() and isPointInStroke().

    Hit regions

    Starting with Firefox 32, experimental support for hit regions has been added. You need to switch the canvas.hitregions.enabled preference to true in order to test them. Hit regions provide a much easier way to detect whether the mouse is in a particular area, without manually checking coordinates — which can be really difficult for complex shapes. The Hit Region API is pretty simple:

    • addHitRegion(options): adds a hit region to the canvas.
    • removeHitRegion(id): removes the hit region with the specified id from the canvas.
    • clearHitRegions(): removes all hit regions from the canvas.

    The addHitRegion method, well, adds a hit region to a current path or a Path2D path. The MouseEvent interface got extended with a region property, which you can use to check whether the mouse hit the region or not.
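    To see what that buys you, here is a minimal sketch of the manual hit testing that addHitRegion() and event.region replace (the helper names are made up for illustration):

```javascript
// Manual hit testing: keep a list of region objects, each knowing how
// to answer "does this point fall inside me?". With the Hit Region API
// the browser does this bookkeeping for you and fills in event.region.
function makeCircleRegion(id, cx, cy, r) {
  return {
    id: id,
    contains: function (x, y) {
      var dx = x - cx, dy = y - cy;
      return dx * dx + dy * dy <= r * r;
    }
  };
}

function hitRegion(regions, x, y) {
  for (var i = 0; i < regions.length; i++) {
    if (regions[i].contains(x, y)) return regions[i].id;
  }
  return null; // like event.region when the pointer hits no region
}

var regions = [makeCircleRegion("circle", 70, 80, 10)];
console.log(hitRegion(regions, 72, 83)); // "circle"
console.log(hitRegion(regions, 0, 0));   // null
```

    For anything more complicated than a circle or rectangle, this manual math gets painful quickly — which is exactly the pain hit regions remove.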

    Check out the code example on MDN to see it in action (and be sure to enable flags/preferences in at least Firefox and Chrome).

    Focus rings

    Also in Firefox 32, the drawFocusIfNeeded(element) method has been made available without a preference switch. This API allows you to draw focus rings on a canvas if a provided fallback element inside the <canvas></canvas> element gains focus. When the fallback element gets focused (for example when tabbing through the page that contains the canvas), a focus ring can be drawn around the shape that represents that element on the canvas, to indicate the current focus.

    CSS/SVG filters exposed to Canvas

    Firefox 35 gained support for filters on the canvas rendering context, although they are still behind a preference (canvas.filters.enabled) and not yet in the latest canvas specification (they are expected to be added). The syntax is the same as for the CSS filter property.



    If you would like to read more about Canvas 2D graphics, check out the Canvas tutorial on MDN, which guides you through the Canvas APIs. The large CanvasRenderingContext2D interface, which you will use often when working with canvas, is also a good bookmark.

  2. PlayCanvas Goes Open Source

    This is a guest post by Will Eastcott of the PlayCanvas engine. As outlined in What Mozilla Hacks is, we constantly cover interesting content about open source and the Open Web, from both external authors and Mozilla ones, so feel free to share with us!

    On March 22nd 2011, Mozilla released Firefox 4.0, which enabled WebGL by default. A month later, we formed PlayCanvas and started building a game engine unlike anything that had gone before. Fast forward three years, and WebGL is everywhere. Only this week, Apple announced support for WebGL in both OS X and iOS 8. So what better time to pass on some more exciting news for you:

    The PlayCanvas Engine has been open sourced!


    Introducing the PlayCanvas Engine

    The PlayCanvas Engine is a JavaScript library engineered specifically for building video games. It implements all of the major components that you need to write high quality games:

    • Graphics: model loading, per-pixel lighting, shadow mapping, post effects
    • Physics: rigid body simulation, ray casting, joints, trigger volumes, vehicles
    • Animation: keyframing, skeletal blending, skinning
    • Audio engine: 2D and 3D audio sources
    • Input devices: mouse, keyboard, touch and gamepad support
    • Entity-component system: high level game object management

    We had a couple of goals in mind when we originally designed the engine.

    1. It had to be easy to work with.
    2. It had to be blazingly fast.

    Simple Yet Powerful

    As a developer, you want well documented and well architected APIs. But you also want to be able to understand what’s going on under the hood and to debug when things go wrong. For this, there’s no substitute for a carefully hand-crafted, unminified, open source codebase.

    Additionally, you need great graphics, physics and audio engines. But the PlayCanvas Engine takes things a step further. It exposes a game framework that implements an entity-component system, allowing you to build the objects in your games as if they were made of Lego-like blocks of functionality. So what does this look like? Let’s check out a simple example on CodePen: a cannonball smashing a wall:


    As you can see from the Pen’s JS panel, in just over 100 lines of code, you can create, light, simulate and view interesting 3D scenes. Try forking the CodePen and change some values for yourself.

    Need For Speed

    To ensure we get great performance, we’ve built PlayCanvas as a hybrid of hand-written JavaScript and machine generated asm.js. The most performance critical portion of the codebase is the physics engine. This is implemented as a thin, hand-written layer that wraps Ammo.js, the Emscripten-generated JavaScript port of the open source physics engine Bullet. If you haven’t heard of Bullet before, it powers amazing AAA games like Red Dead Redemption and GTAV. So thanks to Mozilla’s pioneering work on Emscripten and asm.js, all of this power is also exposed via the PlayCanvas engine. Ammo.js executes at approximately 1.5x native code speed in recent builds of Firefox so if you think that complex physics simulation is just not practical with JavaScript, think again.

    But what about the non-asm.js parts of the codebase? Performance is clearly still super-important, especially for the graphics engine. The renderer is highly optimized to sort draw calls by material and eliminate redundant WebGL calls. It has also been carefully written to avoid making dynamic allocations to head off potential stalls due to garbage collection. So the code performs brilliantly but is also lightweight and human readable.

    Powering Awesome Projects

    The PlayCanvas Engine is already powering some great projects. By far and away, the biggest is the PlayCanvas web site: the world’s first cloud-hosted game development platform.

    For years, we’ve been frustrated with the limitations of current generation game engines. So shortly after starting work on the PlayCanvas Engine, we began designing a new breed of game development environment that would be:

    • Using any device with a web browser, plug in a URL and instantly access simple, intuitive yet powerful tools.
    • See what your teammates are working on in real-time, or just sit back and watch a game as it’s built live before your eyes.
    • Making games is easier with the help of others: be part of an online community of developers like you.

    PlayCanvas ticks all of these boxes beautifully. But don’t take our word for it – head over and discover a better way to make games.

    In fact, here’s a game we have built using these very tools. It’s called SWOOOP:


    It’s a great demonstration of what you can achieve with HTML5 and WebGL today. The game runs great in both mobile and desktop browsers, and you are free to deploy your PlayCanvas games to app stores as well. For Google Play and the iOS App Store, there are wrapping technologies available that can generate a native app of your game. Examples of these are Ludei’s CocoonJS and the open source Ejecta project. For Firefox OS, the process is a breeze since the OS treats HTML5 apps as first class citizens. PlayCanvas games will run out of the box.


    So if you think this is sounding tasty, where should you go to get started? The engine’s entire codebase is now live on GitHub:

    Get cloning, starring and forking while it’s fresh!

    Stay in the Loop

    Lastly, I want to give you some useful links that should help you stay informed and find help whenever you need it.

    We’re super excited to see what the open source community will do with the PlayCanvas Engine. So get creative and be sure to let us know about your projects.

    Toodle pip!

  3. How fast is PDF.js?

    Hi, my name is Thorben and I work at Opera Software in Oslo, not at Mozilla. So, how did I end up writing for Mozilla Hacks? Maybe you know that there is no default PDF viewer in the Opera Browser, something we would like to change. But how to include one? Buy it from Adobe or Foxit? Start our own?

    Introducing PDF.js

    While investigating our options we quickly stumbled upon PDF.js. The project aims to create a full-featured PDF viewer in the browser using JavaScript and Canvas. Yeah, it sounds a bit crazy, but it makes sense: browsers need to be good at processing text, images, fonts, and vector graphics — exactly the things a PDF viewer has to be good at. The draw commands in PDFs are a subset of Postscript, and they are not so different from what Canvas offers. Also security is virtually no issue: using PDF.js is as secure as opening any other website.

    Working on PDF.js

    So Christian Krebs, Mathieu Henri and myself began looking at PDF.js in more detail and were impressed: it’s well designed, seems fast and big parts of the code are just wow!

    But we also discovered some problems, mainly with performance on very large or graphics-heavy PDFs. We decided that the best way to get to know PDF.js better and to push the project further, was to help the project and address the major issues we found. This gave us a pretty good understanding of the project and its high potential. We were also very impressed by how much the performance of PDF.js improved while we worked on it. This is an active and well managed project.

    Benchmarking PDF.js

    Of course, our own tests gave us a skewed impression of performance. We had tried to find super large, awkward and hard-to-render PDFs, but that is not what most people want to view. Most PDFs you actually want to view in PDF.js are fine. But how to test that?

    Well, you could check the most popular PDFs on the Internet – as these are the ones you probably want to view – and benchmark them. A snapshot of 5 to 10k PDFs should be enough … but how do you get them?

    I figured that search engines would be my friend. If you tell them to search for PDFs only, they give you the most relevant PDFs for that keyword, which in turn are probably the most popular ones. And if you use the most searched keywords you end up with a good approximation.

    Benchmarking that many PDFs is a big task. So I got myself a small cluster of old computers and built a nice server application that supplied them with tasks. The current repository has almost 7000 PDFs and benchmarking one version of PDF.js takes around eight hours.

    The results

    Let’s skip to the interesting part with the pretty pictures. This graph


    gives us almost all the interesting results in one look. You see a histogram of the time it took to process all the pages in the PDFs, relative to the time it takes to process the average page of the Tracemonkey paper (the default PDF you see when opening PDF.js). The user experience when viewing the Tracemonkey paper is good, and from my tests even 3 to 4 times slower is still okay. That means over 96% of all benchmarked pages (excluding PDFs that crashed) translate to a good user experience. That is really good news! Or to use a very simple pie chart (in % of pages):


    You probably already noticed the small catch: around 0.8% of the PDFs crashed PDF.js when we tested them. We had a closer look at most of them and at least a third are actually so heavily damaged that probably no PDF viewer could ever display them.
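    The bucketing behind these percentages can be sketched as follows; the thresholds are the rough numbers from above (up to 3–4x the Tracemonkey reference page still feels fine), not exact figures from the study:

```javascript
// Classify a page's processing time relative to the average page of
// the Tracemonkey paper (the reference). Anything within ~4x of the
// reference still gives a good user experience; beyond that it is slow.
function classifyPage(pageMs, referenceMs) {
  var ratio = pageMs / referenceMs;
  if (ratio <= 1) return "fast";
  if (ratio <= 4) return "good";
  return "slow";
}

console.log(classifyPage(30, 40));  // "fast"
console.log(classifyPage(120, 40)); // "good"  (3x the reference)
console.log(classifyPage(400, 40)); // "slow"  (10x the reference)
```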

    And this leads us to another good point: we have to keep in mind that these results stand here without comparison. There are some PDFs on the Internet that are so complex that there is no hope that even native PDF viewers could display them nicely and quickly. The slowest tested PDF is an incredibly detailed vector map of the public transport system of Lisbon. Try to open it in Adobe Reader – it’s not fun!


    From these results we concluded that PDF.js is a very valid candidate to be used as the default PDF viewer in the Opera Browser. There is still a lot of work to do to integrate PDF.js nicely into it, but we are working right now on integrating it behind an experimental flag (BTW: There is an extension that adds PDF.js with the default Mozilla viewer. The “nice” integration I am talking about would be deeper and include a brand new viewer). Thanks Mozilla! We are looking forward to working on PDF.js together with you guys!

    PS: Both the code of the computational system and the results are publicly available. Have a look and tell us if you find them useful!

    PPS: If anybody works at a big search engine company and could give me a list with the actual 10k most used PDFs, that would be awesome :)

    Appendix: What’s next?

    The corpus and the computational framework I described could be used to do all kinds of interesting things. In the next step, we hope to classify PDFs by the font formats, image formats and the like that they use, so you can quickly find PDFs to test a new feature with. We also want to look at which drawing instructions are used with which frequency in the PostScript, so we can better optimise for the very common ones, like we did with HTML in browsers. Let’s see what we can actually do ;)

  4. Introducing the Canvas Debugger in Firefox Developer Tools

    The Canvas Debugger is a new tool we’ll be demoing at the Game Developers Conference in San Francisco. It’s a tool for debugging animation frames rendered on a Canvas element. Whether you’re creating a visualization, animation or debugging a game, this tool will help you understand and optimize your animation loop. It will let you debug either a WebGL or 2D Canvas context.

    Canvas Debugger Screenshot

    You can debug an animation using a traditional debugger, like our own JavaScript Debugger in Firefox’ Developer Tools. However, this can be difficult as it becomes a manual search for all of the various canvas methods you may wish to step through. The Canvas Debugger is designed to let you view the rendering calls from the perspective of the animation loop itself, giving you a much better overview of what’s happening.

    How it works

    The Canvas Debugger works by creating a snapshot of everything that happens while rendering a frame. It records all canvas context method calls. Each frame snapshot contains a list of context method calls and the associated JavaScript stack. By inspecting this stack, a developer can trace the call back to the higher level function invoked by the app or engine that caused something to be drawn.

    Certain types of Canvas context functions are highlighted to make them easier to spot in the snapshot. Quickly scrolling through the list, a developer can easily spot draw calls or redundant operations.

    Canvas Debugger Call Highlighting Detail

    Each draw call has an associated screenshot arranged in a timeline at the bottom of the screen as a “film-strip” view. You can “scrub” through this film-strip using a slider to quickly locate a draw call associated with a particular bit of rendering. You can also click a thumbnail to be taken directly to the associated draw call in the animation frame snapshot.

    Canvas Debugger Timeline Picture

    The thumbnail film-strip gives you a quick overview of the drawing process. You can easily see how the scene is composed to get the final rendering.

    Stepping Around

    You might notice a familiar row of buttons in the attached screenshot. They’ve been borrowed from the JavaScript Debugger and provide the developer a means to navigate through the animation snapshot. These buttons may change their icons at final release, but for now, we’ll describe them as they currently look.

    Canvas Debugger Buttons image

    • “Resume” – Jumps to the next draw call.
    • “Step Over” – Steps over the current context call.
    • “Step Out” – Jumps out of the animation frame (typically to the next requestAnimationFrame call).
    • “Step In” – Steps to the next non-context call in the JavaScript debugger.

    Jumping to the JavaScript debugger by “stepping in” on a snapshot function call, or via a function’s stack, allows you to add a breakpoint and instantly pause if the animation is still running. Much convenience!

    Future Work

    We’re not done. We have some enhancements to make this tool even better.

    • Add the ability to inspect the context’s state at each method call, highlighting the differences in state between calls.
    • Measure the time spent in each draw call. This will readily show expensive canvas operations.
    • Make it easier to know which programs and shaders are currently in use at each draw call, allowing you to jump to the Shader Editor and tinker with shaders in real time. Better linkage to the Shader Editor in general.
    • Inspect hit regions by either drawing individual regions separately, colored differently by id, or showing the hit region id of a pixel when hovering over the preview panel with the mouse.

    And we’re just getting started. The Canvas Debugger should be landing in Firefox Nightly any day now. Watch this space for news of its landing and more updates.

  5. Halloween Artist

    A while back, I made a little toy that simulates carving pumpkins. It was during that narrow window when the WebOS-running TouchPad was new and hot.

    Since then, web browsers have grown up a lot, and nowadays Mozilla is executing the vision of a browser-based operating system with Firefox OS. In any case, I’ve been digging back and dusting off some of my old apps. When you get your app running on Firefox OS, you don’t just port it to yet another device – you port it to the web. So now, Halloween Artist runs on nearly anything, including those awesome (and affordable!) Firefox phones that are starting to spring up everywhere.

    The platform-agnostic web app:
    On the Firefox Marketplace:

    Play around with it before reading on, if Halloween Artist is new to you.

    But enough history; the point of this post is to dive into the jack-o-lantern guts and talk about how the program actually works!


    Before we begin, some links!

    Let’s go already!

    The first step is to get ourselves a pumpkin image in the background, and a canvas layered on top. This canvas will track mouse and touch events and let the user trace out shapes.

    Next, we need an “inside” image that will show through the carved shapes. Over that, we’ll draw the pumpkin but with the user-drawn parts cut out (made transparent). As luck would have it, the canvas API has some handy compositing modes that are perfect for these tasks. The main operation we need is “source-out”. Keep the destination image, except where it intersects with the source shape. Then it’s just a matter of doing a normal, source-over composite.

    var face = document.getElementById("draw");
    var faceCtx = face.getContext("2d");
    var glow = document.getElementById("glow");
    var dest = document.getElementById("bg");
    var destCtx = dest.getContext("2d");
    var img = document.getElementById("pumpkin");
    dest.width = dest.width;  // reset the destination canvas
    faceCtx.globalCompositeOperation = "source-out";
    // cut the user-drawn shapes out of the pumpkin (to face)
    faceCtx.drawImage(img, 0, 0, img.width, img.height,
                      0, 0, size, size);
    // draw glowing background (to dest)
    destCtx.drawImage(glow, 0, 0, glow.width, glow.height,
                      0, 0, size, size);
    // apply the face
    destCtx.globalAlpha = 1;
    destCtx.globalCompositeOperation = "source-over";
    destCtx.drawImage(face, 0, 0);

    It’s a start! But to look like an actual carved pumpkin, we need to add some 3D magic to draw the inside edges. Actually, scratch that – we’re going to cheat! :^) We’ll start with a lower-resolution, slightly-blown-up pumpkin image:

    …then we’ll lighten it up using canvas’s “lighter” globalCompositeOperation and a globalAlpha value of, oh let’s say 0.5:

    …then we’ll “source-out” the face, same as we did with the foreground:

    …then shrink it a bit, center it, and draw it between the inside background and the outer face:

    We’re getting there! But depending on the shape, our corners might not look very convincing.

    Fortunately, all this cheating we’ve been doing – these composite operations and scaling – it’s all very fast. Even on mobile browsers. Let’s turn up the cheating to maximum and draw that middle layer in a loop, shrinking it less each time.

    …While we’re at it, each step could lighten up the current layer to a lesser degree than the previous (more “inner”) one. And while we’re at that, let’s lighten it using a more yellow color; our first pass looks a little pinkish.

    // build pumpkin by layer and draw it shrunk, inner to outer (to dest)
    var i;
    var darken = 0.4;
    for(i = 56; i > 0; i -= 4) {
        // mix a color for the layer
        scratchCtx.globalCompositeOperation = "source-over";
        scratchCtx.globalAlpha = 1;
        scratchCtx.drawImage(glow, 0, 0, glow.width, glow.height,
                             0, 0, 768, 768);       // bright glow...
        scratchCtx.globalAlpha = 0.3;
        scratchCtx.drawImage(flick,                 // light...
                             0, 0, 16, 16, 0, 0, 768, 768);
        scratchCtx.globalAlpha = darken;
        scratchCtx.drawImage(flesh,                 // darken with flesh
                             0, 0, 256, 256, -8, 0, 768 + 16, 768);
        darken += 0.02;                             // ...more each time
        // cut out the face
        // NOTE: "face" is already the outer layer.
        //       we want to copy its alpha mask, so "-in" instead of "-out".
        scratchCtx.globalCompositeOperation = "destination-in";
        scratchCtx.globalAlpha = 1;
        scratchCtx.drawImage(face, 0, 0);
        // draw layer
        destCtx.drawImage(scratch, 0, 0, 768, 768,
                          i, i, 768 - (i * 2), 768 - (i * 2));
    }

    By putting that middle-layer step inside a loop and making a few tweaks, we get much more realistic edges:

    So, we draw a whole bunch of these middle layers, only to draw right over most of it during the next pass. It’s a bit wasteful if you think about it that way, but consider how much simpler this is than trying to simulate all these arbitrary cut-out surfaces “for real”. Instead, this code builds little bits of pumpkin shell, one layer at a time, from the inside out. It’s a fairly elegant illusion if I do say so myself; we can make a passably-realistic image with a sense of depth, without actually doing any complex calculations or intensive processing.
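    The geometry of that inside-out pass can be computed without a canvas at all. Here is a sketch of the rectangles covered by the loop above (i from 56 down to 4, in steps of 4, on a 768-pixel canvas):

```javascript
// Each layer is drawn inset by i pixels on every side, so deeper
// layers (larger i) are smaller and get painted first.
function layerRects(size, start, step) {
  var rects = [];
  for (var i = start; i > 0; i -= step) {
    rects.push({ x: i, y: i, w: size - i * 2, h: size - i * 2 });
  }
  return rects;
}

var rects = layerRects(768, 56, 4);
console.log(rects.length); // 14 layers
console.log(rects[0].w);   // 656 (deepest, most shrunken layer)
console.log(rects[13].w);  // 760 (outermost layer, i = 4)
```

    Fourteen slightly overlapping layers is all it takes to fake the pumpkin shell’s thickness.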

    Adding polish

    Warning: more history ahead. I’ll try to keep it brief and on-point. :^)

    Figuring out how “deep” to start (how much to shrink), and how many steps to draw in between, all while keeping the “carve” function reasonably fast, was one of those fun bits of experimentation and compromise. On a high-resolution display, there can still be artifacts if you draw very steep, jagged shapes. But as far as bang per buck, compatibility with low-end devices, and the 99%-of-shapes use cases, I’m pretty pleased.

    I wanted this to be pick-up-and-play friendly, so every day I’d load the latest version onto my TouchPad and hand it to my coworkers, giving them no instructions.

    Early on, it was suggested that the carved image should flicker as if the candle inside were burning unevenly. Easy! The carve function now produces two images, the normal one and a slightly brighter version, which is positioned (using CSS) right over the main one. It fades in and out using a randomized timeout and CSS animation on the “opacity” property. A few minutes of polish, and the illusion was even better.

    .fade {
        transition: opacity 0.5s linear;
        -moz-transition: opacity 0.5s linear;
        -webkit-transition: opacity 0.5s linear;
    }
    .fade.out {
        opacity: 0;
    }

    function animateFlicker() {
        var flicker = document.getElementById("flicker");
        if(HA.flickerTimer) {
            clearTimeout(HA.flickerTimer);
            HA.flickerTimer = null;
        }
        if(!HA.settings.flicker) {
            return;
        }
        if(flicker.className === "fade out") {
            flicker.className = "fade";
        } else {
            flicker.className = "fade out";
        }
        HA.flickerTimer = setTimeout(animateFlicker, (Math.random() * 1000));
    }

    I forget who, but one coworker went right for the “Carve” button before drawing anything. Natural enough instinct. So, I added some logic to see if any shapes had been drawn yet and, if not, give the user some quick instructions. I’m really glad I caught that, because in showing the app to more people later, about a quarter did the same thing. Much better to show a hint popup than have users wonder why nothing’s happening.

    What if the user drew outside the pumpkin? All kinds of goofy artifacts, that’s what. So, I used that handy canvas compositing and masked the user’s input before carving. As a nice side effect, you can now carve all the way out to the edge of a pumpkin, as though you’d chopped it in half.

    This solution led to another problem. When people realized they could recklessly carve giant holes, they’d see the empty, glowing inside surface of the pumpkin. Where was the light coming from?

    So, between drawing the background and drawing the scaled flesh layers, I dropped in a candle. I actually took some photos of a nice white candle and GIMPed it into the shape of my blocky “penduin” avatar. May as well include some kind of signature in this program, eh? I made two different versions, one for each of the flicker-fading images, and made the flame a bit bigger on the brighter version. It gives a nice little touch of animation, and adds a bit more to the illusion.

    Then of course there was the less exciting stuff. Take out those nasty hard-coded values and make it work at any screen size. Make sure touch and mouse support both work as expected. Rearrange the buttons if the screen is small and they’d be in the way. All that jazz.

    Code on GitHub – Happy Halloween!

    Well, that about covers it. You’re free and encouraged to poke around in the source if you’d like to learn more or add your own tweaks. (Mind the mess; I left some experimental tweaks and previous-attempts in there, commented out.) Halloween Artist is GPLv3, and since you might not feel like scraping down the source from the web app itself (and since my lousy DSL might be down at any given moment) I’ve made it available on GitHub.

    Have fun, and Happy Halloween! :^)

  6. Building a simple paint game with HTML5 Canvas and Vanilla JavaScript

    When the talk is about HTML5 Canvas you mostly hear about libraries to make it work for legacy browsers, performance tricks like off-screen Canvas and ways to draw and animate sprites and tiles. This is only one part of Canvas, though. On the lowest level, Canvas is a way to manipulate pixels of a portion of the screen. Either via a painting API or by directly manipulating the pixel array (which by the way is a typed array and thus performs admirably).

    Using this knowledge, I thought it’d be fun to create a small game I saw in an ad for a tablet: a simple game for kids to paint letters. The result is a demo for FirefoxOS called Letterpaint which will show up soon on the Marketplace. The code is on GitHub.


    The fun thing about building Letterpaint was that I took a lot of shortcuts. Painting on a canvas is easy (and gets much easier using Jacob Seidelin’s Canvas cheatsheet), but at first glance, making sure that users stay inside a certain shape is tricky. So is finding out how much of the letter has been filled in. However, by going back to seeing a canvas as a collection of pixels, I found a simple way to make this work:

    • When I paint the letter, I read out the number of pixels that have the colour of the letter.
    • When you click the mouse button or touch the screen, I test the colour of the pixel at the current mouse/finger position.
    • When that pixel is not transparent, you are inside the letter, as the main canvas is transparent by default.
    • When you release the mouse or stop touching the screen, I compare the number of pixels in the paint colour with the number in the letter.

    Simple, isn’t it? And it is all possible with two re-usable functions:

    /*
      getpixelcolour(x, y)
      returns the rgba value of the pixel at position x and y
    */
    function getpixelcolour(x, y) {
      var pixels = cx.getImageData(0, 0, c.width, c.height);
      var index = ((y * (pixels.width * 4)) + (x * 4));
      return {
        r:[index],
        g:[index + 1],
        b:[index + 2],
        a:[index + 3]
      };
    }

    /*
      getpixelamount(r, g, b)
      returns the amount of pixels in the canvas of the colour r, g, b
    */
    function getpixelamount(r, g, b) {
      var pixels = cx.getImageData(0, 0, c.width, c.height);
      var all =;
      var amount = 0;
      for (i = 0; i < all; i += 4) {
        if ([i] === r &&
  [i + 1] === g &&
  [i + 2] === b) {
          amount++;
        }
      }
      return amount;
    }

    Add some painting functions to that and you have the game done. You can see a step by step guide of this online (and pull the code from GitHub) and there is a screencast describing the tricks and decisions on YouTube.
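    To see the pixel math in isolation, here is the same indexing and counting logic run against a plain array standing in for ImageData (a made-up 2x2 RGBA image, so no live canvas is required):

```javascript
// A 2x2 image, four bytes (r, g, b, a) per pixel, row by row:
var width = 2;
var data = [
  255, 0, 0, 255,   0, 0, 0, 0,     // row 0: red, transparent
  0, 0, 0, 0,       255, 0, 0, 255  // row 1: transparent, red
];

// Same index formula as getpixelcolour(): four array slots per pixel.
function pixelAt(data, width, x, y) {
  var index = ((y * (width * 4)) + (x * 4));
  return { r: data[index], g: data[index + 1],
           b: data[index + 2], a: data[index + 3] };
}

// Same loop as getpixelamount(): step through the array pixel by pixel.
function countColour(data, r, g, b) {
  var amount = 0;
  for (var i = 0; i < data.length; i += 4) {
    if (data[i] === r && data[i + 1] === g && data[i + 2] === b) {
      amount++;
    }
  }
  return amount;
}

console.log(pixelAt(data, width, 1, 1).a); // 255 (bottom-right is opaque red)
console.log(countColour(data, 255, 0, 0)); // 2 red pixels
```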

    The main thing to remember here is that it is very tempting to reach for libraries and tools to get things done quickly, but that it could mean that you think too complex. Browsers have very powerful tools built in for us, and in many cases it means you just need to be up-to-date and fearless in trying something “new” that comes out-of-the-box.

  7. Koalas to the Max – a case study

    One day I was browsing reddit when I came across this peculiar link posted on it:

    The game was addictive and I loved it, but I found several design elements flawed. Why did it start with four circles and not one? Why was the color split so jarring? Why was it written in Flash? (What is this, 2010?) Most importantly, it was missing a golden opportunity: splitting into dots that form an image instead of just random colors.

    Creating the project

    This seemed like a fun project, and I reimplemented it (with my design tweaks) using D3 to render with SVG.

    The main idea was to have the dots split into the pixels of an image, with each bigger dot having the average color of the four dots contained inside of it recursively, and allow the code to work on any web-based image.
    The code sat in my ‘Projects’ folder for some time; Valentine’s Day was around the corner and I thought it could be a cute gift. I bought the domain name, found a cute picture, and thus “ (KttM)” was born.
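    That averaging rule is simple to state in code. Here is a sketch (an illustrative helper, not KttM’s exact implementation):

```javascript
// A parent dot's colour is the channel-wise mean of its four children.
function averageColour(colours) {
  var sum = { r: 0, g: 0, b: 0 };
  colours.forEach(function (c) {
    sum.r += c.r; sum.g += c.g; sum.b += c.b;
  });
  var n = colours.length;
  return {
    r: Math.round(sum.r / n),
    g: Math.round(sum.g / n),
    b: Math.round(sum.b / n)
  };
}

var parent = averageColour([
  { r: 255, g: 0, b: 0 }, { r: 0, g: 0, b: 0 },
  { r: 255, g: 0, b: 0 }, { r: 0, g: 0, b: 0 }
]);
console.log(parent); // { r: 128, g: 0, b: 0 }
```

    Applied recursively from the finest dots upward, this gives every level of the splitting hierarchy a colour that hints at the hidden image.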


    While the user-facing part of KttM has changed little since its inception, the implementation has been revisited several times to incorporate bug fixes, improve performance, and bring support to a wider range of devices.

    Notable excerpts are presented below and the full code can be found on GitHub.

    Load the image

    If the image is hosted on the same domain, then loading it is as simple as calling new Image():

    var img = new Image();
    img.onload = function() {
     // Awesome rendering code omitted
    };
    img.src = the_image_source;

    One of the core design goals for KttM was to let people use their own images as the revealed image. Thus, when the image is on an arbitrary domain, it needs special consideration. Given the same-origin restrictions, there needs to be an image proxy that can channel the image from the arbitrary domain, or send the image data as a JSONP call.

    Originally I used a library called $.getImageData but I had to switch to a self-hosted solution after KttM went viral and brought the $.getImageData App Engine account to its limits.

    Extract the pixel data

    Once the image loads, it needs to be resized to the dimensions of the finest layer of circles (128 x 128) and its pixel data can be extracted with the help of an offscreen HTML5 canvas element.

    koala.loadImage = function(imageData) {
     // Create a canvas for image data resizing and extraction
     var canvas = document.createElement('canvas').getContext('2d');
     // Draw the image into the corner, resizing it to dim x dim
     canvas.drawImage(imageData, 0, 0, dim, dim);
     // Extract the pixel data from the same area of canvas
     // Note: This call will throw a security exception if imageData
     // was loaded from a different domain than the script.
     return canvas.getImageData(0, 0, dim, dim).data;
    };

    dim is the number of smallest circles that will appear on a side. 128 seemed to produce nice results but really any power of 2 could be used. Each circle on the finest level corresponds to one pixel of the resized image.

    Build the split tree

    Resizing the image returns the data needed to render the finest layer of the pixelization. Every successive layer is formed by grouping neighboring clusters of four dots together and averaging their color. The entire structure is stored as a (quaternary) tree so that when a circle splits it has easy access to the dots from which it was formed. During construction each subsequent layer of the tree is stored in an efficient 2D array.
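    The snippets below rely on an array2d helper whose cells are read and written through the returned function itself; the helper is not shown in the article, so here is a minimal, hypothetical sketch of how it could work:

```javascript
// Hypothetical sketch of the array2d helper assumed by the tree-building
// code: calling the returned function with (x, y) reads a cell, and calling
// it with (x, y, value) writes the cell, backed by a flat array.
function array2d(w, h) {
  var cells = new Array(w * h);
  return function (x, y, value) {
    if (arguments.length === 3) {
      cells[y * w + x] = value;
    }
    return cells[y * w + x];
  };
}
```

    This matches how the code calls finestLayer(xi, yi, circle) to store a circle and prevLayer(2 * xi, 2 * yi) to read one back.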

    // Got the data now build the tree
    var finestLayer = array2d(dim, dim);
    var size = minSize;
    // Start off by populating the base (leaf) layer
    var xi, yi, t = 0, color;
    for (yi = 0; yi < dim; yi++) {
     for (xi = 0; xi < dim; xi++) {
       color = [colorData[t], colorData[t+1], colorData[t+2]];
       finestLayer(xi, yi, new Circle(vis, xi, yi, size, color));
       t += 4;
     }
    }
    Start by going through the color data extracted from the image and creating the finest circles.

    // Build up successive nodes by grouping
    var layer, prevLayer = finestLayer;
    var c1, c2, c3, c4, currentLayer = 0;
    while (size < maxSize) {
     dim /= 2;
     size = size * 2;
     layer = array2d(dim, dim);
     for (yi = 0; yi < dim; yi++) {
       for (xi = 0; xi < dim; xi++) {
         c1 = prevLayer(2 * xi    , 2 * yi    );
         c2 = prevLayer(2 * xi + 1, 2 * yi    );
         c3 = prevLayer(2 * xi    , 2 * yi + 1);
         c4 = prevLayer(2 * xi + 1, 2 * yi + 1);
         color = avgColor(c1.color, c2.color, c3.color, c4.color);
         c1.parent = c2.parent = c3.parent = c4.parent = layer(xi, yi,
           new Circle(vis, xi, yi, size, color, [c1, c2, c3, c4], currentLayer, onSplit));
       }
     }
     splitableByLayer.push(dim * dim);
     splitableTotal += dim * dim;
     prevLayer = layer;
     currentLayer++;
    }

    After the finest circles have been created, the subsequent circles are each built by merging four dots and doubling the radius of the resulting dot.

    Render the circles

    Once the split tree is built, the initial circle is added to the page.

    // Create the initial circle
    Circle.addToVis(vis, [layer(0, 0)], true);

    This employs the Circle.addToVis function that is used whenever the circle is split. The second argument is the array of circles to be added to the page.

    Circle.addToVis = function(vis, circles, init) {
     var circle = vis.selectAll('.nope').data(circles)
       .enter().append('circle');
     if (init) {
       // Setup the initial state of the initial circle
       circle = circle
         .attr('cx',   function(d) { return d.x; })
         .attr('cy',   function(d) { return d.y; })
         .attr('r', 4)
         .attr('fill', '#ffffff');
     } else {
       // Setup the initial state of the opened circles
       circle = circle
         .attr('cx',   function(d) { return d.parent.x; })
         .attr('cy',   function(d) { return d.parent.y; })
         .attr('r',    function(d) { return d.parent.size / 2; })
         .attr('fill', function(d) { return String(d.parent.rgb); })
         .attr('fill-opacity', 0.68);
     }
     // Transition to the respective final state
     circle.transition()
       .attr('cx',   function(d) { return d.x; })
       .attr('cy',   function(d) { return d.y; })
       .attr('r',    function(d) { return d.size / 2; })
       .attr('fill', function(d) { return String(d.rgb); })
       .attr('fill-opacity', 1)
       .each('end',  function(d) { d.node = this; });
    };

    Here the D3 magic happens. The circles in circles are added (.append('circle')) to the SVG container and animated to their position. The initial circle is given special treatment as it fades in from the center of the page while the others slide over from the position of their “parent” circle.

    In typical D3 fashion circle ends up being a selection of all the circles that were added. The .attr calls are applied to all of the elements in the selection. When a function is passed in it shows how to map the split tree node onto an SVG element.

    .attr('cx', function(d) { return d.parent.x; }) would set the X coordinate of the center of the circle to the X position of the parent.

    The attributes are set to their initial state then a transition is started with .transition() and then the attributes are set to their final state; D3 takes care of the animation.

    Detect mouse (and touch) over

    The circles need to split when the user moves the mouse (or finger) over them; to do this efficiently, the regular structure of the layout can be taken advantage of.

    The algorithm described below vastly outperforms attaching native “onmouseover” event handlers to every circle.

    // Handle mouse events
    var prevMousePosition = null;
    function onMouseMove() {
     var mousePosition = d3.mouse(vis.node());
     // Do nothing if the mouse point is not valid
     if (isNaN(mousePosition[0])) {
       prevMousePosition = null;
       return;
     }
     if (prevMousePosition) {
       findAndSplit(prevMousePosition, mousePosition);
     }
     prevMousePosition = mousePosition;
    }
    // Initialize interaction
    d3.select('body')
     .on('mousemove.koala', onMouseMove);

    First, a body-wide mousemove event handler is registered. The event handler keeps track of the previous mouse position and calls the findAndSplit function, passing it the line segment traveled by the user’s mouse.

    function findAndSplit(startPoint, endPoint) {
     var breaks = breakInterval(startPoint, endPoint, 4);
     for (var i = 0; i < breaks.length - 1; i++) {
       var sp = breaks[i],
           ep = breaks[i+1];
       var circle = splitableCircleAt(ep);
       if (circle && circle.isSplitable() && circle.checkIntersection(sp, ep)) {
         circle.split();
       }
     }
    }

    The findAndSplit function splits a potentially large segment traveled by the mouse into a series of small segments (not bigger than 4px long). It then checks each small segment for a potential circle intersection.
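    The breakInterval helper itself is not listed; here is a plausible sketch, assuming points are [x, y] arrays and the third argument is the maximum segment length in pixels:

```javascript
// Hypothetical sketch of breakInterval: returns evenly spaced points along
// the segment from start to end, so that consecutive points are at most
// maxLength pixels apart.
function breakInterval(start, end, maxLength) {
  var dx = end[0] - start[0];
  var dy = end[1] - start[1];
  var dist = Math.sqrt(dx * dx + dy * dy);
  var steps = Math.max(1, Math.ceil(dist / maxLength));
  var points = [];
  for (var i = 0; i <= steps; i++) {
    points.push([start[0] + (dx * i) / steps, start[1] + (dy * i) / steps]);
  }
  return points;
}
```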

    function splitableCircleAt(pos) {
     var xi = Math.floor(pos[0] / minSize),
         yi = Math.floor(pos[1] / minSize),
         circle = finestLayer(xi, yi);
     if (!circle) return null;
     while (circle && !circle.isSplitable()) circle = circle.parent;
     return circle || null;
    }

    The splitableCircleAt function takes advantage of the regular structure of the layout to find the one circle that the segment ending in the given point might be intersecting. This is done by finding the leaf node of the closest fine circle and traversing up the split tree to find its visible parent.

    Finally the intersected circle is split (circle.split()).

    Circle.prototype.split = function() {
     if (!this.isSplitable()) return;
     delete this.node;
     Circle.addToVis(this.vis, this.children);
    };

    Going viral

    Sometime after Valentine’s Day I met with Mike Bostock (the creator of D3) regarding D3 syntax and showed him KttM, which he thought was tweet-worthy – it was, after all, an early example of a pointless artsy visualization done with D3.

    Mike has a Twitter following, and his tweet, which was retweeted by some members of the Google Chrome development team, started getting some momentum.

    Since the koala was out of the bag, I decided it might as well be posted on reddit. I posted it on the programming subreddit with the title “A cute D3 / SVG powered image puzzle. [No IE]” and it got a respectable 23 points, which made me happy. Later that day it was reposted to the funny subreddit with the title “Press all the dots :D” and was upvoted to the front page.

    The traffic went exponential. Reddit was a spike that quickly dropped off, but people picked up on it and spread it to Facebook, StumbleUpon, and other social media outlets.

    The traffic from these sources decays over time but every several months KttM gets rediscovered and traffic spikes.

    Such irregular traffic patterns underscore the need to write scalable code. Conveniently KttM does most of the work within the user’s browser; the server needs only to serve the page assets and one (small) image per page load allowing KttM to be hosted on a dirt-cheap shared hosting service.

    Measuring engagement

    After KttM became popular I was interested in exploring how people actually interacted with the application. Did they even realize that the initial single circle can split? Does anyone actually finish the whole image? Do people uncover the circles uniformly?

    At first the only tracking on KttM was the vanilla Google Analytics code that tracks pageviews. This quickly became underwhelming. I decided to add custom event tracking for when an entire layer was cleared and when a percentage of circles was split (in increments of 5%). The event value is set to the time in seconds since page load.
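    The exact tracking calls are not shown in the article; purely as an illustration, the payload for such a milestone event could be assembled like this (the category and label names here are hypothetical, and the classic _gaq API of that era is assumed):

```javascript
// Hypothetical helper: build a classic Google Analytics _trackEvent payload
// for a "percent cleared" milestone; the event value is seconds since load.
function clearEventPayload(percent, msSinceLoad) {
  return ['_trackEvent', 'progress', percent + '% cleared',
          undefined, Math.round(msSinceLoad / 1000)];
}
// In the page it would be sent with something like:
// _gaq.push(clearEventPayload(5, - pageLoadTime));
```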

    Such event tracking offers both insights and room for improvement. The 0% clear event is fired when the first circle is split, and the average time for that event to fire seems to be 308 seconds (5 minutes), which does not sound reasonable. In reality this happens when someone opens KttM and leaves it open for days; if a circle is then split, the event value is huge and skews the average. I wish GA had a histogram view.

    Even basic engagement tracking sheds a vast amount of light on how far people get through the game. These metrics proved very useful when the mouse-over algorithm was upgraded: after several days of running the new algorithm, I could see that people were finishing more of the puzzle before giving up.

    Lessons learned

    While making, maintaining, and running KttM I learned several lessons about using modern web standards to build web applications that run on a wide range of devices.

    Some native browser utilities give you 90% of what you need, but to get your app behaving exactly as you want, you need to reimplement them in JavaScript. For example, the SVG mouseover events could not cope well with the number of circles and it was much more efficient to implement them in JavaScript by taking advantage of the regular circle layout. Similarly, the native base64 functions (atob, btoa) are not universally supported and do not work with unicode. It is surprisingly easy to support the modern Internet Explorers (9 and 10) and for the older IEs Google Chrome Frame provides a great fallback.

    Despite the huge improvements in standard compliance it is still necessary to test the code on a wide variety of browsers and devices, as there are still differences in how certain features are implemented. For example, in IE10 running on the Microsoft Surface html {-ms-touch-action: none; } needed to be added to allow KttM to function correctly.

    Adding tracking and taking time to define and collect the key engagement metrics allows you to evaluate the impact of changes that get deployed to users in a quantitative manner. Having well defined metrics allows you to run controlled tests to figure out how to streamline your application.

    Finally, listen to your users! They pick up on things that you miss – even if they don’t know it. The congratulations message that appears on completion was added after I received complaints that it was not clear when a picture was fully uncovered.

    All projects are forever evolving and if you listen to your users and run controlled experiments then there is no limit to how much you can improve.

  8. Firefox Development Highlights – Per Window Private Browsing & Canvas' globalCompositeOperation new values

    On a regular basis, we like to highlight the latest features in Firefox for developers, as part of our Bleeding Edge series, and most examples only work in Firefox Nightly (and could be subject to change).

    Per Window Private Browsing

    Private browsing is very useful for web developers. A new private session doesn’t include existing persistent data (cookies and HTML5 storage). It’s convenient if we want to test a website that stores data (login and persistent information) without clearing this cached data every time, or if we want to log in to a service with 2 different users.

    Until now, entering Private Browsing in Firefox closed the existing session and started a new one. Now, Firefox keeps the current session and opens a new private window. You can test it in Firefox Nightly (to be Firefox 20). There’s still some frontend work to do, but the feature works.

    Canvas’ globalCompositeOperation new values

    The globalCompositeOperation canvas property lets you define how you want canvas to draw images over an existing image. By default, when canvas draws an image over existing pixels, the new image simply replaces those pixels. But there are other ways to mix pixels. For example, if you set ctx.globalCompositeOperation = "lighter", pixel color values are added, which creates a different visual effect.

    There are several effects available, and more on them can be found in globalCompositeOperation on MDN.

    Rik Cabanier from Adobe has extended the Canvas specification to include more effects. He has also implemented these new effects in Firefox Nightly. These new effects are called “blend modes”. These are more advanced ways of mixing colors.

    Please take a look at the list of these new blending modes.

    And here’s an example of how to use them:

    JS Bin demo.

    If you don’t use Firefox Nightly, here is a (long) screenshot:


  9. Comic Gen – a canvas-run comic generator

    The first time I wanted to participate in Dev Derby was the May 2012 challenge, where the rule was that you should use WebSockets. At that time I thought I could use NodeJS and SocketIO. But time kept running and I ended up not having any cool ideas for an app.

    Since then I have been just watching the monthly challenges, until the December 2012 one: offline apps!

    Once again I got very excited to do something for Dev Derby, especially because I think the App Cache and Local Storage APIs are just amazing, and the main APIs to use when thinking of offline apps.

    Anyway, after spending 30 minutes wondering how amazing offline apps can be, I decided it was time to think of something for the challenge.

    As a husband and a professional, it’s hard to find the time I wanted to build cool stuff for this kind of challenge. So I wondered if I could reuse a demo I had already built a couple of months earlier and add the necessary things to make it available offline. Looking at the demos I had done, the one that best suited the challenge was a comic generator, which I very creatively called Comic Gen.

    Comic Gen (source code) is capable of adding and manipulating characters on a screen, and it gives you a chance to export the results to a PNG file.

    Ragaboom Framework

    Canvas was the first HTML5 API that came to my attention. It is very cool to have the ability to draw anything, import images, write text and interact with them all. The possibilities seem limitless.

    I started to make my first experiments with Canvas and it didn’t take me long to realize I could build a framework to ease things. That was when I created the Ragaboom framework.

    At that time it was the most advanced code I had ever written with JavaScript. Looking at it today I realize how complex and hard to use it is, and how much I could improve it… Well, that’s something I still plan to do, but responsibilities and priorities prevent me from doing so right now.

    So, I had to test Ragaboom somehow. Then I created a simple game called CATcher. The objective of the game is to catch the falling kitties with a basket until the time runs out. The player earns 15 more seconds for every 100 points.

    My wife helped me with the drawings and she said she loved the game. Yeah, right… She also says I’m handsome…

    CATcher and Ragaboom were two big accomplishments for me. I submitted the game to be published on some HTML5 demo websites and, surprisingly, a lot of people contacted me to talk about advanced JavaScript programming and HTML5. What I loved most was that some people had very good ideas for improving the framework!

    Why a comic generator

    After I had built Ragaboom, I started to think of a new challenge where I could use Canvas and improve the framework. At that time there was (and there still is) much discussion about the future of HTML5, and people argued about whether it was possible for HTML5 to replace Flash.

    That was the kind of challenge I was looking for: trying to reproduce something with Canvas that already existed in Flash. I had seen a couple of comic generators built with Flash; some were very simple, and some were very well built.

    At that moment I decided what I wanted to do. I needed someone to draw the characters for me, and my friend Ana Zugaib, a very talented artist, made them for me! Her work was simply wonderful!

    Planning the demo

    OK, I had a subject, I had a reason; now I needed to plan what should be created. There’s not much to think about when building a comic generator: all I needed was a toolbox and a screen where the characters would be placed.

    I wanted the user to be able to choose different screen sizes, and they should have the option to export their work to an image or something like that.

    The objects on the screen should be able to be manipulated: moved, resized, inverted and removed. At that moment something came to my mind: how the hell am I gonna do all that?

    How the hell to do all that

    Oh well, if I wanted a challenge, I had got one. Luckily, some of the features I needed were already implemented in the Ragaboom framework, or already present in the DOM API.

    So I had to focus on the stuff they didn’t offer me:

    • How to invert the images horizontally;
    • How to resize the screen without losing the current content;
    • How to export the content to a PNG file;

    Initializing the objects

    First of all, I needed to create a Canvas object and its context, and create a big white rectangle on it. That would be my comic screen.

    var c = $('#c')[0];
    var ctx = c.getContext('2d');
    var scene = new RB.Scene(c);
    var w = c.width;
    var h = c.height;
    scene.add( scene.rect(w, h, 'white') );
    scene.update();

    The scene.add method (from my framework) adds a new object to the screen, as you can see above, where a white rectangle was added. Next the screen is updated to show all objects drawn so far. Internally, the framework keeps an array of the objects that should be drawn to the screen, storing properties like their x and y positions and their type (rectangle, circle, image, etc).
    The scene.update method iterates over that array, repainting every object in the Canvas area.
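    That bookkeeping can be sketched roughly like this (hypothetical names; the real Ragaboom code is more involved):

```javascript
// Rough sketch of the scene bookkeeping described above: add() records each
// drawable object, update() replays them all onto the drawing context.
function Scene(ctx) {
  this.ctx = ctx;
  this.objects = [];
}
Scene.prototype.add = function (obj) {
  this.objects.push(obj);
};
Scene.prototype.update = function () {
  var ctx = this.ctx;
  this.objects.forEach(function (obj) {
    obj.draw(ctx); // each object knows how to repaint itself
  });
};
```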

    For the toolbar I created two arrays. The first kept the URL for each toolbar icon, and the second stored the URL for the actual image that should be placed on the screen. The elements of the toolbar icon array corresponded to the actual images array: the pirate icon in the first array was at the same position as the actual pirate image in the second array, and so on.

    I then had to iterate over the arrays to build buttons, linking each of them to the corresponding image. After that, the result of the iteration was appended to a DOM element.

    //toolbar icons array
    var miniUrls = ["sm01_mini.png", "sm02_mini.png", "sm03_mini.png", "sushi_mini.png", "z01_mini.png", "z02_mini.png", "z03_mini.png", "z04_mini.png", "balao.png", "toon01_mini.png", "toon02_mini.png", "toon03_mini.png", "toon04_mini.png", "toon05_mini.png", "toon06_mini.png"];
    //actual images array
    var toonUrls = ["sm01.png", "sm02.png", "sm03.png", "sushi.png", "z01.png", "z02.png", "z03.png", "z04.png", "balao.png", "toon01.png", "toon02.png", "toon03.png", "toon04.png", "toon05.png", "toon06.png"];
    //building the toolbar
    cg.buildMinis = function() {
        var buffer = '';
        var imgString = "<img src='toons/IMG_URL' class='rc mini'>";
        var link = "<a href=\"javascript:cg.createImage('toons/IMG_URL')\">";
        for(var i = 0; i < miniUrls.length; i++) {
            buffer += link.replace(/IMG_URL/, toonUrls[i]);
            buffer += imgString.replace(/IMG_URL/, miniUrls[i]) + '</a>';
        }
        $('#menuContainer').append(buffer);
        $('#menuContainer').append( $('#instructs').clone() );
    };

    Adding objects to the screen

    When you are loading an image via JavaScript it is important to keep in mind that the browser will download the image asynchronously. That means that you cannot be sure that the image will be loaded and ready for use yet.

    To solve this, one technique is to use the onload handler of the Image object:

    var img = new Image();
    img.onload = function() {
        alert('the image download was completed!');
    };
    img.src = "my_image.png";

    The Ragaboom framework uses this same trick. That’s why the image method’s second parameter is a callback function, which is fired when the image is ready for use.

    cg.createImage = function(url) {
        scene.image(url, function(obj) {
            obj.draggable = true;
            obj.setXY(30, 30);
            obj.onmousedown = function(e) {
                currentObj = obj;
                scene.zIndex(obj, 1);
            };
            currentObj = obj;
            scene.update();
        });
    };

    In the example above the image is stored in the objects array, made draggable and positioned at coordinates x=30, y=30. Then a mousedown event is attached to the object, setting it as the current object. At the end, the canvas is updated.


    To increase the size of objects I simply added a small number of pixels to both the width and the height of the object. The same was done to decrease the size of objects, by subtracting a number of pixels. I only had to handle situations where the width and height dropped below zero, to prevent bugs.

    In order to offer smooth and uniform zooming I decided to use 5% of the current width and height instead of a fixed number of pixels.

    var w = obj.w * 0.05;
    var h = obj.h * 0.05;

    The complete “zoom in” function is like this:

    cg.zoomIn = function(obj) {
        var w = obj.w * 0.05;
        var h = obj.h * 0.05;
        obj.w += w;
        obj.h += h;
        obj.x -= (w/2);
        obj.y -= (h/2);
    };

    Exporting a PNG file

    The Canvas object has a method called toDataURL, which returns a URL representation of the canvas as an image, according to the format specified as a parameter. Using this method, I created a variable that stored the image URL representation and opened a new browser window.

    Then I created an Image object, setting the src attribute with the value of the URL and appended it to the new window’s document.
    The user has to right-click the image and “Save as” it themselves. I know it’s not the best solution, but it was what I could come up with at the time.

    var data = c.toDataURL('png');
    var win =;
    var b = win.document.body;
    var img = new Image();
    img.src = data;
    b.appendChild(img);

    Rescaling the screen

    For screen resizing there wasn’t any inconvenience, really. After all, the Canvas object has width and height attributes, so all I had to do was set these values and the screen would be rescaled, right? You wish… When you set either the width or height attribute of a canvas object, its content is wiped and the context state is reset.

    To fix that problem I had to redraw every single object on the context after rescaling the canvas object. At that moment I realized the advantages of using a framework: it kept the information about every object and its attributes, and it did all the dirty work of redrawing every image back to the canvas context.

    c.width = w;
    c.height = h;
    scene.update(); // thanks, confusing framework

    Final considerations

    I have always liked the front-end side of programming, though it took me a long time to realize and accept that, for some reason. I found that JavaScript is a very powerful language, capable of innumerable awesome things.

    A couple years ago I thought I could never build such things as I have done nowadays. And gladly I was wrong!

    If you love to code, dedicate some time to learning new things. The web is on your side. I mean, look at MDN! You have everything you need to become an excellent developer.

    What are you waiting for to become a great dev?

  10. Firefox Development Highlights – Viewport percentage, canvas.toBlob() and WebRTC

    To keep you updated on the latest features in Firefox, here is again a blog post highlighting the most recent changes. This is part of our Bleeding Edge series, and most examples only work in Firefox Nightly (and could be subject to change).

    Viewport-percentage lengths

    Gecko now supports new length units: vh, vw, vmin and vmax. 1vh is 1% of the viewport height, and the length doesn’t depend on its container size. We can build designs that are directly proportional to the page size (think of HTML slides, for example, which are supposed to keep the same appearance regardless of the size of the page).

    vh: 1/100th of the height of the viewport.
    vw: 1/100th of the width of the viewport.
    vmin: 1/100th of the minimum value between the height and the width of the viewport.
    vmax: 1/100th of the maximum value between the height and the width of the viewport.

    Read more about CSS Viewport-percentage lengths on MDN.


    canvas.toBlob()

    A Blob object represents a file-like object of immutable, raw data. Blobs can be used by different APIs, like the File API or IndexedDB. We can create an alias URL that refers to the blob with window.URL.createObjectURL, which can be used in place of data URLs in some cases (which is better memory-wise).

    Now, a canvas element can export its content as an image blob with the toBlob() method (this replaces the non-standard mozGetAsFile function). toBlob is asynchronous:

    toBlob(callback, type) // type is "image/png" by default

    For more information, see Example: Getting a file representing the canvas on MDN.

    WebRTC in Firefox Nightly and Firefox Aurora (Firefox 18)

    To enable our WebRTC code in Firefox’s Nightly desktop build, browse to about:config and change the media.peerconnection.enabled preference to true. More WebRTC documentation on MDN, and we plan to have future blog posts about WebRTC here on Mozilla Hacks.

    Additionally, if you are interested in a steady flow of the latest Firefox highlights, you can also follow @FirefoxNightly on Twitter.