Canvas Articles

  1. Introducing the Canvas Debugger in Firefox Developer Tools

    The Canvas Debugger is a new tool we’ll be demoing at the Game Developers Conference in San Francisco. It’s a tool for debugging animation frames rendered on a Canvas element. Whether you’re creating a visualization, animation or debugging a game, this tool will help you understand and optimize your animation loop. It will let you debug either a WebGL or 2D Canvas context.

    Canvas Debugger Screenshot

    You can debug an animation using a traditional debugger, like our own JavaScript Debugger in Firefox’s Developer Tools. However, this can be difficult as it becomes a manual search for all of the various canvas methods you may wish to step through. The Canvas Debugger is designed to let you view the rendering calls from the perspective of the animation loop itself, giving you a much better overview of what’s happening.

    How it works

    The Canvas Debugger works by creating a snapshot of everything that happens while rendering a frame. It records all canvas context method calls. Each frame snapshot contains a list of context method calls and the associated JavaScript stack. By inspecting this stack, a developer can trace the call back to the higher level function invoked by the app or engine that caused something to be drawn.

    Certain types of Canvas context functions are highlighted to make them easier to spot in the snapshot. Quickly scrolling through the list, a developer can easily spot draw calls or redundant operations.
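    As a rough sketch of the recording idea (an illustration only, not Firefox’s actual implementation; the function and method names here are made up), each context method can be wrapped so that calling it appends an entry, along with its JavaScript stack, to the current frame’s snapshot:

```javascript
// Sketch: wrap the named canvas-context methods so each call is
// logged into `snapshot` together with the stack that caused it.
function recordCalls(ctx, methodNames, snapshot) {
  methodNames.forEach(function(name) {
    var original = ctx[name];
    ctx[name] = function() {
      // record the call name and the JavaScript stack that led to it
      snapshot.push({ name: name, stack: new Error().stack });
      return original.apply(ctx, arguments);
    };
  });
}
```

    Inspecting the recorded stacks is what lets a developer trace a draw call back to the higher-level app or engine function that triggered it.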

    Canvas Debugger Call Highlighting Detail

    Each draw call has an associated screenshot arranged in a timeline at the bottom of the screen as a “film-strip” view. You can “scrub” through this film-strip using a slider to quickly locate a draw call associated with a particular bit of rendering. You can also click a thumbnail to be taken directly to the associated draw call in the animation frame snapshot.

    Canvas Debugger Timeline Picture

    The thumbnail film-strip gives you a quick overview of the drawing process. You can easily see how the scene is composed to produce the final rendering.

    Stepping Around

    You might notice a familiar row of buttons in the attached screenshot. They’ve been borrowed from the JavaScript Debugger and give the developer a means to navigate through the animation snapshot. These buttons may change their icons before the final release, but for now, we’ll describe them as they currently look.

    Canvas Debugger Buttons image

    • “Resume” – Jump to the next draw call.
    • “Step Over” – Goes over the current context call.
    • “Step Out” – Jumps out of the animation frame (typically to the next requestAnimationFrame call).
    • “Step In” – Goes to the next non-context call in the JavaScript debugger.

    Jumping to the JavaScript debugger by “stepping in” on a snapshot function call, or via a function’s stack, allows you to add a breakpoint and instantly pause if the animation is still running. Much convenience!

    Future Work

    We’re not done yet. We have a number of enhancements planned to make this tool even better.

    • Add the ability to inspect the context’s state at each method call. Highlight the differences in state between calls.
    • Measure Time spent in each draw call. This will readily show expensive canvas operations.
    • Make it easier to know which programs and shaders are currently in use at each draw call, allowing you to jump to the Shader Editor and tinker with shaders in real time. Better linkage to the Shader Editor in general.
    • Inspect hit regions, by either drawing individual regions separately, colored differently by id, or showing the hit region id of a pixel when hovering over the preview panel with the mouse.

    And we’re just getting started. The Canvas Debugger should be landing in Firefox Nightly any day now. Watch this space for news of its landing and more updates.

  2. Halloween artist

    A while back, I made a little toy that simulates carving pumpkins. It was during that narrow window when the WebOS-running TouchPad was new and hot.

    Since then, web browsers have grown up a lot, and nowadays Mozilla is executing the vision of a browser-based operating system with Firefox OS. In any case, I’ve been digging back and dusting off some of my old apps. When you get your app running on Firefox OS, you don’t just port it to yet another device – you port it to the web. So now, Halloween Artist runs on nearly anything, including those awesome (and affordable!) Firefox phones that are starting to spring up everywhere.

    The platform-agnostic web app:
    On the Firefox Marketplace:

    Play around with it before reading on, if Halloween Artist is new to you.

    But enough history; the point of this post is to dive into the jack-o-lantern guts and talk about how the program actually works!


    Before we begin, some links!

    Let’s go already!

    The first step is to get ourselves a pumpkin image in the background, and a canvas layered on top. This canvas will track mouse and touch events and let the user trace out shapes.

    Next, we need an “inside” image that will show through the carved shapes. Over that, we’ll draw the pumpkin but with the user-drawn parts cut out (made transparent). As luck would have it, the canvas API has some handy compositing modes that are perfect for these tasks. The main operation we need is “source-out”. Keep the destination image, except where it intersects with the source shape. Then it’s just a matter of doing a normal, source-over composite.

    var face = document.getElementById("draw");
    var faceCtx = face.getContext("2d");
    var glow = document.getElementById("glow");
    var dest = document.getElementById("bg");
    var destCtx = dest.getContext("2d");
    var img = document.getElementById("pumpkin");
    dest.width = dest.width;  // reset the destination canvas
    // draw the pumpkin over the user's shapes, cutting them out
    faceCtx.globalCompositeOperation = "source-out";
    faceCtx.drawImage(img, 0, 0, img.width, img.height,
                      0, 0, size, size);
    // draw glowing background (to dest)
    destCtx.drawImage(glow, 0, 0, glow.width, glow.height,
                      0, 0, size, size);
    // apply the face
    destCtx.globalAlpha = 1;
    destCtx.globalCompositeOperation = "source-over";
    destCtx.drawImage(face, 0, 0);

    It’s a start! But to look like an actual carved pumpkin, we need to add some 3D magic to draw the inside edges. Actually, scratch that – we’re going to cheat! :^) We’ll start with a lower-resolution, slightly-blown-up pumpkin image:

    …then we’ll lighten it up using canvas’s “lighter” globalCompositeOperation and a globalAlpha value of, oh let’s say 0.5:

    …then we’ll “source-out” the face, same as we did with the foreground:

    …then shrink it a bit, center it, and draw it between the inside background and the outer face:

    We’re getting there! But depending on the shape, our corners might not look very convincing.

    Fortunately, all this cheating we’ve been doing – these composite operations and scaling – it’s all very fast. Even on mobile browsers. Let’s turn up the cheating to maximum and draw that middle layer in a loop, shrinking it less each time.

    …While we’re at it, each step could lighten up the current layer to a lesser degree than the previous (more “inner”) one. And while we’re at that, let’s lighten it using a more yellow color; our first pass looks a little pinkish.

    // build pumpkin by layer and draw it shrunk, inner to outer (to dest)
    var i;
    var darken = 0.4;
    for(i = 56; i > 0; i -= 4) {
        // mix a color for the layer
        scratchCtx.globalCompositeOperation = "source-over";
        scratchCtx.globalAlpha = 1;
        scratchCtx.drawImage(glow, 0, 0, glow.width, glow.height,
                             0, 0, 768, 768);       // bright glow...
        scratchCtx.globalAlpha = 0.3;
        scratchCtx.drawImage(flick,                 // light...
                             0, 0, 16, 16, 0, 0, 768, 768);
        scratchCtx.globalAlpha = darken;
        scratchCtx.drawImage(flesh,                 // darken with flesh
                             0, 0, 256, 256, -8, 0, 768 + 16, 768);
        darken += 0.02;                             // ...more each time
        // cut out the face
        // NOTE: "face" is already the outer layer.
        //       we want to copy its alpha mask, so "-in" instead of "-out".
        scratchCtx.globalCompositeOperation = "destination-in";
        scratchCtx.globalAlpha = 1;
        scratchCtx.drawImage(face, 0, 0);
        // draw layer
        destCtx.drawImage(scratch, 0, 0, 768, 768,
                          i, i, 768 - (i * 2), 768 - (i * 2));
    }

    By putting that middle-layer step inside a loop and making a few tweaks, we get much more realistic edges:

    So, we draw a whole bunch of these middle layers, only to draw right over most of it during the next pass. It’s a bit wasteful if you think about it that way, but consider how much simpler this is than trying to simulate all these arbitrary cut-out surfaces “for real”. Instead, this code builds little bits of pumpkin shell, one layer at a time, from the inside out. It’s a fairly elegant illusion if I do say so myself; we can make a passably-realistic image with a sense of depth, without actually doing any complex calculations or intensive processing.

    Adding polish

    Warning: more history ahead. I’ll try to keep it brief and on-point. :^)

    Figuring out how “deep” to start (how much to shrink), and how many steps to draw in between, all while keeping the “carve” function reasonably fast, was one of those fun bits of experimentation and compromise. On a high-resolution display, there can still be artifacts if you draw very steep, jagged shapes. But as far as bang per buck, compatibility with low-end devices, and the 99%-of-shapes use cases, I’m pretty pleased.

    I wanted this to be pick-up-and-play friendly, so every day I’d load the latest version onto my TouchPad and hand it to my coworkers, giving them no instructions.

    Early on, it was suggested that the carved image should flicker as if the candle inside were burning unevenly. Easy! The carve function now produces two images, the normal one and a slightly brighter version, which is positioned (using CSS) right over the main one. It fades in and out using a randomized timeout and CSS animation on the “opacity” property. A few minutes of polish, and the illusion was even better.

    .fade {
        transition: opacity 0.5s linear;
        -moz-transition: opacity 0.5s linear;
        -webkit-transition: opacity 0.5s linear;
    }
    .fade.out {
        opacity: 0;
    }

    function animateFlicker() {
        var flicker = document.getElementById("flicker");
        if(HA.flickerTimer) {
            clearTimeout(HA.flickerTimer);
            HA.flickerTimer = null;
        }
        if(!HA.settings.flicker) {
            return;
        }
        if(flicker.className === "fade out") {
            flicker.className = "fade";
        } else {
            flicker.className = "fade out";
        }
        HA.flickerTimer = setTimeout(animateFlicker, (Math.random() * 1000));
    }

    I forget who, but one coworker went right for the “Carve” button before drawing anything. Natural enough instinct. So I added some logic to check whether any shapes had been drawn yet, and if not, give the user some quick instructions. I’m really glad I caught that, because in showing the app to more people later, about a quarter did the same thing. Much better to show a hint popup than have users wonder why nothing’s happening.
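    That check can be as simple as scanning the drawing canvas’s pixel data for any non-transparent pixel. Here’s a sketch operating on a raw RGBA array (an illustration; the app’s actual logic may differ):

```javascript
// Returns true if any pixel in an RGBA byte array has non-zero
// alpha, i.e. the user has drawn at least one shape on the
// (initially transparent) drawing canvas.
function hasDrawnShapes(rgbaData) {
  for (var i = 3; i < rgbaData.length; i += 4) {
    if (rgbaData[i] !== 0) return true;
  }
  return false;
}
```

    In the app you’d feed it the result of `ctx.getImageData(...).data` before deciding whether to carve or show the hint.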

    What if the user drew outside the pumpkin? All kinds of goofy artifacts, that’s what. So, I used that handy canvas compositing and masked the user’s input before carving. As a nice side effect, you can now carve all the way out to the edge of a pumpkin, as though you’d chopped it in half.

    This solution led to another problem. When people realized they could recklessly carve giant holes, they’d see the empty, glowing inside surface of the pumpkin. Where was the light coming from?

    So, between drawing the background and drawing the scaled flesh layers, I dropped in a candle. I actually took some photos of a nice white candle and GIMPed it into the shape of my blocky “penduin” avatar. May as well include some kind of signature in this program, eh? I made two different versions, one for each of the flicker-fading images, and made the flame a bit bigger on the brighter version. It gives a nice little touch of animation, and adds a bit more to the illusion.

    Then of course there was the less exciting stuff. Take out those nasty hard-coded values and make it work at any screen size. Make sure touch and mouse support both work as expected. Rearrange the buttons if the screen is small and they’d be in the way. All that jazz.

    Code on GitHub – Happy Halloween!

    Well, that about covers it. You’re free and encouraged to poke around in the source if you’d like to learn more or add your own tweaks. (Mind the mess; I left some experimental tweaks and previous attempts in there, commented out.) Halloween Artist is GPLv3, and since you might not feel like scraping down the source from the web app itself (and since my lousy DSL might be down at any given moment) I’ve made it available on GitHub.

    Have fun, and Happy Halloween! :^)

  3. Building a simple paint game with HTML5 Canvas and Vanilla JavaScript

    When the talk is about HTML5 Canvas you mostly hear about libraries to make it work for legacy browsers, performance tricks like off-screen Canvas and ways to draw and animate sprites and tiles. This is only one part of Canvas, though. On the lowest level, Canvas is a way to manipulate pixels of a portion of the screen. Either via a painting API or by directly manipulating the pixel array (which by the way is a typed array and thus performs admirably).

    Using this knowledge, I thought it’d be fun to create a small game I saw in an ad for a tablet: a simple game for kids to paint letters. The result is a demo for FirefoxOS called Letterpaint which will show up soon on the Marketplace. The code is on GitHub.


    The fun thing about building Letterpaint was that I took a lot of shortcuts. Painting on a canvas is easy (and gets much easier using Jacob Seidelin’s Canvas cheatsheet), but at first glance, making sure that users stay in a certain shape is tricky. So is finding out how much of the letter has been filled in. However, by going back to seeing a Canvas as a collection of pixels, I found a simple way to make this work:

    • When I paint the letter, I read out the number of pixels that have the colour of the letter
    • When you click the mouse button or touch the screen, I test the colour of the pixel at the current mouse/finger position
    • If that pixel is not transparent, you are inside the letter, since the main Canvas is transparent by default
    • When you release the mouse or stop touching the screen, I compare the number of pixels of the paint colour with the number of letter pixels.

    Simple, isn’t it? And it is all possible with two re-usable functions:

    /*
      getpixelcolour(x, y)
      returns the rgba value of the pixel at position x and y
    */
    function getpixelcolour(x, y) {
      var pixels = cx.getImageData(0, 0, c.width, c.height);
      var index = ((y * (pixels.width * 4)) + (x * 4));
      return {
        r:[index],
        g:[index + 1],
        b:[index + 2],
        a:[index + 3]
      };
    }

    /*
      getpixelamount(r, g, b)
      returns the amount of pixels in the canvas of the colour
      provided
    */
    function getpixelamount(r, g, b) {
      var pixels = cx.getImageData(0, 0, c.width, c.height);
      var all =;
      var amount = 0;
      for (i = 0; i < all; i += 4) {
        if ([i] === r &&
  [i + 1] === g &&
  [i + 2] === b) {
          amount++;
        }
      }
      return amount;
    }

    Add some painting functions to that and you have the game done. You can see a step-by-step guide of this online (and pull the code from GitHub), and there is a screencast describing the tricks and decisions on YouTube.
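    To make the final comparison concrete, here is a sketch of the ratio check implied by the steps above (the isLetterFilled name and the 0.95 threshold are assumptions for illustration, not the game’s actual values):

```javascript
// The letter counts as "done" when the painted-pixel count reaches
// a fraction of the letter-pixel count measured up front.
function isLetterFilled(paintedPixels, letterPixels, threshold) {
  threshold = threshold || 0.95;
  return paintedPixels / letterPixels >= threshold;
}
```

    The two counts would come from calling getpixelamount() once for the letter colour and once for the paint colour.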

    The main thing to remember here is that it is very tempting to reach for libraries and tools to get things done quickly, but that can lead you to overcomplicate things. Browsers have very powerful tools built in for us, and in many cases it means you just need to be up-to-date and fearless in trying something “new” that comes out-of-the-box.

  4. Koalas to the Max – a case study

    One day I was browsing reddit when I came across this peculiar link posted on it:

    The game was addictive and I loved it, but I found several design elements flawed. Why did it start with four circles and not one? Why was the color split so jarring? Why was it written in Flash? (What is this, 2010?) Most importantly, it was missing a golden opportunity to split into dots that form an image instead of just doing random colors.

    Creating the project

    This seemed like a fun project, and I reimplemented it (with my design tweaks) using D3 to render with SVG.

    The main idea was to have the dots split into the pixels of an image, with each bigger dot having the average color of the four dots contained inside of it recursively, and allow the code to work on any web-based image.
    The code sat in my ‘Projects’ folder for some time. Valentine’s Day was around the corner and I thought it could make a cute gift, so I bought the domain name, found a cute picture, and thus “Koalas to the Max” (KttM) was born.


    While the user-facing part of KttM has changed little since its inception, the implementation has been revisited several times to incorporate bug fixes, improve performance, and bring support to a wider range of devices.

    Notable excerpts are presented below and the full code can be found on GitHub.

    Load the image

    If the image is hosted on the same domain, then loading it is as simple as calling new Image():

    var img = new Image();
    img.onload = function() {
     // Awesome rendering code omitted
    };
    img.src = the_image_source;

    One of the core design goals for KttM was to let people use their own images as the revealed image. Thus, when the image is on an arbitrary domain, it needs special consideration. Given the same-origin restrictions, there needs to be an image proxy that can channel the image from the arbitrary domain or send the image data as a JSONP call.

    Originally I used a library called $.getImageData, but I had to switch to a self-hosted solution after KttM went viral and brought the $.getImageData App Engine account to its limits.

    Extract the pixel data

    Once the image loads, it needs to be resized to the dimensions of the finest layer of circles (128 x 128) and its pixel data can be extracted with the help of an offscreen HTML5 canvas element.

    koala.loadImage = function(imageData) {
     // Create a canvas for image data resizing and extraction
     var canvas = document.createElement('canvas').getContext('2d');
     // Draw the image into the corner, resizing it to dim x dim
     canvas.drawImage(imageData, 0, 0, dim, dim);
     // Extract the pixel data from the same area of canvas
     // Note: This call will throw a security exception if imageData
     // was loaded from a different domain than the script.
     return canvas.getImageData(0, 0, dim, dim).data;
    };

    dim is the number of smallest circles that will appear on a side. 128 seemed to produce nice results but really any power of 2 could be used. Each circle on the finest level corresponds to one pixel of the resized image.

    Build the split tree

    Resizing the image returns the data needed to render the finest layer of the pixelization. Every successive layer is formed by grouping neighboring clusters of four dots together and averaging their color. The entire structure is stored as a (quaternary) tree so that when a circle splits it has easy access to the dots from which it was formed. During construction each subsequent layer of the tree is stored in an efficient 2D array.

    // Got the data now build the tree
    var finestLayer = array2d(dim, dim);
    var size = minSize;
    // Start off by populating the base (leaf) layer
    var xi, yi, t = 0, color;
    for (yi = 0; yi < dim; yi++) {
     for (xi = 0; xi < dim; xi++) {
       color = [colorData[t], colorData[t+1], colorData[t+2]];
       finestLayer(xi, yi, new Circle(vis, xi, yi, size, color));
       t += 4;
     }
    }
    Start by going through the color data extracted from the image and creating the finest circles.

    // Build up successive nodes by grouping
    var layer, prevLayer = finestLayer;
    var c1, c2, c3, c4, currentLayer = 0;
    while (size < maxSize) {
     dim /= 2;
     size = size * 2;
     layer = array2d(dim, dim);
     for (yi = 0; yi < dim; yi++) {
       for (xi = 0; xi < dim; xi++) {
         c1 = prevLayer(2 * xi    , 2 * yi    );
         c2 = prevLayer(2 * xi + 1, 2 * yi    );
         c3 = prevLayer(2 * xi    , 2 * yi + 1);
         c4 = prevLayer(2 * xi + 1, 2 * yi + 1);
         color = avgColor(c1.color, c2.color, c3.color, c4.color);
         c1.parent = c2.parent = c3.parent = c4.parent = layer(xi, yi,
           new Circle(vis, xi, yi, size, color, [c1, c2, c3, c4], currentLayer, onSplit));
       }
     }
     splitableByLayer.push(dim * dim);
     splitableTotal += dim * dim;
     prevLayer = layer;
     currentLayer++;
    }

    After the finest circles have been created, the subsequent circles are each built by merging four dots and doubling the radius of the resulting dot.
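    Two helpers used above, array2d and avgColor, aren’t shown in the excerpts; their real implementations live in the KttM source on GitHub. As a rough sketch of what they might look like (these are assumptions for illustration):

```javascript
// array2d returns a function acting as a 2D grid: grid(x, y) reads
// a cell, grid(x, y, value) writes it and returns the value, which
// is why `layer(xi, yi, new Circle(...))` can be assigned to parents.
function array2d(w, h) {
  var cells = new Array(w * h);
  return function(x, y, value) {
    if (arguments.length === 3) cells[y * w + x] = value;
    return cells[y * w + x];
  };
}

// avgColor averages four [r, g, b] triples component-wise.
function avgColor(c1, c2, c3, c4) {
  return [0, 1, 2].map(function(i) {
    return Math.round((c1[i] + c2[i] + c3[i] + c4[i]) / 4);
  });
}
```

    The getter/setter-in-one shape keeps the tree-building loops terse at the cost of a slightly unusual calling convention.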

    Render the circles

    Once the split tree is built, the initial circle is added to the page.

    // Create the initial circle
    Circle.addToVis(vis, [layer(0, 0)], true);

    This employs the Circle.addToVis function that is used whenever the circle is split. The second argument is the array of circles to be added to the page.

    Circle.addToVis = function(vis, circles, init) {
     var circle = vis.selectAll('.nope').data(circles)
       .enter().append('circle');
     if (init) {
       // Setup the initial state of the initial circle
       circle = circle
         .attr('cx',   function(d) { return d.x; })
         .attr('cy',   function(d) { return d.y; })
         .attr('r', 4)
         .attr('fill', '#ffffff')
         .transition().duration(1000);
     } else {
       // Setup the initial state of the opened circles
       circle = circle
         .attr('cx',   function(d) { return d.parent.x; })
         .attr('cy',   function(d) { return d.parent.y; })
         .attr('r',    function(d) { return d.parent.size / 2; })
         .attr('fill', function(d) { return String(d.parent.rgb); })
         .attr('fill-opacity', 0.68)
         .transition().duration(300);
     }
     // Transition to the respective final state
     circle
       .attr('cx',   function(d) { return d.x; })
       .attr('cy',   function(d) { return d.y; })
       .attr('r',    function(d) { return d.size / 2; })
       .attr('fill', function(d) { return String(d.rgb); })
       .attr('fill-opacity', 1)
       .each('end',  function(d) { d.node = this; });
    };

    Here the D3 magic happens. The circles in circles are added (.append('circle')) to the SVG container and animated to their position. The initial circle is given special treatment as it fades in from the center of the page while the others slide over from the position of their “parent” circle.

    In typical D3 fashion circle ends up being a selection of all the circles that were added. The .attr calls are applied to all of the elements in the selection. When a function is passed in it shows how to map the split tree node onto an SVG element.

    .attr('cx', function(d) { return d.parent.x; }) would set the X coordinate of the center of the circle to the X position of the parent.

    The attributes are set to their initial state then a transition is started with .transition() and then the attributes are set to their final state; D3 takes care of the animation.

    Detect mouse (and touch) over

    The circles need to split when the user moves the mouse (or finger) over them; to do this efficiently, the regular structure of the layout can be taken advantage of.

    The algorithm described below vastly outperforms native “onmouseover” event handlers.

    // Handle mouse events
    var prevMousePosition = null;
    function onMouseMove() {
     var mousePosition = d3.mouse(vis.node());
     // Do nothing if the mouse point is not valid
     if (isNaN(mousePosition[0])) {
       prevMousePosition = null;
       return;
     }
     if (prevMousePosition) {
       findAndSplit(prevMousePosition, mousePosition);
     }
     prevMousePosition = mousePosition;
    }
    // Initialize interaction'body')
     .on('mousemove.koala', onMouseMove);

    Firstly a body wide mousemove event handler is registered. The event handler keeps track of the previous mouse position and calls on the findAndSplit function passing it the line segments traveled by the user’s mouse.

    function findAndSplit(startPoint, endPoint) {
     var breaks = breakInterval(startPoint, endPoint, 4);
     for (var i = 0; i < breaks.length - 1; i++) {
       var sp = breaks[i],
           ep = breaks[i+1];
       var circle = splitableCircleAt(ep);
       if (circle && circle.isSplitable() && circle.checkIntersection(sp, ep)) {
         circle.split();
       }
     }
    }

    The findAndSplit function splits a potentially large segment traveled by the mouse into a series of small segments (not bigger than 4px long). It then checks each small segment for a potential circle intersection.
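    The breakInterval helper isn’t shown in the excerpt; an assumed implementation (a sketch, not necessarily KttM’s exact code) might split the segment like this:

```javascript
// Split the segment from `start` to `end` into points no more than
// maxLen pixels apart, including both endpoints.
function breakInterval(start, end, maxLen) {
  var dx = end[0] - start[0], dy = end[1] - start[1];
  var steps = Math.max(1, Math.ceil(Math.sqrt(dx * dx + dy * dy) / maxLen));
  var points = [];
  for (var i = 0; i <= steps; i++) {
    points.push([start[0] + dx * i / steps, start[1] + dy * i / steps]);
  }
  return points;
}
```

    Keeping each sub-segment short guarantees that a fast mouse sweep can’t skip over a circle between two sampled positions.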

    function splitableCircleAt(pos) {
     var xi = Math.floor(pos[0] / minSize),
         yi = Math.floor(pos[1] / minSize),
         circle = finestLayer(xi, yi);
     if (!circle) return null;
     while (circle && !circle.isSplitable()) circle = circle.parent;
     return circle || null;
    }

    The splitableCircleAt function takes advantage of the regular structure of the layout to find the one circle that the segment ending in the given point might be intersecting. This is done by finding the leaf node of the closest fine circle and traversing up the split tree to find its visible parent.

    Finally the intersected circle is split (circle.split()).

    Circle.prototype.split = function() {
     if (!this.isSplitable()) return;
     delete this.node;
     Circle.addToVis(this.vis, this.children);
    };

    Going viral

    Sometime after Valentine’s Day I met with Mike Bostock (the creator of D3) regarding D3 syntax, and I showed him KttM, which he thought was tweet-worthy – it was, after all, an early example of a pointless artsy visualization done with D3.

    Mike has a Twitter following, and his tweet, which was retweeted by some members of the Google Chrome development team, started getting some momentum.

    Since the koala was out of the bag, I decided it might as well be posted on reddit. I posted it on the programming subreddit with the title “A cute D3 / SVG powered image puzzle. [No IE]” and it got a respectable 23 points, which made me happy. Later that day it was reposted to the funny subreddit with the title “Press all the dots :D” and was upvoted to the front page.

    The traffic went exponential. Reddit was a spike that quickly dropped off, but people picked up on it and spread it to Facebook, StumbleUpon, and other social media outlets.

    The traffic from these sources decays over time but every several months KttM gets rediscovered and traffic spikes.

    Such irregular traffic patterns underscore the need to write scalable code. Conveniently KttM does most of the work within the user’s browser; the server needs only to serve the page assets and one (small) image per page load allowing KttM to be hosted on a dirt-cheap shared hosting service.

    Measuring engagement

    After KttM became popular I was interested in exploring how people actually interacted with the application. Did they even realize that the initial single circle can split? Does anyone actually finish the whole image? Do people uncover the circles uniformly?

    At first the only tracking on KttM was the vanilla GA code that tracks pageviews. This quickly became underwhelming. I decided to add custom event tracking for when an entire layer was cleared and when a percentage of circles were split (in increments of 5%). The event value is set to the time in seconds since page load.

    As you can see, such event tracking offers both insights and room for improvement. The 0% clear event is fired when the first circle is split, and the average time for that event to fire seems to be 308 seconds (5 minutes), which does not sound reasonable. In reality this happens when someone opens KttM and leaves it open for days; then, if a circle is split, the event value is huge and it skews the average. I wish GA had a histogram view.

    Even basic engagement tracking sheds vast amounts of light on how far people get through the game. These metrics proved very useful when the mouse-over algorithm was upgraded. After several days of running the new algorithm, I could see that people were finishing more of the puzzle before giving up.

    Lessons learned

    While making, maintaining, and running KttM I learned several lessons about using modern web standards to build web applications that run on a wide range of devices.

    Some native browser utilities give you 90% of what you need, but to get your app behaving exactly as you want, you need to reimplement them in JavaScript. For example, the SVG mouseover events could not cope well with the number of circles and it was much more efficient to implement them in JavaScript by taking advantage of the regular circle layout. Similarly, the native base64 functions (atob, btoa) are not universally supported and do not work with unicode. It is surprisingly easy to support the modern Internet Explorers (9 and 10) and for the older IEs Google Chrome Frame provides a great fallback.
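    For reference, the usual unicode-safe workaround looks like this (a common pattern, not necessarily the exact code KttM uses): transcode the string to UTF-8 percent-escapes before handing it to btoa, and reverse the process on decode.

```javascript
// Unicode-safe base64: btoa/atob only handle byte strings, so
// round-trip through encodeURIComponent's UTF-8 escapes first.
function b64EncodeUnicode(str) {
  return btoa(unescape(encodeURIComponent(str)));
}
function b64DecodeUnicode(str) {
  return decodeURIComponent(escape(atob(str)));
}
```

    (escape/unescape are legacy functions, but they remain widely available and are exactly the byte-wise transform needed here.)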

    Despite the huge improvements in standard compliance it is still necessary to test the code on a wide variety of browsers and devices, as there are still differences in how certain features are implemented. For example, in IE10 running on the Microsoft Surface html {-ms-touch-action: none; } needed to be added to allow KttM to function correctly.

    Adding tracking and taking time to define and collect the key engagement metrics allows you to evaluate the impact of changes that get deployed to users in a quantitative manner. Having well defined metrics allows you to run controlled tests to figure out how to streamline your application.

    Finally, listen to your users! They pick up on things that you miss – even if they don’t know it. The congratulations message that appears on completion was added after I received complaints that it was not clear when a picture was fully uncovered.

    All projects are forever evolving and if you listen to your users and run controlled experiments then there is no limit to how much you can improve.

  5. Firefox Development Highlights – Per Window Private Browsing & Canvas’ globalCompositeOperation new values

    On a regular basis, we like to highlight the latest features in Firefox for developers, as part of our Bleeding Edge series, and most examples only work in Firefox Nightly (and could be subject to change).

    Per Window Private Browsing

    Private browsing is very useful for web developers. A new private session doesn’t include existing persistent data (cookies and HTML5 storage). It’s convenient if we want to test a website that stores data (logins and persistent information) without having to clear the cached data every time, or if we want to log in to a service with 2 different users.

    Until now, entering Private Browsing in Firefox would close the existing session and start a new one. Now, Firefox keeps the current session and opens a new private window. You can test it in Firefox Nightly (to be Firefox 20). There’s still some frontend work to do, but the feature works.

    Canvas’ globalCompositeOperation new values

    The ‘globalCompositeOperation’ canvas property lets you define how you want canvas to draw images over an existing image. By default, when canvas draws an image over existing pixels, the new image simply replaces the pixels. But there are other ways to mix pixels. For example, if you set ctx.globalCompositeOperation = "lighter", pixel color values are added, which creates a different visual effect.

    There are several effects available, and more on them can be found in globalCompositeOperation on MDN.

    Rik Cabanier from Adobe has extended the Canvas specification to include more effects. He has also implemented these new effects in Firefox Nightly. These new effects are called “blend modes”. These are more advanced ways of mixing colors.

    Please take a look at the list of these new blending modes.

    And here’s an example of how to use them:

    JS Bin demo.

    If you don’t use Firefox Nightly, here is a (long) screenshot:


  6. Comic Gen – a canvas-run comic generator

    The first time I wanted to participate in Dev Derby was the May 2012 challenge, where the rule was that you should use WebSockets. At the time I thought I could use NodeJS and SocketIO. But time kept running and I ended up not having any cool ideas for an app.

    Since then I have been just watching the monthly challenges, until the December 2012 one: offline apps!

    Once again I got very excited to do something for Dev Derby, especially because I think the App Cache and Local Storage APIs are amazing and the main APIs to use when thinking of offline apps.

    Anyway, after spending 30 minutes wondering how amazing offline apps can be, I decided it was time to think of something for the challenge.

    As a husband and professional, it’s hard to find the time I’d like to build cool stuff for this kind of challenge. So I wondered if I could reuse a demo I had already built a couple of months before, and add the necessary things to make it available offline. Looking at the demos I had done, the one that best suited the challenge was a comic generator, which I very creatively called Comic Gen.

    Comic Gen (source code) is capable of adding and manipulating characters on a screen, and it gives you a chance to export the results to a PNG file.

    Ragaboom Framework

    Canvas was the first HTML5 API that came to my attention. It is very cool to have the ability to draw anything, import images, write text and interact with them all. The possibilities seem to be limitless.

    I started making my first experiments with Canvas and it didn’t take me long to realize I could build a framework to ease things. That was when I created the Ragaboom framework.

    At that time it was the most advanced code I had ever written with JavaScript. Looking at it today I realize how complex and hard to use it is, and how much I could improve it… Well, that’s something I still plan to do, but responsibilities and priorities prevent me from doing so right now.

    So, I had to test Ragaboom somehow. Then I created a simple game called CATcher. The objective of the game is to catch the falling kitties with a basket until the time runs out. The player earns 15 more seconds for every 100 points.

    My wife helped me with the drawings and she said she loved the game. Yeah, right… She also says I’m handsome…

    CATcher and Ragaboom were two big accomplishments for me. I submitted the game to be published on some HTML5 demo websites and, surprisingly, a lot of people contacted me to talk about advanced JavaScript programming and HTML5. What I loved most was that some people had very good ideas for improving the framework!

    Why a comic generator

    After I had built Ragaboom, I started to think of a new challenge where I could use Canvas and improve the framework. At that time there was (and there still is) a lot of discussion about the future of HTML5, and people argued about whether it was possible for HTML5 to replace Flash.

    That was the kind of challenge I was looking for: to reproduce something with Canvas that already existed in Flash. I had seen a couple of comic generators using Flash. Some of them were very simple, and some were very well built.

    At that moment I decided what I wanted to do. I needed someone to draw the characters for me. Then my friend Ana Zugaib, a very talented artist, made them for me! Her work was simply wonderful!

    Planning the demo

    Ok, I had a subject, I had a reason; now I needed to plan what should be created. There’s not much to think about when building a comic generator. All I needed was a toolbox and a screen where the characters would be placed.

    I wanted the user to be able to choose different screen sizes, and they should have the option to export their work to an image or something like that.

    The objects on the screen should be able to be manipulated: moved, resized, inverted and removed from the screen. At that moment something came into my mind: How the hell am I gonna do all that????

    How the hell to do all that

    Oh well, if I wanted a challenge, I had got one. Luckily, some of the features I needed were already implemented in the Ragaboom framework, or were already present in the DOM API.

    So I had to focus on the stuff they didn’t offer me:

    • How to invert the images horizontally;
    • How to resize the screen without losing the current content;
    • How to export the content to a PNG file;

    Initializing the objects

    First of all, I needed to create a Canvas object and its context, and create a big white rectangle on it. That would be my comic screen.

    var c = $('#c')[0];
    var ctx = c.getContext('2d');
    var scene = new RB.Scene(c);
    var w = c.width;
    var h = c.height;
    scene.add( scene.rect(w, h, 'white') );
    scene.update();

    The scene.add method (from my framework) adds a new object to the screen, as you can see above, where a white rectangle was added. Next, the screen is updated to show all objects drawn so far. Internally, the framework keeps an array of the objects that should be drawn to the screen, storing their properties like x and y positions and their type (rectangle, circle, image, etc).
    The scene.update method iterates over that array, repainting every object in the Canvas area.
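    This retained-object pattern is easy to model. The following is a hypothetical sketch of the idea described above, not the actual Ragaboom source (all names here are mine):

    ```javascript
    // Hypothetical sketch of the retained-mode pattern: the scene keeps
    // a list of objects, and update() "repaints" each of them in order.
    function MiniScene() {
        this.objects = [];
        this.paintLog = [];   // stands in for real canvas drawing calls
    }
    MiniScene.prototype.add = function(obj) {
        this.objects.push(obj);
    };
    MiniScene.prototype.update = function() {
        this.paintLog = [];
        for (var i = 0; i < this.objects.length; i++) {
            var o = this.objects[i];
            this.paintLog.push(o.type + '@' + o.x + ',' + o.y);
        }
    };
    ```

    Because every object and its position live in the array, a single update() call can rebuild the whole canvas, which is exactly what makes resizing (covered later) painless.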

    For the toolbar I created two arrays. The first kept the URL of each toolbar icon, and the second stored the URL of the actual image that should be placed on the screen. The elements of the toolbar icon array corresponded, index by index, to the actual images array. So the pirate icon in the first array was at the same position as the actual pirate image in the second array, and so on.

    I then had to iterate over the arrays to build buttons, linking each of them to the corresponding image. After that, the result of the iteration was appended to a DOM element.

    //toolbar icons array
    var miniUrls = ["sm01_mini.png", "sm02_mini.png", "sm03_mini.png", "sushi_mini.png", "z01_mini.png", "z02_mini.png", "z03_mini.png", "z04_mini.png", "balao.png", "toon01_mini.png", "toon02_mini.png", "toon03_mini.png", "toon04_mini.png", "toon05_mini.png", "toon06_mini.png"];
    //actual images array
    var toonUrls = ["sm01.png", "sm02.png", "sm03.png", "sushi.png", "z01.png", "z02.png", "z03.png", "z04.png", "balao.png", "toon01.png", "toon02.png", "toon03.png", "toon04.png", "toon05.png", "toon06.png"];
    //building the toolbar
    cg.buildMinis = function() {
        var buffer = '';
        var imgString = "<img src='toons/IMG_URL' class='rc mini'></img>";
        var link = "<a href=\"javascript:cg.createImage('toons/IMG_URL')\">";
        for (var i = 0; i < miniUrls.length; i++) {
            buffer += link.replace(/IMG_URL/, toonUrls[i]);
            buffer += imgString.replace(/IMG_URL/, miniUrls[i]) + '</a>';
        }
        $('#menuContainer').append(buffer);
        $('#menuContainer').append( $('#instructs').clone() );
    };

    Adding objects to the screen

    When loading an image via JavaScript, it is important to keep in mind that the browser downloads the image asynchronously. That means you cannot be sure the image is already loaded and ready for use when the next line of your code runs.

    To solve this, one technique is to use the onload handler of the Image object:

    var img = new Image();
    img.onload = function() {
        alert('the image download was completed!');
    };
    img.src = "my_image.png";

    The Ragaboom framework uses this same trick. That’s why the image method’s second parameter is a callback function, fired when the image is ready for use.

    cg.createImage = function(url) {
        scene.image(url, function(obj) {
            obj.draggable = true;
            obj.setXY(30, 30);
            obj.onmousedown = function(e) {
                currentObj = obj;
                scene.zIndex(obj, 1);
            };
            currentObj = obj;
            scene.update();
        });
    };

    In the example above, the image is stored in the objects array, made draggable and positioned at coordinates x=30, y=30. Then a mousedown handler is attached to the object, setting it as the current object. At the end, the canvas is updated.


    To increase the size of an object I simply added a small number of pixels to both its width and height. The same was done to decrease the size of an object, by subtracting pixels instead. I only had to handle situations where the width or height dropped below zero, to prevent bugs.

    In order to offer smooth and uniform zooming, I decided to apply 5% of the current width and height instead of using a fixed number of pixels.

    var w = obj.w * 0.05;
    var h = obj.h * 0.05;

    The complete “zoom in” function is like this:

    cg.zoomIn = function(obj) {
        var w = obj.w * 0.05;
        var h = obj.h * 0.05;
        obj.w += w;
        obj.h += h;
        obj.x -= (w/2);
        obj.y -= (h/2);
    };
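    The matching “zoom out” only needs the below-zero clamp mentioned above. Here is a sketch of how it could look (the name zoomOut and the exact clamping are my assumptions, not necessarily the demo’s code):

    ```javascript
    // Hedged counterpart to zoomIn: shrink by 5% and refuse to let
    // width/height drop to zero or below.
    function zoomOut(obj) {
        var w = obj.w * 0.05;
        var h = obj.h * 0.05;
        if (obj.w - w <= 0 || obj.h - h <= 0) {
            return; // too small already, keep the object visible
        }
        obj.w -= w;
        obj.h -= h;
        // re-center, mirroring zoomIn's offset
        obj.x += (w / 2);
        obj.y += (h / 2);
    }
    ```

    Adjusting x and y by half the delta keeps the object growing and shrinking around its center rather than its top-left corner.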

    Exporting a PNG file

    The Canvas object has a method called toDataURL, which returns a URL representation of the canvas as an image, according to the format specified as a parameter. Using this method, I created a variable that stored the image URL representation and opened a new browser window.

    Then I created an Image object, set its src attribute to that URL, and appended it to the new window’s document.
    The user has to right-click the image and “Save as” it themselves. I know it’s not the best solution, but it was what I could come up with at the time.

    var data = c.toDataURL('image/png');
    var win = window.open();
    var b = win.document.body;
    var img = new Image();
    img.src = data;
    b.appendChild(img);

    Rescaling the screen

    For screen resizing there wasn’t any inconvenience, really. After all, the Canvas object has width and height attributes, so all I had to do was set these values and the screen would be rescaled, right? You wish… When you set either the width or the height attribute of a canvas object, its content is wiped and the context state is reset.

    To fix that problem I had to redraw every single object of the context after rescaling the canvas object. At that moment I realized the advantages of using a framework: because it kept information about every object and their attributes, it did all the dirty work of redrawing every image back to the canvas context.

    c.width = w;
    c.height = h;
    scene.update(); // thanks, confusing framework

    Final considerations

    I have always liked the front-end side of programming. It took me a long time to realize and accept that, for some reason. I found that JavaScript is a very powerful language, capable of innumerable awesome things.

    A couple of years ago I thought I could never build the kinds of things I build nowadays. Gladly, I was wrong!

    If you love to code, dedicate some time to learning new things. The web is on your side. I mean, look at MDN! You have just about anything you need to become an excellent developer.

    What are you waiting for to become a great dev?

  7. Firefox Development Highlights – Viewport percentage, canvas.toBlob() and WebRTC

    To keep you updated on the latest features in Firefox, here is again a blog post highlighting the most recent changes. This is part of our Bleeding Edge series, and most examples only work in Firefox Nightly (and could be subject to change).

    Viewport-percentage lengths

    Gecko now supports new length units: vh, vw, vmin and vmax. 1vh is 1% of the viewport height, and the length doesn’t depend on its container size. We can build designs that are directly proportional to the page size (think about HTML slides, for example, which are supposed to keep the same appearance regardless of the size of the page).

    • vh: 1/100th of the height of the viewport.
    • vw: 1/100th of the width of the viewport.
    • vmin: 1/100th of the minimum value between the height and the width of the viewport.
    • vmax: 1/100th of the maximum value between the height and the width of the viewport.

    Read more about CSS Viewport-percentage lengths on MDN.
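    The four definitions above boil down to simple arithmetic. A small model in plain JavaScript (this is my illustration of the unit math, not the CSS engine itself):

    ```javascript
    // Resolve n units of a viewport-percentage length against a given
    // viewport size, following the definitions listed above.
    function viewportLength(n, unit, vpWidth, vpHeight) {
        switch (unit) {
            case 'vw':   return n * vpWidth  / 100;
            case 'vh':   return n * vpHeight / 100;
            case 'vmin': return n * Math.min(vpWidth, vpHeight) / 100;
            case 'vmax': return n * Math.max(vpWidth, vpHeight) / 100;
        }
    }
    ```

    So on an 800×600 viewport, a 50vw element is 400px wide and a 25vmin element is 150px, no matter how its containers are sized.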

    canvas.toBlob()

    A Blob object represents a file-like object of immutable, raw data. Blobs can be used by different APIs, like the File API or IndexedDB. We can create an alias URL that refers to the blob with window.URL.createObjectURL, which can be used in place of data URLs in some cases (which is better memory-wise).

    Now, a canvas element can export its content as an image blob with the toBlob() method (this replaces the non standard mozGetAsFile function). toBlob is asynchronous:

    toBlob(callback, type) // type is "image/png" by default

    For more information, see Example: Getting a file representing the canvas on MDN.
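    As a hedged sketch of wiring this together in a page (the function name and the canvas/img parameters are my assumptions; the toBlob call itself only runs in a browser):

    ```javascript
    // type is "image/png" by default; a tiny helper mirroring that rule
    function blobType(type) {
        return type || 'image/png';
    }

    // Browser-only usage sketch: export the canvas as a blob, then point
    // an image at it through a short object URL instead of a long data URL.
    function showCanvasAsImage(canvas, img, type) {
        canvas.toBlob(function(blob) {
            img.src = window.URL.createObjectURL(blob);
        }, blobType(type));
    }
    ```

    Note that, unlike toDataURL, the blob arrives asynchronously in the callback, so anything depending on the exported image has to happen there.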

    WebRTC in Firefox Nightly and Firefox Aurora (Firefox 18)

    To enable our WebRTC code in Firefox’s Nightly desktop build, browse to about:config and change the media.peerconnection.enabled preference to true. More WebRTC documentation on MDN, and we plan to have future blog posts about WebRTC here on Mozilla Hacks.

    Additionally, if you are interested in a steady flow of the latest Firefox highlights, you can also follow @FirefoxNightly on Twitter.

  8. getUserMedia is ready to roll!

    We blogged about some of our WebRTC efforts back in April. Today we have an exciting update for you on that front: getUserMedia has landed on mozilla-central! This means you will be able to use the API on the latest Nightly versions of Firefox, and it will eventually make its way to a release build.

    getUserMedia is a DOM API that allows web pages to obtain video and audio input, for instance, from a webcam or microphone. We hope this will open the possibility of building a whole new class of web pages and applications. This DOM API is one component of the WebRTC project, which also includes APIs for peer-to-peer communication channels that will enable exchange of video streams, audio streams and arbitrary data.

    We’re still working on the PeerConnection API, but getUserMedia is a great first step in the progression towards full WebRTC support in Firefox! We’ve certainly come a long way since the first image from a webcam appeared on a web page via a DOM API. (Not to mention audio recording support in Jetpack before that.)

    We’ve implemented a prefixed version of the “Media Capture and Streams” standard being developed at the W3C. Not all portions of the specification have been implemented yet; most notably, we do not support the Constraints API (which allows the caller to request certain types of audio and video based on various parameters).

    We have also implemented a Mozilla specific extension to the API: the first argument to mozGetUserMedia is a dictionary that will also accept the property {picture: true} in addition to {video: true} or {audio: true}. The picture API is an experiment to see if there is interest in a dedicated mechanism to obtain a single picture from the user’s camera, without having to set up a video stream. This could be useful in a profile picture upload page, or a photo sharing application, for example.

    Without further ado, let’s start with a simple example! Make sure to create a pref named “media.navigator.enabled” and set it to true via about:config first. We’ve put the pref in place because we haven’t implemented a permissions model or any UI for prompting the user to authorize access to the camera or microphone. This release of the API is aimed at developers, and we’ll enable the pref by default after we have a permission model and UI that we’re happy with.
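    Here is one possible shape of that example (hedged: mozGetUserMedia is the prefixed call described above, but the element id, the mozSrcObject attachment and the helper names are my assumptions):

    ```javascript
    // Build the dictionary passed as the first argument:
    // kind is 'audio', 'video' or the Mozilla-specific 'picture'.
    function gumConstraints(kind) {
        var constraints = {};
        constraints[kind] = true;
        return constraints;
    }

    // Browser-only sketch: request the camera and pipe the stream
    // into a <video id="monitor"> element.
    function startCamera() {
        navigator.mozGetUserMedia(gumConstraints('video'), function(stream) {
            var video = document.getElementById('monitor');
            video.mozSrcObject = stream;   // prefixed stream attachment
            video.play();
        }, function(err) {
            console.log('getUserMedia failed: ' + err);
        });
    }
    ```

    Swapping gumConstraints('video') for gumConstraints('picture') would exercise the experimental single-picture mode instead.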

    There’s also a demo page where you can test the audio, video and picture capabilities of the API. Give it a whirl, and let us know what you think! We’re especially interested in feedback from the web developer community about the API and whether it will meet your use cases. You can leave comments on this post, or on the dev-media mailing list or newsgroup.

    We encourage you to get involved with the project – there’s a lot of information about our ongoing efforts on the project wiki page. Posting on the mailing list with your questions, comments and suggestions is great way to get started. We also hang out on the #media IRC channel, feel free to drop in for an informal chat.

    Happy hacking!

  9. The Web Developer Toolbox: ThreeJS

    This is the second of a series of articles dedicated to the useful libraries that all web developers should have in their toolbox. The intent is to show you what those libraries can do and help you to use them at their best. This second article is dedicated to the ThreeJS library.


    ThreeJS is a library originally written by Ricardo Cabello Miguel aka “Mr. Doob“.

    This library makes WebGL accessible to common human beings. WebGL is a powerful API to manipulate 3D environments. This web technology is standardized by the Khronos Group, and Firefox, Chrome and Opera now implement it as a 3D context for the HTML canvas tag. WebGL is basically a web version of another standard: OpenGL ES 2.0. As a consequence, this API is a “low level” API that requires skills and knowledge beyond what web designers are used to. That’s where ThreeJS comes into play. ThreeJS gives web developers access to the power of WebGL without all the knowledge required by the underlying API.

    Basic usage

    The library has good documentation with many examples. You’ll notice that some parts of the documentation are not complete yet (feel free to help). However, the library and examples source code are very well structured, so do not hesitate to read the source.

    Even though ThreeJS simplifies many things, you still have to be comfortable with some basic 3D concepts. Basically, ThreeJS uses the following concepts:

    1. The scene: the place where all 3D objects will be placed and manipulated in a 3D space.
    2. The camera: a special 3D object that will define the rendering point of view as well as the type of spatial rendering (perspective or isometric)
    3. The renderer: the object in charge of using the scene and the camera to render your 3D image.

    Within the scene, you will have several 3D objects which can be of the following types:

    • A mesh: A mesh is an object made of a geometry (the shape of your object) and a material (its colors and texture)
    • A light point: A special object that defines a light source to highlight all your meshes.
    • A camera, as described above.

    The following example will draw a simple wireframe sphere inside an HTML element with the id “myPlanet”.

    /*
     * First, let's prepare some context
     */
    // The WIDTH of the scene to render
    var __WIDTH__  = 400,
    // The HEIGHT of the scene to render
        __HEIGHT__ = 400,
    // The angle of the camera that will show the scene
    // It is expressed in degrees
        __ANGLE__ = 45,
    // The shortest distance the camera can see
        __NEAR__  = 1,
    // The farthest distance the camera can see
        __FAR__   = 1000,
    // The basic hue used to color our object
        __HUE__   = 0;

    /*
     * To render a 3D scene, ThreeJS needs 3 elements:
     * A scene where to put all the objects
     * A camera to manage the point of view
     * A renderer place to show the result
     */
    var scene  = new THREE.Scene(),
        camera = new THREE.PerspectiveCamera(__ANGLE__,
                                             __WIDTH__ / __HEIGHT__,
                                             __NEAR__,
                                             __FAR__),
        renderer = new THREE.WebGLRenderer();

    /*
     * Let's prepare the scene
     */
    // Add the camera to the scene
    scene.add(camera);
    // As all objects, the camera is put at the
    // 0,0,0 coordinate, let's pull it back a little
    camera.position.z = 300;
    // We need to define the size of the renderer
    renderer.setSize(__WIDTH__, __HEIGHT__);
    // Let's attach our rendering zone to our page
    document.getElementById('myPlanet').appendChild(renderer.domElement);

    /*
     * Now we are ready, we can start building our sphere
     * To do this, we need a mesh defined with:
     *  1. A geometry (a sphere)
     *  2. A material (a color that reacts to light)
     */
    var geometry, material, mesh;

    // First let's build our geometry
    // There are other parameters, but you basically just
    // need to define the radius of the sphere and the
    // number of its vertical and horizontal divisions.
    // The 2 last parameters determine the number of
    // vertices that will be produced: The more vertices you use,
    // the smoother the form; but it will be slower to render.
    // Make a wise choice to balance the two.
    geometry = new THREE.SphereGeometry( 100, 20, 20 );

    // Then, prepare our material
    var myMaterial = {
        wireframe : true,
        wireframeLinewidth : 2
    };

    // We just have to build the material now
    material = new THREE.MeshPhongMaterial( myMaterial );
    // Add some color to the material
    material.color.setHSV(__HUE__, 1, 1);

    // And we can build our mesh
    mesh = new THREE.Mesh( geometry, material );
    // Let's add the mesh to the scene
    scene.add( mesh );

    /*
     * To be sure that we will see something,
     * we need to add some light to the scene
     */
    // Let's create a point light
    var pointLight = new THREE.PointLight(0xFFFFFF);
    // and set its position
    pointLight.position.x = -100;
    pointLight.position.y = 100;
    pointLight.position.z = 400;
    // Now, we can add it to the scene
    scene.add( pointLight );

    // And finally, it's time to see the result
    renderer.render( scene, camera );

    And if you want to animate it (for example, make the sphere spin), it’s this easy:

    function animate() {
        // beware, you'll maybe need a shim
        // to use requestAnimationFrame properly
        requestAnimationFrame( animate );
        // First, rotate the sphere
        mesh.rotation.y -= 0.003;
        // Then render the scene
        renderer.render( scene, camera );
    }
    // kick off the loop
    animate();

    JSFiddle demo.

    Advanced usage

    Once you master the basics, ThreeJS provides you with some advanced tools.

    Rendering system

    As an abstraction layer, ThreeJS offers options to render a scene with something other than WebGL. You can use the Canvas 2D API as well as SVG to perform your rendering. There are some differences between these rendering contexts. The most obvious one is performance. Because WebGL is hardware accelerated, rendering complex scenes is amazingly faster with it. On the other hand, because WebGL does not always deal well with anti-aliasing, SVG or Canvas 2D rendering can be better if you want to perform some cel-shading (cartoon-like) effects. As a special advantage, SVG rendering gives you a full DOM tree of objects, which can be useful if you want access to those objects. It can have a high cost in terms of performance (especially if you animate your scene), but it saves you from rebuilding a full retained-mode graphics API.
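    Those trade-offs can be summarized in a hypothetical chooser function (the function and its capability flags are my illustration, not a ThreeJS API):

    ```javascript
    // Summarize the renderer trade-offs described above: prefer SVG when
    // you need a DOM tree of the scene, WebGL for raw speed, and Canvas 2D
    // as the broadly supported fallback.
    function pickRenderer(caps) {
        if (caps.needsDomAccess) return 'svg';
        if (caps.hasWebGL)       return 'webgl';
        return 'canvas2d';
    }
    ```

    In a real page, the hasWebGL flag would come from a capability check (for example, trying to obtain a WebGL context) rather than being hard-coded.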

    Mesh and particles

    ThreeJS is perfect for rendering on top of WebGL, but it is not an authoring tool. To model 3D objects, you have a choice of 3D software. Conveniently, ThreeJS is available with many scripts that make it easy to import meshes from several sources (Examples include: Blender, 3DSMax or the widely supported OBJ format).

    It’s also possible to easily deploy particle systems, as well as to use fog, matrices and custom shaders. ThreeJS also comes with a few pre-built materials: Basic, Face, Lambert, Normal and Phong. A WebGL developer will be able to build their own on top of the library, which provides some really good helpers. Obviously, building such custom things requires really specific skills.

    Animating mesh

    While using requestAnimationFrame is the easiest way to animate a scene, ThreeJS provides a couple of useful tools to animate meshes individually: a full API to define how to animate a mesh, and the ability to use “bones” to morph and change a mesh.

    Limits and precaution

    One of the biggest limitations of ThreeJS is related to WebGL. If you want to use it to render your scene, you are constrained by the limitations of this technology. You become hardware dependent. All browsers that claim to support WebGL have strong requirements in terms of hardware support. Some browsers will not render anything if they do not run on appropriate hardware. The best way to avoid trouble is to use a library such as Modernizr to switch between rendering systems based on each browser’s capabilities. However, take care when using non-WebGL rendering systems, because they are limited (e.g. the Phong material is only supported in a WebGL context) and infinitely slower.

    In terms of browser support, ThreeJS supports all browsers that support WebGL, Canvas2D or SVG, which means: Firefox 3.6+, Chrome 9+, Opera 11+, Safari 5+ and even Internet Explorer 9+ if you do not use the WebGL rendering mode. If you want to rely on WebGL, the support is more limited: Firefox 4+, Chrome 9+, Opera 12+, Safari 5.1+ only. You can forget Internet Explorer (even the upcoming IE10) and almost all mobile browsers currently available.


    ThreeJS drastically simplifies the process of producing 3D images directly in the browser. It gives you the ability to create amazing visual effects with an easy-to-use API. By empowering you, it allows you to unleash your creativity.

    In conclusion, here are some cool usages of ThreeJS: