JavaScript Articles

  1. The Web Developer Toolbox: Raphaël

    This is the first of a series of articles dedicated to the useful libraries that all web developers should have in their toolbox. My intent is to show you what those libraries can do and help you to use them at their best. This first article is dedicated to the Raphaël library.


    Raphaël is a library originally written by Dmitry Baranovskiy and is now part of Sencha Labs.

    The goal of this library is to simplify working with vector graphics on the Web. Raphaël relies on the SVG W3C Recommendation (which is well supported in all modern browsers) and falls back to the Microsoft VML language to address legacy versions of Internet Explorer. It also tries to iron out inconsistencies across SVG implementations, such as SVG animations. As a consequence, Raphaël is a very nice wrapper for producing consistent, kick-ass graphics all over the Web.

    Basic usage

    The library has very good documentation with many examples. Do not hesitate to use it extensively.

    The following example will draw a simple red circle inside an HTML element with the id “myPaper”.

    // The following example creates a drawing zone
    // that is 100px wide by 100px high.
    // This drawing zone is created at the top left corner
    // of the #myPaper element (or its top right corner in
    // dir="rtl" elements)
    var paper = Raphael("myPaper", 100, 100);
    // The circle will have a radius of 40
    // and its center will be at coordinate 50,50
    var c =, 50, 40);
    // The circle will be filled with red
    // Note that the name of each element property
    // follows the SVG recommendation
    c.attr({
        fill: "#900"
    });

    Advanced usage

    Despite the fact that Raphaël reduces the possibilities offered by SVG (mainly because of the VML fallback) it allows one to perform very advanced stuff:

    • Advanced matrix transformations
    • Advanced event handlers
    • Cross-browser animations
    • An easy drag-and-drop system
    • Path intersection detection

    Raphaël is also extensible through an extension system that allows you to build custom functions.

    For example, here’s an extension to draw pie charts:

    /**
     * Pie method
     * cx: x position of the rotating center of the pie
     * cy: y position of the rotating center of the pie
     * r : radius of the pie
     * a1: angle in degrees at which the pie starts
     * a2: angle in degrees at which the pie ends
     */
    Raphael.fn.pie = function (cx, cy, r, a1, a2) {
        var d,
            flag = (a2 - a1) > 180;
        a1 = (a1 % 360) * Math.PI / 180;
        a2 = (a2 % 360) * Math.PI / 180;
        d = [
            // Setting the rotating axis of the pie
            "M", cx, cy,
            // Go to the start of the curve
            "l", r * Math.cos(a1), r * Math.sin(a1),
            // Drawing the curve to its end
            "A", r, r, "0", +flag, "1",
            cx + r * Math.cos(a2), cy + r * Math.sin(a2),
            // Closing the path
            "z"
        ];
        return this.path(d.join(' '));
    };

    Note: For the example above you have to be familiar with the SVG path syntax (Raphaël converts it to the VML syntax under the hood), but once it’s written you can reuse it like any other Raphaël primitive. See this extension at work drawing a color wheel on jsFiddle.
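    If the SVG path syntax is new to you, it can help to see the string such an extension actually builds. Here is a standalone sketch in plain JavaScript, with no Raphaël dependency; the function name piePath is ours:

```javascript
// Assemble the same "M … l … A … z" SVG path commands as the
// pie extension, but return the raw path string.
function piePath(cx, cy, r, a1, a2) {
  var flag = (a2 - a1) > 180;            // large-arc flag for arcs over 180°
  a1 = (a1 % 360) * Math.PI / 180;       // degrees to radians
  a2 = (a2 % 360) * Math.PI / 180;
  return [
    "M", cx, cy,                          // move to the pie's centre
    "l", r * Math.cos(a1), r * Math.sin(a1), // line to the start of the arc
    "A", r, r, "0", +flag, "1",           // elliptical arc to the end point
    cx + r * Math.cos(a2), cy + r * Math.sin(a2),
    "z"                                   // close the path back to the centre
  ].join(" ");
}

// A quarter pie around (50,50) with radius 40:
console.log(piePath(50, 50, 40, 0, 90));
// "M 50 50 l 40 0 A 40 40 0 0 1 50 90 z"
```

    Feeding that string to any SVG path renderer draws the wedge, which is exactly what the extension's this.path() call does.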

    JSFiddle demo.

    Limits and precaution

    If you are not familiar with SVG and/or want to support legacy Microsoft Internet Explorer browsers, this tool is made for you. However, it’s a JavaScript library, which means that you have to know JavaScript to use it. You cannot feed Raphaël raw SVG and ask it to parse and interpret it (other libraries exist for that).

    In terms of browser support, Raphaël gives you a large base. Raphaël currently supports Firefox 3.0+, Safari 3.0+, Chrome 5.0+, Opera 9.5+ and Internet Explorer 6.0+.

    In fact, the only browsers that cannot take advantage of Raphaël are the native browser of Android 1.x and 2.x (and obviously many feature-phone browsers). This is because those browsers do not support any vector language. Android starts (partially) supporting SVG with Android 3.0, so take care if you want to target all mobile devices.


    Raphaël was the first library to allow web designers and developers to use SVG in the wild. If you want to write nice applications without needing the full SVG DOM API, or if you have to support legacy browsers, this library will give you some power.

    In conclusion, here are some cool usages of Raphaël:

  2. Click highlights with CSS transitions

    When you watch screencasts, you’ll notice that some software adds a growing dot wherever the presenter clicks to make the clicks more obvious. Using CSS transitions, this can be done very simply in JavaScript, too.
    click highlighting with CSS

    Check out the following demo on JSFiddle and you’ll see what we mean. When you click on the document, a dot grows where you clicked and then vanishes again. If you keep the mouse button pressed, the dot stays and you can move it around.

    JSFiddle demo.

    Moving the dot

    The code is incredibly simple. We generate a DIV element and move it with the mouse. For this, we need JavaScript. Check the comments to see what is going on:

      // create a DIV element, give it an ID and add it
      // to the body
      var plot = document.createElement('div'),
          pressed = false; = 'clickhighlightplot';
      document.body.appendChild(plot);
      // define offset as half the width of the DIV
      // (this is needed to put the mouse cursor in
      // its centre)
      var offset = plot.offsetWidth / 2;
      // when the mouse is moved and the mouse button is pressed,
      // move the DIV to the current position of the mouse cursor
      document.addEventListener('mousemove', function(ev) {
        if (pressed) { moveplot(ev.pageX, ev.pageY); }
      }, false);
      // when the mouse button is pressed, add a class called
      // 'down' to the body element and set pressed to true.
      // Then move the DIV to the current mouse position.
      document.addEventListener('mousedown', function(ev) {
        pressed = true;
        moveplot(ev.pageX, ev.pageY);
      }, false);
      // when the mouse is released, remove the 'down' class
      // from the body and set pressed to false
      document.addEventListener('mouseup', function(ev) {
        pressed = false;
      }, false);
      // move the DIV to x and y with the correct offset
      function moveplot(x, y) { = x - offset + 'px'; = y - offset + 'px';
      }

    This takes care of creating and moving the DIV and also gives us classes on the body element to play with.

    Growing the dot

    The growing of the dot uses CSS transitions. We change the scale of the dot from 0,0 to 1,1 in a certain amount of time. Notice that we need to scale down rather than up as Webkit zooms scaled elements instead of leaving crisp outlines like Firefox does (the first iteration of this script scaled a 10×10 pixel dot upwards and looked horrible).

    #clickhighlightplot {
      cursor: pointer;
      pointer-events: none;
      background: rgba(255, 255, 10, 0.7);
      width: 100px;
      height: 100px;
      position: absolute;
      border-radius: 100px;
      -webkit-transition: -webkit-transform 1s;
         -moz-transition: -moz-transform 1s;
          -ms-transition: -ms-transform 1s;
           -o-transition: -o-transform 1s;
              transition: transform 1s;
       -webkit-transform: scale(0, 0);
          -moz-transform: scale(0, 0);
           -ms-transform: scale(0, 0);
            -o-transform: scale(0, 0);
               transform: scale(0, 0);
    }
    .down #clickhighlightplot {
      -webkit-transform: scale(1, 1);
         -moz-transform: scale(1, 1);
          -ms-transform: scale(1, 1);
           -o-transform: scale(1, 1);
              transform: scale(1, 1);
    }

    Fixing the “covered clickable elements” issue

    The main annoyance with the way the script works now is that you cover elements with the growing dot, making them effectively unclickable. That might not be what you want, which is why we need to make sure the dot covers them but still lets clicks go through. The good news is that there is a CSS property called pointer-events for exactly that. It is supported in Firefox and Webkit, but sadly not in IE and Opera.

    Moving from JS and CSS to pure JS (but using CSS)

    Now, whilst it is cool to be able to maintain all the look and feel in CSS, the issue is that we need to repeat all the vendor prefixes, and we run into the problem that browsers might not support what we want to do. That’s why it sometimes makes more sense to move the whole functionality into JavaScript, where we have the chance to test for support and write less code.
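    Testing for support in JavaScript can be as simple as probing a style object for the first transform property the browser knows about. This is an illustrative sketch, not the actual clickhighlight.js source; the helper name is ours:

```javascript
// Return the first supported (possibly vendor-prefixed) transform
// property found on a style object, or null if there is none.
function getTransformProperty(style) {
  var candidates = ['transform', 'WebkitTransform', 'MozTransform',
                    'msTransform', 'OTransform'];
  for (var i = 0; i < candidates.length; i++) {
    if (candidates[i] in style) {
      return candidates[i];
    }
  }
  return null; // no transform support detected
}

// In a browser you would call it with an element's style object:
// var prop = getTransformProperty(;
// if (prop) {[prop] = 'scale(1, 1)'; }
```

    With the detected property name in hand, the script can set one style instead of writing five prefixed CSS declarations, and it can bail out gracefully on browsers without transforms.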

    Clickhighlight.js does all that. Instead of keeping the maintenance in the CSS (and requiring you to add all the vendor prefixes) you can now have the effect by simply adding the script and calling its init() method:

    <script src="clickhighlight.js"></script>

    Elements with the class “nohighlight” will not get the effect. You can change the look by passing an initialisation object:

    clickhighlight.init({
      size:        '300px', // the maximum size of the dot
      duration:    '2s',    // duration of the effect (seconds)
      colour:      'green', // the dot colour – use RGBA for transparency
      nohighlight: 'notme'  // class of elements not to highlight
    });

    You can see it in action in this video on YouTube:

    The next steps could be to add touch support and to turn this into a bookmarklet so you can use it on any page. Any other ideas?

  3. Porting “Me & My Shadow” to the Web – C++ to JavaScript/Canvas via Emscripten

    Editor’s note: This is a guest post by Alon Zakai of the Mozilla Emscripten team. Thanks, Alon!

    Me & My Shadow is an open source 2D game, with clever gameplay in which you control not one character but two. I happened to hear about it recently when they released a 0.3 version:

    Since I’m looking for games to port to the Web, I figured this was a good candidate. It was quite easy to port; here is the result: Me & My Shadow on the Web

    Me and my shadow

    You can also get the source on GitHub.

    The port was done automatically by compiling the original code to JavaScript using Emscripten, an open-source C++ to JavaScript compiler that uses LLVM. Using a compiler like this allows the game to just be compiled, instead of manually rewriting it in JavaScript, so the process can take almost no time.

    The compiled game works almost exactly like the desktop version on the machines and browsers I’ve tested. Interestingly, performance looks very good. In this case that’s mainly because most of what the game does is blit images. It uses the cross-platform SDL API, a wrapper library for things like opening a window, getting input, loading images and rendering text (so it is exactly what a game like this needs). Emscripten supports SDL through native canvas calls, so when you compile a game that uses SDL into JavaScript, it will use Emscripten’s SDL implementation. That implementation performs SDL blit operations using drawImage calls and so forth, which browsers generally hardware-accelerate these days, so the game runs about as fast as it would natively.

    For example, if the C++ code has

    SDL_BlitSurface(sprite, NULL, screen, position)

    then that means to blit the entire bitmap represented by sprite into the screen, at a specific position. Emscripten’s SDL implementation does some translation of arguments, and then calls

    ctx.drawImage(src.canvas, sr.x, sr.y, sr.w, sr.h, dr.x, dr.y, sr.w, sr.h);

    which draws the sprite, contained in src.canvas, into the context representing the screen, at the correct position and size. In other words, the C++ code is translated automatically into code that uses native HTML canvas operations in an efficient manner.

    There are some caveats, though. The main problem is browser support for necessary features; the two I ran into here were typed arrays and the Blob constructor:

    • Typed arrays are necessary to run compiled C++ code quickly and with maximum compatibility. Emscripten can compile code without them, but the result is slower and needs manual correction for compatibility. Thankfully, all browsers are getting typed arrays: Firefox, Chrome and Opera already have them, Safari was only missing Float64Array until recently, I believe, and IE will get them in IE10.
    • The Blob constructor is necessary because this game uses Emscripten’s new compression option. It takes all the data files (150 or so), packs them into a single file and runs LZMA on it; the game in the browser then downloads that file, decompresses it and splits it up. This makes the download much smaller (but does mean there is a short pause to decompress). The issue, though, is that we end up with the data for each file in a typed array. It’s easy to use BlobBuilder for images, but audio files need the MIME type set or they fail to decode, and only the Blob constructor supports that. It looks like only Firefox has the Blob constructor so far; I’ve been told on Twitter there might be a workaround for Chrome that I am hoping to hear more about. Not sure about other browsers. But the game should still work, just without sound effects and music.
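    As a minimal sketch of the Blob-constructor capability that second bullet depends on (illustrative values, not the game’s actual loader):

```javascript
// Wrap typed-array data in a Blob with an explicit MIME type –
// the part BlobBuilder could not do – so that audio data decodes.
var bytes = new Uint8Array([1, 2, 3]); // stand-in for decompressed file data
var blob = new Blob([bytes], { type: 'audio/ogg' });
// blob.size is 3 and blob.type is 'audio/ogg'.
// In the browser, an object URL for the blob can then be handed
// to an <audio> element:
// var url = URL.createObjectURL(blob);
```

    Without the type option, the bytes are still there, but the media stack has no way to know which decoder to use.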

    Another caveat is that there is some unavoidable amount of manual porting necessary:

    JavaScript main loops must be written in an asynchronous way: a callback for each frame. Thankfully, games are usually written so that the main loop can easily be refactored into a function that does one iteration, and that was the case here. That function is then called each frame from JavaScript. However, there are other cases of synchronous code that are more annoying; for example, the fadeouts that happen when a menu item is picked are done synchronously (draw, SDL_Delay, draw, etc.). This same problem turned up when I ported Doom; I guess it’s a common code pattern. So I just disabled those fadeouts for now; if you do want them in a game you port, you’d need to refactor them to be asynchronous.
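    The refactoring described above can be sketched like this (a hedged illustration, not the actual Me & My Shadow code; the names are ours):

```javascript
// Instead of a blocking `while (running) { update(); render(); }`
// loop, the loop body becomes a function that performs exactly one
// iteration and is then re-scheduled once per frame.
function makeFrameStepper(game) {
  return function step() {
    game.update();             // one tick of game logic
    game.render();             // draw the resulting frame
    if (game.running) {
      // in a real port this would be requestAnimationFrame(step)
      setTimeout(step, 1000 / 60);
    }
  };
}
```

    Emscripten’s emscripten_set_main_loop API exists to hook a compiled per-iteration function into the browser’s scheduler in just this way.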

    Aside from that, everything pretty much just worked. (The sole exception was that this code fell prey to an LLVM LTO bug, but Rafael fixed it.) So in conclusion I would argue that there is no reason not to run games like these on the web: They are easy to port, and they run nice and fast.

  4. Old tricks for new browsers – a talk at jQuery UK 2012

    Last Friday around 300 developers went to Oxford, England to attend jQuery UK and learn about all that is hot and new about their favourite JavaScript library. Imagine their surprise when I went on stage to tell them that a lot of what jQuery is used for these days doesn’t need it. If you want to learn more about the talk itself, there is a detailed report, slides and the audio recording available.

    The point I was making is that libraries like jQuery were first and foremost there to give us a level playing field as developers. We should not have to know the quirks of every browser and this is where using a library allows us to concentrate on the task at hand and not on how it will fail in 10 year old browsers.

    jQuery’s revolutionary new way of looking at web design was based on two main things: accessing the document via CSS selectors rather than the unwieldy DOM methods and chaining of JavaScript commands. jQuery then continued to make event handling and Ajax interactions easier and implemented the Easing equations to allow for slick and beautiful animations.

    However, this simplicity came at a price: developers seem to have forgotten a few very simple techniques that allow you to write terse, easy-to-understand JavaScript that doesn’t rely on jQuery. Amongst the most powerful of these are event delegation and assigning classes to parent elements while leaving the main work to CSS.

    Event delegation

    Event delegation means that instead of applying an event handler to each of the child elements in an element, you assign one handler to the parent element and let the browser do the rest for you. Events bubble up the DOM, firing on the element the user interacted with and on each of its parent elements. That way, all you have to do is compare the event’s target with the element you want to access. Say you have a to-do list in your document. All the HTML you need is:

    <ul id="todo">
      <li>Go round Mum's</li>
      <li>Get Liz back</li>
      <li>Sort life out!</li>
    </ul>

    In order to add event handlers to these list items, in jQuery beginners are tempted to do a $('#todo li').click(function(ev){...}); or – even worse – add a class to each list item and then access these. If you use event delegation all you need in JavaScript is:

    document.querySelector('#todo').addEventListener('click',
      function(ev) {
        var t =;
        if (t.tagName === 'LI') {
          alert(t + t.innerHTML);
          ev.preventDefault();
        }
    }, false);

    Newer browsers have the querySelector and querySelectorAll methods (see support here), which give you access to DOM elements via CSS selectors – something we learned from jQuery. We use querySelector here to access the to-do list and then apply an event listener for click to the whole list.

    We read out which element has been clicked from and compare its tagName to LI (this property is always uppercase). This means we will never execute the rest of the code when the user, for example, clicks on the list itself. We call preventDefault() to tell the browser not to do anything else – we now take over.

    You can try this out in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of event delegation is that you can now add new items without ever having to re-assign handlers. As the main click handler is on the list, new items automatically get the functionality. Try it out in this fiddle or embedded below:

    JSFiddle demo.

    Leaving styling and DOM traversal to CSS

    Another big use case of jQuery is to access a lot of elements at once and change their styling by manipulating their styles collection with the jQuery css() method. This is seemingly handy but also annoying, as you put styling information into your JavaScript. What if there is a rebranding later on? Where do people find the colours to change? It is much simpler to add a class to the element in question and leave the rest to CSS. If you think about it, a lot of the time we repeat the same CSS selectors in jQuery and in the stylesheet. That seems redundant.

    Adding and removing classes in the past was a bit of a nightmare. The way to do it was via the className property of a DOM element, which contained a string. It was then up to you to find out whether a certain class name was in that string, and to add or remove classes by appending to the string or calling replace() on it. Again, browsers learned from jQuery and now have a classList object (support here) that allows easy manipulation of CSS classes applied to elements. You have add(), remove(), toggle() and contains() to play with.
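    For comparison, here is roughly what that string juggling looked like, written as small pure helpers (the names are ours, and this is a sketch rather than a full classList polyfill):

```javascript
// Does the space-separated class string contain the class name?
function hasClass(className, name) {
  return (' ' + className + ' ').indexOf(' ' + name + ' ') !== -1;
}

// Return a new class string with the class added (once).
function addClass(className, name) {
  return hasClass(className, name)
    ? className
    : (className + ' ' + name).replace(/^\s+/, '');
}

// Return a new class string with the first occurrence removed.
function removeClass(className, name) {
  return (' ' + className + ' ')
    .replace(' ' + name + ' ', ' ')
    .replace(/^\s+|\s+$/g, '');
}
```

    Compare that to element.classList.add('current') and you see why classList was worth standardising.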

    This makes it dead easy to style a lot of elements and to single them out for different styling. Let’s say for example we have a content area and want to show one at a time. It is tempting to loop over the elements and do a lot of comparison, but all we really need is to assign classes and leave the rest to CSS. Say our content is a navigation pointing to articles. This works in all browsers:

    <header><h1>Profit plans</h1></header>
    <section id="content">
      <nav id="nav">
        <ul>
          <li><a href="#step1">Step 1: Collect Underpants</a></li>
          <li><a href="#step2">Step 2: ???</a></li>
          <li><a href="#step3">Step 3: Profit!</a></li>
        </ul>
      </nav>
      <article id="step1">
        <header><h1>Step 1: Collect Underpants</h1></header>
        <p>
          Make sure Tweek doesn't expect anything, then steal underwear
          and bring it to the mine.
        </p>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step2">
        <header><h1>Step 2: ???</h1></header>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step3">
        <header><h1>Step 3: Profit</h1></header>
        <p>Yes, profit will come. Let's sing the underpants gnome song.</p>
        <footer><a href="#nav">back to top</a></footer>
      </article>
    </section>
    Now in order to hide all the articles, all we do is assign a ‘js’ class to the body of the document and store the first link and first article in the content section in variables. We assign a class called ‘current’ to each of those.

    /* grab all the elements we need */
    var nav = document.querySelector( '#nav' ),
        content = document.querySelector( '#content' ),
    /* grab the first article and the first link */
        article = document.querySelector( '#content article' ),
        link = document.querySelector( '#nav a' );
    /* hide everything by applying a class called 'js' to the body */
    document.body.classList.add( 'js' );
    /* show the current article and link */ 
    article.classList.add( 'current' );
    link.classList.add( 'current' );

    Together with a simple CSS, this hides them all off screen:

    /* change content to be a content panel */
    .js #content {
      position: relative;
      overflow: hidden;
      min-height: 300px;
    }
    /* push all the articles up */
    .js #content article {
      position: absolute;
      top: -700px;
      left: 250px;
    }
    /* hide 'back to top' links */
    .js article footer {
      position: absolute;
      left: -20000px;
    }

    In this case we move the articles up. We also hide the “back to top” links as they are redundant when we hide and show the articles. To show and hide the articles all we need to do is assign a class called “current” to the one we want to show that overrides the original styling. In this case we move the article down again.

    /* keep the current article visible */
    .js #content article.current {
      top: 0;
    }

    In order to achieve that all we need to do is a simple event delegation on the navigation:

    /* event delegation for the navigation */
    nav.addEventListener( 'click', function( ev ) {
      var t =;
      if ( t.tagName === 'A' ) {
        /* remove old styles */
        link.classList.remove( 'current' );
        article.classList.remove( 'current' );
        /* get the new active link and article */
        link = t;
        article = document.querySelector( link.getAttribute( 'href' ) );
        /* show them by assigning the current class */
        link.classList.add( 'current' );
        article.classList.add( 'current' );
      }
    }, false);

    The simplicity here lies in the fact that the links already point to the elements with these IDs. So all we need to do is read the href attribute of the link that was clicked.

    See the final result in this fiddle or embedded below.

    JSFiddle demo.

    Keeping the visuals in the CSS

    Mixed with CSS transitions or animations (support here), this can be made much smoother in a very simple way:

    .js #content article {
      position: absolute;
      top: -300px;
      left: 250px;
      -moz-transition: 1s;
      -webkit-transition: 1s;
      -ms-transition: 1s;
      -o-transition: 1s;
      transition: 1s;
    }

    The transition now simply goes smoothly in one second from the state without the ‘current’ class to the one with it. In our case, moving the article down. You can add more properties by editing the CSS – no need for more JavaScript. See the result in this fiddle or embedded below:

    JSFiddle demo.

    As we also toggle the current class on the link, we can do more. It is simple to add visual extras like a “you are here” state by using CSS generated content with the :after selector (support here). That way you can add visual nice-to-haves without needing to generate HTML in JavaScript or resort to images.

    .js #nav a:hover:after, .js #nav a:focus:after, .js #nav a.current:after {
      content: '➭';
      position: absolute;
      right: 5px;
    }

    See the final result in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of this technique is that we keep all the look and feel in CSS and make it much easier to maintain. And by using CSS transitions and animations you also leverage hardware acceleration.

    Give them a go, please?

    All of these things work across the browsers we use these days and, with polyfills, can be made to work in old browsers, too. However, not everything needs to be applied to old browsers. As web developers we should look ahead and not cater to outdated technology. If the things I showed above fall back to server-side solutions or page reloads in IE6, nobody is going to be the wiser. Let’s build escalator solutions – smooth when the tech works, but still usable as stairs when it doesn’t.


  5. JavaScript on the server: Growing the Node.js Community

    Cloud9 IDE and Mozilla have been working together ever since their Bespin and ACE projects joined forces. Both organizations are committed to the success of Node.js: Mozilla due to its history with JavaScript, and Cloud9 IDE as a core contributor to Node.js and provider of the leading Node.js IDE. As part of this cooperation, this is a guest post written by Ruben Daniels and Zef Hemel of Cloud9 IDE.

    While we all know and love JavaScript as a language for browser-based scripting, few remember that, early on, it was destined to be used as a server-side language as well. Only about a year after JavaScript’s original release in Netscape Navigator 2.0 (1995), Netscape released Netscape Enterprise Server 2.0:

    Netscape Enterprise Server is the first web server to support the Java(TM) and JavaScript(TM) programming languages, enabling the creation, delivery and management of live online applications.

    This is how the web got started, all the way back in the mid-nineties. Sadly, it was not meant to be then. JavaScript on the server failed, while JavaScript in the browser became a hit. At the time, JavaScript was still very young. The virtual machines that executed JavaScript code were slow and heavyweight, and there were no tools to support and manage large JavaScript code bases. This was fine for JavaScript’s use case in the browser at the time, but not sufficient for server-side applications.


  6. Faster Canvas Pixel Manipulation with Typed Arrays

    Edit: See the section about Endianness.

    Typed Arrays can significantly increase the pixel manipulation performance of your HTML5 2D canvas Web apps. This is of particular importance to developers looking to use HTML5 for making browser-based games.

    This is a guest post by Andrew J. Baker. Andrew is a professional software engineer currently working for Ibuildings UK where his time is divided equally between front- and back-end enterprise Web development. He is a principal member of the browser-based games channel #bbg on Freenode, spoke at the first HTML5 games conference in September 2011, and is a scout for Mozilla’s WebFWD innovation accelerator.

    Eschewing the higher-level methods available for drawing images and primitives to a canvas, we’re going to get down and dirty, manipulating pixels using ImageData.

    Conventional 8-bit Pixel Manipulation

    The following example demonstrates pixel manipulation using image data to generate a greyscale moire pattern on the canvas.

    JSFiddle demo.

    Let’s break it down.

    First, we obtain a reference to the canvas element that has an id attribute of canvas from the DOM.

    var canvas = document.getElementById('canvas');

    The next two lines might appear to be a micro-optimisation, and in truth they are. But given the number of times the canvas width and height are accessed within the main loop, copying the values of canvas.width and canvas.height to the variables canvasWidth and canvasHeight respectively can have a noticeable effect on performance.

    var canvasWidth  = canvas.width;
    var canvasHeight = canvas.height;

    We now need to get a reference to the 2D context of the canvas.

    var ctx = canvas.getContext('2d');

    Armed with a reference to the 2D context of the canvas, we can now obtain a reference to the canvas’ image data. Note that here we get the image data for the entire canvas, though this isn’t always necessary.

    var imageData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);

    Getting a reference to the raw pixel data is another seemingly innocuous micro-optimisation that can also have a noticeable effect on performance.

    var data =;

    Now comes the main body of code. There are two loops, one nested inside the other. The outer loop iterates over the y axis and the inner loop iterates over the x axis.

    for (var y = 0; y < canvasHeight; ++y) {
        for (var x = 0; x < canvasWidth; ++x) {

    We draw pixels to image data in a top-to-bottom, left-to-right sequence. Remember, the y axis is inverted, so the origin (0,0) refers to the top, left-hand corner of the canvas.

    The property referenced by the variable data is a one-dimensional array of integers, where each element is in the range 0..255. is arranged in a repeating sequence so that each element refers to an individual channel. That repeating sequence is as follows:

    data[0]  = red channel of first pixel on first row
    data[1]  = green channel of first pixel on first row
    data[2]  = blue channel of first pixel on first row
    data[3]  = alpha channel of first pixel on first row
    data[4]  = red channel of second pixel on first row
    data[5]  = green channel of second pixel on first row
    data[6]  = blue channel of second pixel on first row
    data[7]  = alpha channel of second pixel on first row
    data[8]  = red channel of third pixel on first row
    data[9]  = green channel of third pixel on first row
    data[10] = blue channel of third pixel on first row
    data[11] = alpha channel of third pixel on first row

    Before we can plot a pixel, we must translate the x and y coordinates into an index representing the offset of the first channel within the one-dimensional array.

            var index = (y * canvasWidth + x) * 4;

    We multiply the y coordinate by the width of the canvas, add the x coordinate, then multiply by four. We must multiply by four because there are four elements per pixel, one for each channel.
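    That arithmetic is worth keeping at hand as a helper; as a sketch (the function name is ours):

```javascript
// Offset of the red channel of pixel (x, y) in the one-dimensional
// RGBA array: skip y full rows of canvasWidth pixels, then x more
// pixels, at four array elements (channels) per pixel.
function pixelIndex(x, y, canvasWidth) {
  return (y * canvasWidth + x) * 4;
}

pixelIndex(0, 0, 320); // 0    – first pixel, red channel
pixelIndex(1, 0, 320); // 4    – second pixel on the first row
pixelIndex(0, 1, 320); // 1280 – first pixel on the second row
```

    The green, blue and alpha channels of the same pixel then sit at the three following indices.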

    Now we calculate the colour of the pixel.

    To generate the moire pattern, we multiply the x coordinate by the y coordinate then bitwise AND the result with hexadecimal 0xff (decimal 255) to ensure that the value is in the range 0..255.

            var value = x * y & 0xff;

    Greyscale colours have red, green and blue channels with identical values. So we assign the same value to each of the red, green and blue channels. The sequence of the one-dimensional array requires us to assign a value for the red channel at index, the green channel at index + 1, and the blue channel at index + 2.

            data[index]   = value;	// red
            data[++index] = value;	// green
            data[++index] = value;	// blue

    Here we can safely increment index, because we recalculate it at the start of each inner-loop iteration.

    The last channel we need to take into account is the alpha channel at index + 3. To ensure that the plotted pixel is 100% opaque, we set the alpha channel to a value of 255 and terminate both loops.

            data[++index] = 255;	// alpha

    For the altered image data to appear in the canvas, we must put the image data at the origin (0,0).

    ctx.putImageData(imageData, 0, 0);

    Note that because data is a reference to, we don’t need to explicitly reassign it.

    The ImageData Object

    At time of writing this article, the HTML5 specification is still in a state of flux.

    Earlier revisions of the HTML5 specification declared the ImageData object like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute CanvasPixelArray data;
    };

    With the introduction of typed arrays, the type of the data attribute has changed from CanvasPixelArray to Uint8ClampedArray, and the interface now looks like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute Uint8ClampedArray data;
    };

    At first glance, this doesn’t appear to offer us any great improvement, aside from using a type that is also used elsewhere within the HTML5 specification.

    But we’re now going to show you how to leverage the increased flexibility gained by deprecating CanvasPixelArray in favour of Uint8ClampedArray.

    Previously, we were forced to write colour values to the image data one-dimensional array a single channel at a time.

    Taking advantage of typed arrays and the ArrayBuffer and ArrayBufferView objects, we can write colour values to the image data array an entire pixel at a time!

    Faster 32-bit Pixel Manipulation

    Here’s an example that replicates the functionality of the previous example, but uses unsigned 32-bit writes instead.

    NOTE: If your browser doesn’t use Uint8ClampedArray as the type of the data property of the ImageData object, this example won’t work!

    JSFiddle demo.

    The first deviation from the original example begins with the instantiation of an ArrayBuffer called buf.

    var buf = new ArrayBuffer(imageData.data.length);

    This ArrayBuffer will be used to temporarily hold the contents of the image data.

    Next we create two ArrayBuffer views. One that allows us to view buf as a one-dimensional array of unsigned 8-bit values and another that allows us to view buf as a one-dimensional array of unsigned 32-bit values.

    var buf8 = new Uint8ClampedArray(buf);
    var data = new Uint32Array(buf);

    Don’t be misled by the term ‘view’. Both buf8 and data can be read from and written to. More information about ArrayBufferView is available on MDN.
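To make the shared-memory relationship concrete, here is a small sketch using a single 4-byte buffer (one pixel): a write through the 32-bit view is immediately visible through the 8-bit view, and vice versa.

```javascript
// One 4-byte buffer (a single pixel) with two views over it.
var buf = new ArrayBuffer(4);
var buf8 = new Uint8ClampedArray(buf);
var data = new Uint32Array(buf);

// Writing through the 32-bit view is visible through the 8-bit view.
// 0x01010101 has identical bytes, so the result is endian-independent.
data[0] = 0x01010101;
console.log(buf8[0], buf8[1], buf8[2], buf8[3]); // 1 1 1 1

// And vice versa: clearing one byte changes the 32-bit value.
buf8[3] = 0;
console.log(data[0] !== 0x01010101); // true
```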

    The next alteration is to the body of the inner loop. We no longer need to calculate the index in a local variable so we jump straight into calculating the value used to populate the red, green, and blue channels as we did before.

    Once calculated, we can proceed to plot the pixel using only one assignment. The values of the red, green, and blue channels, along with the alpha channel are packed into a single integer using bitwise left-shifts and bitwise ORs.

            data[y * canvasWidth + x] =
                (255   << 24) |	// alpha
                (value << 16) |	// blue
                (value <<  8) |	// green
                 value;		// red

    Because we’re dealing with unsigned 32-bit values now, there’s no need to multiply the offset by four.
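For a concrete illustration of the packing, a mid-grey value of 0x80 combines with an opaque alpha into the single 32-bit value 0xff808080 (on a little-endian processor those bytes land in memory in red, green, blue, alpha order). The >>> 0 below merely forces an unsigned interpretation for display; storing into a Uint32Array does this implicitly.

```javascript
var value = 0x80; // mid-grey
var pixel = ((255   << 24) | // alpha
             (value << 16) | // blue
             (value <<  8) | // green
              value) >>> 0;  // red

console.log(pixel.toString(16)); // "ff808080"
```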

    Having terminated both loops, we must now assign the contents of the ArrayBuffer buf to imageData.data. We use the Uint8ClampedArray.set() method to set the data property to the Uint8ClampedArray view of our ArrayBuffer by specifying buf8 as the parameter.

    imageData.data.set(buf8);

    Finally, we use putImageData() to copy the image data back to the canvas.

    Testing Performance

    We’ve told you that using typed arrays for pixel manipulation is faster. We really should test it though, and that’s what this jsperf test does.

    At the time of writing, 32-bit pixel manipulation is indeed faster.

    Wrapping Up

    There won’t always be occasions where you need to resort to manipulating canvas at the pixel level, but when you do, be sure to check out typed arrays for a potential performance increase.

    EDIT: Endianness

    As has quite rightly been highlighted in the comments, the code originally presented does not correctly account for the endianness of the processor on which the JavaScript is being executed.

    The code below, however, rectifies this oversight by testing the endianness of the target processor and then executing a different version of the main loop dependent on whether the processor is big- or little-endian.

    JSFiddle demo.
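One common way to probe endianness at run time is to write a known 32-bit value into a buffer and inspect which byte is stored first. This is a sketch of that technique; the linked demo may perform the test differently.

```javascript
// Write a 32-bit value whose bytes are all distinct.
var probe = new ArrayBuffer(4);
new Uint32Array(probe)[0] = 0x0a0b0c0d;

// On a little-endian processor the least significant byte (0x0d)
// is stored first; on a big-endian processor it is stored last.
var isLittleEndian = new Uint8Array(probe)[0] === 0x0d;

console.log(typeof isLittleEndian); // "boolean"
```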

    A corresponding jsperf test for this amended code has also been written and shows near-identical results to the original jsperf test. Therefore, our final conclusion remains the same.

    Many thanks to all commenters and testers.

  7. Firefox – tons of tools for web developers!

    One of the goals of Firefox has always been to make the lives of web developers as easy and productive as possible, by providing tools and a very extensible web browser to enable people to create amazing things. The idea here is to list many of the tools and options available to you as a web developer using Firefox.