Performance Articles

Sort by:


  1. Faster Canvas Pixel Manipulation with Typed Arrays

    Edit: See the section about Endianness.

    Typed Arrays can significantly increase the pixel manipulation performance of your HTML5 2D canvas Web apps. This is of particular importance to developers looking to use HTML5 for making browser-based games.

    This is a guest post by Andrew J. Baker. Andrew is a professional software engineer currently working for Ibuildings UK where his time is divided equally between front- and back-end enterprise Web development. He is a principal member of the browser-based games channel #bbg on Freenode, spoke at the first HTML5 games conference in September 2011, and is a scout for Mozilla’s WebFWD innovation accelerator.

    Eschewing the higher-level methods available for drawing images and primitives to a canvas, we’re going to get down and dirty, manipulating pixels using ImageData.

    Conventional 8-bit Pixel Manipulation

    The following example demonstrates pixel manipulation using image data to generate a greyscale moire pattern on the canvas.

    JSFiddle demo.

    Let’s break it down.

    First, we obtain a reference to the canvas element that has an id attribute of canvas from the DOM.

    var canvas = document.getElementById('canvas');

    The next two lines might appear to be a micro-optimisation and in truth they are. But given the number of times the canvas width and height are accessed within the main loop, copying the values of canvas.width and canvas.height to the variables canvasWidth and canvasHeight, respectively, can have a noticeable effect on performance.

    var canvasWidth  = canvas.width;
    var canvasHeight = canvas.height;

    We now need to get a reference to the 2D context of the canvas.

    var ctx = canvas.getContext('2d');

    Armed with a reference to the 2D context of the canvas, we can now obtain a reference to the canvas’ image data. Note that here we get the image data for the entire canvas, though this isn’t always necessary.

    var imageData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);

    Again, another seemingly innocuous micro-optimisation: getting a reference to the raw pixel data can also have a noticeable effect on performance.

    var data =;

    Now comes the main body of code. There are two loops, one nested inside the other. The outer loop iterates over the y axis and the inner loop iterates over the x axis.

    for (var y = 0; y < canvasHeight; ++y) {
        for (var x = 0; x < canvasWidth; ++x) {

    We draw pixels to image data in a top-to-bottom, left-to-right sequence. Remember, the y axis is inverted, so the origin (0,0) refers to the top, left-hand corner of the canvas.

    The property referenced by the variable data,, is a one-dimensional array of integers, where each element is in the range 0..255. It is arranged in a repeating sequence so that each element refers to an individual channel. That repeating sequence is as follows:

    data[0]  = red channel of first pixel on first row
    data[1]  = green channel of first pixel on first row
    data[2]  = blue channel of first pixel on first row
    data[3]  = alpha channel of first pixel on first row
    data[4]  = red channel of second pixel on first row
    data[5]  = green channel of second pixel on first row
    data[6]  = blue channel of second pixel on first row
    data[7]  = alpha channel of second pixel on first row
    data[8]  = red channel of third pixel on first row
    data[9]  = green channel of third pixel on first row
    data[10] = blue channel of third pixel on first row
    data[11] = alpha channel of third pixel on first row

    Before we can plot a pixel, we must translate the x and y coordinates into an index representing the offset of the first channel within the one-dimensional array.

            var index = (y * canvasWidth + x) * 4;

    We multiply the y coordinate by the width of the canvas, add the x coordinate, then multiply by four. We must multiply by four because there are four elements per pixel, one for each channel.

    Now we calculate the colour of the pixel.

    To generate the moire pattern, we multiply the x coordinate by the y coordinate then bitwise AND the result with hexadecimal 0xff (decimal 255) to ensure that the value is in the range 0..255.

            var value = x * y & 0xff;

    Greyscale colours have red, green and blue channels with identical values. So we assign the same value to each of the red, green and blue channels. The sequence of the one-dimensional array requires us to assign a value for the red channel at index, the green channel at index + 1, and the blue channel at index + 2.

            data[index]   = value;	// red
            data[++index] = value;	// green
            data[++index] = value;	// blue

    Note that we can safely increment index here, because it is recalculated at the start of each iteration of the inner loop.

    The last channel we need to take into account is the alpha channel at index + 3. To ensure that the plotted pixel is 100% opaque, we set the alpha channel to a value of 255 and terminate both loops.

            data[++index] = 255;	// alpha

    For the altered image data to appear in the canvas, we must put the image data at the origin (0,0).

    ctx.putImageData(imageData, 0, 0);

    Note that because data is a reference to, we don’t need to explicitly reassign it.

    The ImageData Object

    At time of writing this article, the HTML5 specification is still in a state of flux.

    Earlier revisions of the HTML5 specification declared the ImageData object like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute CanvasPixelArray data;
    };

    With the introduction of typed arrays, the type of the data attribute has altered from CanvasPixelArray to Uint8ClampedArray and now looks like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute Uint8ClampedArray data;
    };

    At first glance, this doesn’t appear to offer us any great improvement, aside from using a type that is also used elsewhere within the HTML5 specification.

    But, we’re now going to show you how you can leverage the increased flexibility introduced by deprecating CanvasPixelArray in favour of Uint8ClampedArray.

    Previously, we were forced to write colour values to the image data one-dimensional array a single channel at a time.

    Taking advantage of typed arrays and the ArrayBuffer and ArrayBufferView objects, we can write colour values to the image data array an entire pixel at a time!

    Faster 32-bit Pixel Manipulation

    Here’s an example that replicates the functionality of the previous example, but uses unsigned 32-bit writes instead.

    NOTE: If your browser doesn’t use Uint8ClampedArray as the type of the data property of the ImageData object, this example won’t work!

    JSFiddle demo.

    The first deviation from the original example begins with the instantiation of an ArrayBuffer called buf.

    var buf = new ArrayBuffer(;

    This ArrayBuffer will be used to temporarily hold the contents of the image data.

    Next we create two ArrayBuffer views: one that allows us to view buf as a one-dimensional array of unsigned 8-bit values, and another that allows us to view buf as a one-dimensional array of unsigned 32-bit values.

    var buf8 = new Uint8ClampedArray(buf);
    var data = new Uint32Array(buf);

    Don’t be misled by the term ‘view’. Both buf8 and data can be read from and written to. More information about ArrayBufferView is available on MDN.

    The next alteration is to the body of the inner loop. We no longer need to calculate the index in a local variable so we jump straight into calculating the value used to populate the red, green, and blue channels as we did before.

    Once calculated, we can proceed to plot the pixel using only one assignment. The values of the red, green, and blue channels, along with the alpha channel are packed into a single integer using bitwise left-shifts and bitwise ORs.

            data[y * canvasWidth + x] =
                (255   << 24) |	// alpha
                (value << 16) |	// blue
                (value <<  8) |	// green
                 value;		// red

    Because we’re dealing with unsigned 32-bit values now, there’s no need to multiply the offset by four.
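    As a quick sanity check (on little-endian hardware, which is what the packed write above assumes; see the endianness note at the end), one 32-bit write produces the same four bytes as the four 8-bit channel writes from the earlier example:

```javascript
// Assumes little-endian hardware: the least significant byte is stored first.
var buf = new ArrayBuffer(4);
var u8  = new Uint8ClampedArray(buf);
var u32 = new Uint32Array(buf);

var value = 0x80;
u32[0] = (255 << 24) | (value << 16) | (value << 8) | value;

// On little-endian processors the bytes land in [red, green, blue, alpha] order:
console.log(u8[0], u8[1], u8[2], u8[3]); // 128 128 128 255
```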

    Having terminated both loops, we must now assign the contents of the ArrayBuffer buf to We use the Uint8ClampedArray set() method, specifying the Uint8ClampedArray view of our ArrayBuffer, buf8, as the parameter.;

    Finally, we use putImageData() to copy the image data back to the canvas.

    Testing Performance

    We’ve told you that using typed arrays for pixel manipulation is faster. We really should test it though, and that’s what this jsperf test does.

    At time of writing, 32-bit pixel manipulation is indeed faster.

    Wrapping Up

    There won’t always be occasions where you need to resort to manipulating canvas at the pixel level, but when you do, be sure to check out typed arrays for a potential performance increase.

    EDIT: Endianness

    As has quite rightly been highlighted in the comments, the code originally presented does not correctly account for the endianness of the processor on which the JavaScript is being executed.

    The code below, however, rectifies this oversight by testing the endianness of the target processor and then executing a different version of the main loop dependent on whether the processor is big- or little-endian.

    JSFiddle demo.
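    The linked demo’s code isn’t reproduced here, but a minimal sketch of one common way to test endianness (an assumption on my part, not necessarily the demo’s exact approach) is to write a known 32-bit value into a buffer and inspect the first byte:

```javascript
// Write a known 32-bit value, then read it back byte-by-byte.
var buf = new ArrayBuffer(4);
var u32 = new Uint32Array(buf);
var u8  = new Uint8Array(buf);

u32[0] = 0x0a0b0c0d;

// Little-endian machines store the least significant byte first.
var isLittleEndian = (u8[0] === 0x0d);
console.log(isLittleEndian ? 'little-endian' : 'big-endian');
```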

    A corresponding jsperf test for this amended code has also been written and shows near-identical results to the original jsperf test. Therefore, our final conclusion remains the same.

    Many thanks to all commenters and testers.

  2. Firefox 7 is lean and fast

    Based on a blog post originally posted here by Nicholas Nethercote, Firefox Developer.

    Firefox 7 now uses much less memory than previous versions: often 20% to 30% less, and sometimes as much as 50% less. This means that Firefox and the websites you use will be snappier, more responsive, and suffer fewer pauses. It also means that Firefox is less likely to crash or abort due to running out of memory.

    These benefits are most noticeable if you do any of the following:
    – keep Firefox open for a long time;
    – have many tabs open at once, particularly tabs with many images;
    – view web pages with large amounts of text;
    – use Firefox on Windows;
    – use Firefox at the same time as other programs that use lots of memory.


    Mozilla engineers started an effort called MemShrink, the aim of which is to improve Firefox’s speed and stability by reducing its memory usage. A great deal of progress has been made, and thanks to Firefox’s faster development cycle, each improvement made will make its way into a final release in only 12–18 weeks. The newest update to Firefox is the first general release to benefit from MemShrink’s successes, and the benefits are significant.

    Quantifying the improvements
    Measuring memory usage is difficult: there are no standard benchmarks, there are several different metrics you can use, and memory usage varies enormously depending on what the browser is doing. Someone who usually has only a handful of tabs open will have an entirely different experience from someone who usually has hundreds of tabs open. (This latter case is not uncommon, by the way, even though the idea of anyone having that many tabs open triggers astonishment and disbelief in many people. E.g. see the comment threads here and here.)

    Endurance tests
    Dave Hunt and others have been using the MozMill add-on to perform “endurance tests”, where they open and close large numbers of websites and track memory usage in great detail. Dave recently performed an endurance test comparison of development versions of Firefox, repeatedly opening and closing pages from 100 widely used websites in 30 tabs.

    [The following numbers were run while the most current version of Firefox was in Beta and capture the average and peak “resident” memory usage for each browser version over five runs of the tests. “Resident” memory usage is the amount of physical RAM that is being used by Firefox, and is thus arguably the best measure of real machine resources being used.]


    The measurements varied significantly between runs. If we do a pair-wise comparison of runs, we see the following relative reductions in memory usage:

    Minimum resident: 1.1% to 23.5% (median 6.6%)
    Maximum resident: -3.5% to 17.9% (median 9.6%)
    Average resident: 4.4% to 27.3% (median 20.0%)

    The following two graphs show how memory usage varied over time during Run 1 for each version. Firefox 6’s graph is first, with the latest version second. (Note: Compare only the purple “resident” lines; the meaning of the green “explicit” line changed between the versions and so the two green lines cannot be sensibly compared.)
    Firefox 7 is clearly much better; its graph is both lower and has less variation.



    Gregor Wagner has a memory stress test called MemBench. It opens 150 websites in succession, one per tab, with a 1.5 second gap between each site. The sites are mostly drawn from Alexa’s Top sites list. I ran this test on 64-bit builds of Firefox 6 and 7 on my Ubuntu Linux machine, which has 16GB of RAM. Each time, I let the stress test complete and then opened about:memory to get measurements for the peak resident usage. Then I hit the “Minimize memory usage” button in about:memory several times until the numbers stabilized again, and then re-measured the resident usage. (Hitting this button is not something normal users do, but it’s useful for testing purposes because it causes Firefox to immediately free up memory that would otherwise be freed only when garbage collection runs.)

    For Firefox 6, the peak resident usage was 2,028 MB and the final resident usage was 669 MB. For Firefox 7, the peak usage was 1,851 MB (an 8.7% reduction) and the final usage was 321 MB (a 52.0% reduction). This latter number clearly shows that fragmentation is a much smaller problem in Firefox 7.
    (On a related note, Gregor recently measured cutting-edge development versions of Firefox and Google Chrome on MemBench.)


    Obviously, these tests are synthetic and do not match exactly how users actually use Firefox. (Improved benchmarking is one thing we’re working on as part of MemShrink, but we’ve got a long way to go. ) Nonetheless, the basic operations (opening and closing web pages in tabs) are the same, and we expect the improvements in real usage will mirror improvements in the tests.

    This means that users should see Firefox 7 using less memory than earlier versions — often 20% to 30% less, and sometimes as much as 50% less — though the improvements will depend on the exact workload. Indeed, we have had lots of feedback from early users that the latest Firefox update feels faster, is more responsive, has fewer pauses, and is generally more pleasant to use than previous versions.

    Mozilla’s MemShrink efforts are continuing. The endurance test results above show that the Beta version of Firefox already has even better memory usage, and I expect we’ll continue to make further improvements as time goes on.

  3. Detecting and generating CSS animations in JavaScript

    When writing the hypnotic spiral demo, I ran into the issue that I wanted to use CSS animation when possible but needed a fallback to rotate an element. As I didn’t want to rely on CSS animation, I also considered it pointless to write it by hand; instead, I create it with JavaScript when the browser supports it. Here’s how that is done.

    Testing for the support of animations means testing if the animation property is supported on the element’s style:

    var animation = false,
        animationstring = 'animation',
        keyframeprefix = '',
        domPrefixes = 'Webkit Moz O ms Khtml'.split(' '),
        pfx  = '';

    // elem is the element we want to animate
    if( !== undefined ) { animation = true; }

    if( animation === false ) {
      for( var i = 0; i < domPrefixes.length; i++ ) {
        if([ domPrefixes[i] + 'AnimationName' ] !== undefined ) {
          pfx = domPrefixes[ i ];
          animationstring = pfx + 'Animation';
          keyframeprefix = '-' + pfx.toLowerCase() + '-';
          animation = true;
          break;
        }
      }
    }

    [Update – the earlier code did not check if the browser supports animation without a prefix – this one does]

    This checks if the browser supports animation without any prefixes. If it does, the animation string will be ‘animation’ and there is no need for any keyframe prefixes. If it doesn’t, then we go through all the browser prefixes (to date :)) and check if there is a property on the style collection called browser prefix + AnimationName. If there is, the loop exits and we define the right animation string and keyframe prefix and set animation to true. On Firefox this will result in MozAnimation and -moz-, on Chrome in WebkitAnimation and -webkit-, and so on. This we can then use to create a new CSS animation in JavaScript. If none of the prefix checks return a supported style property, we animate in an alternative fashion.

    if( animation === false ) {
      // animate in JavaScript fallback
    } else {[ animationstring ] = 'rotate 1s linear infinite';

      var keyframes = '@' + keyframeprefix + 'keyframes rotate { '+
                        'from {' + keyframeprefix + 'transform:rotate( 0deg ) }'+
                        'to {' + keyframeprefix + 'transform:rotate( 360deg ) } '+
                      '}';

      if( document.styleSheets && document.styleSheets.length ) {
        document.styleSheets[0].insertRule( keyframes, 0 );
      } else {
        var s = document.createElement( 'style' );
        s.innerHTML = keyframes;
        document.getElementsByTagName( 'head' )[ 0 ].appendChild( s );
      }
    }

    With the animation string defined we can set a (shortcut notation) animation on our element. Now, adding the keyframes is trickier. As they are not part of the original Animation but disconnected from it in the CSS syntax (to give them more flexibility and allow re-use) we can’t set them in JavaScript. Instead we need to write them out as a CSS string.

    If there is already a style sheet applied to the document we add this keyframe definition string to that one, if there isn’t a style sheet available we create a new style block with our keyframe and add it to the document.

    You can see the detection in action and a fallback JavaScript solution on JSFiddle:

    JSFiddle demo.

    Quite simple, but also a bit more complex than I originally thought. You can also dynamically detect and change current animations, as this post by Wayne Pan and this demo by Joe Lambert explain, but this also seems quite verbose.

    I’d love to have a CSSAnimations collection, for example, where you could store different animations in JSON or as a string and have their name as the key. Right now, creating a new rule dynamically and adding it to the document or appending it to the ruleset seems to be the only cross-browser way. Thoughts?
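    As a rough sketch of what I mean (a hypothetical helper, nothing that exists today): animations stored as strings keyed by name, with a method that writes the named rule into a new style block on demand.

```javascript
// Hypothetical CSSAnimations collection: keyframe bodies keyed by name.
var CSSAnimations = {
  rules: {},
  // Store a keyframe body under a name.
  add: function( name, body ) {
    this.rules[ name ] = '@keyframes ' + name + ' { ' + body + ' }';
  },
  // Write the named rule into the document inside a new style block.
  install: function( name ) {
    var s = document.createElement( 'style' );
    s.innerHTML = this.rules[ name ];
    document.getElementsByTagName( 'head' )[ 0 ].appendChild( s );
  }
};

CSSAnimations.add( 'rotate',
  'from { transform: rotate( 0deg ) } to { transform: rotate( 360deg ) }' );
```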

  4. Aurora 7 is here

    Aurora Logo

    Download Aurora

    Keeping up the pace with our new development cycle, today we release Aurora 7. Enjoy its new features and performance improvements: CSS “text-overflow: ellipsis”, Navigation Timing API, reduced memory usage, a faster javascript parser, and the first steps of Azure, our new graphics API.

    text-overflow: ellipsis;

    It is now possible to get Firefox to display “…” to give a visual clue that a text is longer than the element containing it.

    At last, with text-overflow implemented in Aurora 7 it’s now possible to create a cross-browser ellipsis!

    Navigation Timing

    Performance is a key parameter of the user experience on the Web. To help Web developers efficiently monitor the performance of their Web pages, Aurora 7 implements the Navigation Timing specification: using the window.performance.timing object, developers will be able to know the time when different navigation steps (such as navigationStart, connectStart/End, responseStart/End, domLoading/Complete) happened and deduce how long one step or a sequence of steps took to complete.
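    For example (with made-up timestamp values; the real attributes are millisecond timestamps reported by the browser), subtracting two timing attributes gives the duration of a step or span of steps:

```javascript
// Hypothetical snapshot of the kind of values window.performance.timing exposes.
var timing = {
  navigationStart: 1000,
  connectStart:    1010,
  responseEnd:     1200,
  domLoading:      1210,
  domComplete:     1500
};

// Time spent on the network, from opening the connection to the last byte:
var networkTime = timing.responseEnd - timing.connectStart; // 190 ms
// Time spent building the DOM:
var domTime = timing.domComplete - timing.domLoading;       // 290 ms
console.log(networkTime + ' ms network, ' + domTime + ' ms DOM');
```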

    Reduced Memory Usage

    Our continuous efforts to monitor and reduce memory consumption in Firefox will substantially pay off with Aurora 7:

    • The memory “zone” where javascript objects reside gets fragmented as objects are created and deleted. To reduce the negative impact of this fragmentation, long-lived objects created by the browser’s own UI have been separated from the objects created by Web pages. The browser can now free memory more efficiently when a tab is closed or after a garbage collection.
    • Speaking of garbage collection, as we successfully reduced the cost of this operation, we are able to execute it more often. Not only is memory freed more rapidly, but this also leads to shorter GC pauses (the periods where javascript execution stops to let the garbage collector do its job, which is sometimes noticeable during heavy animations).
    • All those improvements are reflected in the about:memory page, which is now able to tell how much memory a particular Web page, or the browser’s own UI, is using.

    More frequent updates and detailed explanations of the memshrink effort are posted on Nicholas Nethercote’s blog.

    Faster Javascript Parsing

    A javascript parser is the part of the browser that reads the javascript before it gets executed by the javascript engine. With modern Web applications such as Gmail or Facebook sending close to 1 MB of javascript, being able to read all of that code instantly still matters in the quest for a responsive user experience.
    Thanks to Nicholas’s work, our parser is now almost twice as fast as it used to be. This adds up well with our constant work to improve the execution speed of our javascript engine.

    First Steps of Azure

    After the layout engine (Gecko) has computed the visual appearance (position, dimension, colors, …) of all elements in the window, the browser asks the Operating System to actually draw them on the screen. The browser needs an abstraction layer to be able to talk to the different graphics libraries of the different OSes, but this layer has to be as thin and as adaptable as possible to deliver the promises of hardware acceleration.
    Azure is the name of the new and better graphics API/abstraction layer that is going to gradually replace Cairo in hardware accelerated environments. In Aurora 7, it is already able to interact with Windows 7’s Direct2D API to render the content of a <canvas> element (in a 2D context). You can read a detailed explanation of the Azure project and its next steps on Joe Drew’s blog.

    Other Improvements



    • Specifying invalid values when calling setTransform(), bezierCurveTo(), or arcTo() no longer throws an exception; these calls are now correctly silently ignored.
    • Calling strokeRect with a zero width and height now correctly does nothing. (see bug 663190 )
    • Calling drawImage with a zero width or height <canvas> now throws INVALID_STATE_ERR. (see bug 663194 )
    • toDataURL() method now accepts a second argument to control JPEG quality (see bug 564388 )



    • XLink href has been restored and the MathML3 href attribute is now supported. Developers are encouraged to move to the latter syntax.
    • Support for the voffset attribute on <mpadded> elements has been added and behavior of lspace attribute has been fixed.
    • The top-level <math> element accepts any attributes of the <mstyle> element.
    • The medium line thickness of fraction bars in <mfrac> elements has been corrected to match the default thickness.
    • Names for negative spaces are now supported.


    • The File interface’s non-standard methods getAsBinary(), getAsDataURL(), and getAsText() have been removed as well as the non-standard properties fileName and fileSize.
    • The FileReader readAsArrayBuffer() method is now implemented. (see bug 632255 )
    • document.createEntityReference has been removed. It was never properly implemented and is not implemented in most other browsers. (see bug 611983 )
    • document.normalizeDocument has been removed. Use Node.normalize instead. (see bug 641190 )
    • DOMTokenList.item now returns undefined if the index is out of bounds, previously it returned null. (see bug 529328 )
    • Node.getFeature has been removed. (see bug 659053 )



    • WebSockets are now available on Firefox Mobile. (see bug 537787 )

    console API

    • Implemented the console.dir(), console.time(), console.timeEnd(), and console.groupEnd() methods.
    • Messages logged with console.log before the Web Console is opened are now stored and displayed once the Web Console is opened.

    (see the Web Console page in the Wiki)
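    For instance, the timing pair listed above can bracket any block of code (the label and loop here are just an illustration; the exact output format varies by browser):

```javascript
console.time('fill');           // start a named timer
var list = [];
for (var i = 0; i < 100000; i++) {
  list.push(i * 2);
}
console.timeEnd('fill');        // stops the timer and logs its duration
```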


  5. Firefox 5 is here

    Today, three months after the release of Firefox 4, we release Firefox 5, thanks to our new development cycle. Developers will be able to create richer animations using CSS3 Animations. This release comes with various improvements, performance optimization and bug fixes.

    CSS3 Animations

    CSS Animations (check out the documentation) are a new way to create animations using CSS. Like CSS Transitions, they are efficient and run smoothly (see David Baron’s article), and developers have better control over the intermediate steps (keyframes), and can now create much more complex animations.

    Notable changes

    Other Bug Fixes and Performance Improvements:


    Canvas improvements

    • The <canvas> 2D drawing context now supports specifying an ImageData object as the input to the createImageData() method; this creates a new ImageData object initialized with the same dimensions as the specified object, but still with all pixels preset to transparent black.
    • Specifying non-finite values when adding color stops through a call to the CanvasGradient method addColorStop() now correctly throws INDEX_SIZE_ERR instead of SYNTAX_ERR.
    • The HTMLCanvasElement method toDataURL() now correctly lower-cases the specified MIME type before matching.
    • getImageData() now correctly accepts rectangles that extend beyond the bounds of the canvas; pixels outside the canvas are returned as transparent black.
    • drawImage() and createImageData() now handle negative arguments in accordance with the specification, by flipping the rectangle around the appropriate axis.
    • Specifying non-finite values when calling createImageData() now properly throws a NOT_SUPPORTED_ERR exception.
    • createImageData() and getImageData() now correctly return at least one pixel’s worth of image data if a rectangle smaller than one pixel is specified.
    • Specifying a negative radius when calling createRadialGradient() now correctly throws INDEX_SIZE_ERR.
    • Specifying a null or undefined image when calling createPattern() or drawImage() now correctly throws a TYPE_MISMATCH_ERR exception.
    • Specifying invalid values for globalAlpha no longer throws a SYNTAX_ERR exception; these are now correctly silently ignored.
    • Specifying invalid values when calling translate(), transform(), rect(), clearRect(), fillRect(), strokeRect(), lineTo(), moveTo(), quadraticCurveTo(), or arc() no longer throws an exception; these calls are now correctly silently ignored.
    • Setting the value of shadowOffsetX, shadowOffsetY, or shadowBlur to an invalid value is now silently ignored.
    • Setting the value of rotate or scale to an invalid value is now silently ignored.


    • Support for CSS animations has been added, using the -moz- prefix for now.


    • The selection object’s modify() method has been changed so that the “word” selection granularity no longer includes trailing spaces; this makes it more consistent across platforms and matches the behavior of WebKit’s implementation.
    • The window.setTimeout() method now clamps to send no more than one timeout per second in inactive tabs. In addition, it now clamps nested timeouts to the smallest value allowed by the HTML5 specification: 4 ms (instead of the 10 ms it used to clamp to).
    • Similarly, the window.setInterval() method now clamps to no more than one interval per second in inactive tabs.
    • XMLHttpRequest now supports the loadend event for progress listeners. This is sent after any transfer is finished (that is, after the abort, error, or load event). You can use this to handle any tasks that need to be performed regardless of success or failure of a transfer.
    • The Blob and, by extension, the File objects’ slice() method has been removed and replaced with a new, proposed syntax that makes it more consistent with Array.slice() and String.slice() methods in JavaScript. This method is named mozSlice() for now.
    • The value of window.navigator.language is now determined by looking at the value of the Accept-Language HTTP header.


    • Regular expressions are no longer callable as if they were functions; this change has been made in concert with the WebKit team to ensure compatibility (see WebKit bug 28285).
    • The Function.prototype.isGenerator() method is now supported; this lets you determine if a function is a generator.


    • The class SVG attribute can now be animated.
    • The following SVG-related DOM interfaces representing lists of objects are now indexable and can be accessed like arrays; in addition, they have a length property indicating the number of items in the lists: SVGLengthList , SVGNumberList , SVGPathSegList , and SVGPointList.


    • Firefox no longer sends the “Keep-Alive” HTTP header; we weren’t formatting it correctly, and it was redundant since we were also sending the Connection: or Proxy-Connection: header with the value “keep-alive” anyway.
    • The HTTP transaction model has been updated to be more intelligent about reusing connections in the persistent connection pool; instead of treating the pool as a FIFO queue, Necko now attempts to sort the pool with connections with the largest congestion window (CWND) first. This can reduce the round-trip time (RTT) of HTTP transactions by avoiding the need to grow connections’ windows in many cases.
    • Firefox now handles the Content-Disposition HTTP response header more effectively if both the filename and filename* parameters are provided; it looks through all provided names, using the filename* parameter if one is available, even if a filename parameter is included first. Previously, the first matching parameter would be used, thereby preventing a more appropriate name from being used. See bug 588781 .


    Developer tools

    • The Web Console’s Console object now has a debug() method, which is an alias for its log() method; this improves compatibility with certain existing sites.

  6. Doom on the Web

    Update: We had a doubt whether this port of the open source Doom respected its terms of use. We decided to remove it from our website before taking an informed and definitive decision.

    This is a guest post written by Alon Zakai. Alon is one of the Firefox Mobile developers, and in his spare time does experiments with JavaScript and new web technologies. One of those experiments is Emscripten, an LLVM-to-JavaScript compiler, and below Alon explains how it uses typed arrays to run the classic first-person shooter Doom on the web.

    As a longtime fan of first-person shooters, I’ve wanted to bring them to the web. Writing one from scratch is very hard, though, so instead I took the original Doom, which is open source, and compiled it from C to JavaScript using Emscripten. The result is a version of Doom that can be played on the web, using standard web technologies.

    Doom renders by writing out pixel data to memory, then copying that pixel data to the screen, after converting colors and so forth. For this demo, the compiled code has memory that is simulated using a large JavaScript array (so element N in that array represents the contents of memory address N in normal native code). That means that rendering, color conversion, and copying to the screen are all operations done on that large JavaScript array. Basically the code has large loops that copy or modify elements of that array. For that to be as fast as possible, the demo optionally uses JavaScript typed arrays, which look like normal JavaScript arrays but are guaranteed to be flat arrays of a particular data type.

    // Create an array that contains only 32-bit integers
    var buffer = new Int32Array(1000);
    for (var i = 0; i < 1000; i++) {
        buffer[i] = i;
    }

    When using a typed array, the main difference from a normal JavaScript array is that the elements of the array all have the type that you set. That means that working on that array can be much faster than on a normal array, because it corresponds very closely to a normal low-level C or C++ array. In comparison, a normal JavaScript array can also be sparse, which means that it isn't a single contiguous section of memory. In that case, each access of the array has a cost: that of calculating the proper memory address. Finding the memory address is much faster with a typed array because it is simple and direct. As a consequence, in the Doom demo the frame rate is almost twice as fast with typed arrays as without them.

    Typed arrays are very important in WebGL and in the Audio Data API, as well as in Canvas elements (the pixel data received from getImageData() is, in fact, a typed array). However, typed arrays can also be used independently if you are working on large amounts of array-like data, which is exactly the case with the Doom demo. Just be careful that your code also works if the user's browser does not support typed arrays. This is fairly easy to do because typed arrays look and behave, for the most part, like normal ones — you access their elements using square brackets, and so forth. The main potential pitfalls are:
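    One way to keep code working where typed arrays are missing is to feature-detect the constructor and fall back to a plain array. A minimal sketch (the function name is illustrative):

    ```javascript
    // Fall back to a plain Array when typed arrays are unavailable;
    // element access uses identical bracket syntax either way.
    function makeBuffer(size) {
      if (typeof Int32Array !== 'undefined') {
        return new Int32Array(size); // flat, typed storage
      }
      var buffer = new Array(size);  // plain-array fallback
      for (var i = 0; i < size; i++) {
        buffer[i] = 0;               // typed arrays are zero-filled; match that
      }
      return buffer;
    }

    var pixels = makeBuffer(4);
    pixels[0] = 255; // same code path whether typed or not
    ```

    Because both branches return something indexed with square brackets, the rest of the rendering loop never needs to know which kind of array it got.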

    • Typed arrays do not have a slice() method. Instead they have subarray(), which does not create a copy of the array; it is a view into the same data.
    • Don't forget that the type of the typed array is silently enforced. If you write 5.25 to an element of an integer-typed array and then read back that exact same element, you get 5 and not 5.25.
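    Both pitfalls are quick to demonstrate. A short sketch:

    ```javascript
    var ints = new Int32Array(4);

    // Pitfall 1: the element type is silently enforced
    ints[0] = 5.25; // reads back as 5; the fraction is dropped

    // Pitfall 2: subarray() returns a view, not a copy
    var view = ints.subarray(1, 3); // elements 1 and 2
    view[0] = 42;                   // ints[1] is now 42 as well,
                                    // because both share the same storage
    ```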
  7. Firefox 4 Performance

    Dave Mandelin from the JS team and Joe Drew from the Graphics team summarize the key performance improvements in Firefox 4.

    The web wants fast browsers. Cutting-edge HTML5 web pages play games, mash up and share maps, sound, and videos, show spreadsheets and presentations, and edit photos. Only a high-performance browser can do that. What the web wants, it’s our job to make, and we’ve been working hard to make Firefox 4 fast.

    Firefox 4 comes with performance improvements in almost every area. The most dramatic improvements are in JavaScript and graphics, which are critical for modern HTML5 apps and games. In the rest of this article, we’ll profile the key performance technologies and show how they make the web that much “more awesomer”.

    Fast JavaScript: Uncaging the JägerMonkey
    JavaScript is the programming language of the web, powering most of the dynamic content and behavior, so fast JavaScript is critical for rich apps and games. Firefox 4 gets fast JavaScript from a beast we call JägerMonkey. In techno-gobbledygook, JägerMonkey is a multi-architecture per-method JavaScript JIT compiler with 64-bit NaN-boxing, inline caching, and register allocation. Let’s break that down:

      Multi-architecture
      JägerMonkey has full support for x86, x64, and ARM processors, so we’re fast on both traditional computers and mobile devices. W00t!
      (Crunchy technical stuff ahead: if you don’t care how it works, skip the rest of the sections.)

      Per-method JavaScript JIT compilation

      The basic idea of JägerMonkey is to translate (compile) JavaScript to machine code, “just in time” (JIT). JIT-compiling JavaScript isn’t new: previous versions of Firefox feature the TraceMonkey JIT, which can generate very fast machine code. But some programs can’t be “jitted” by TraceMonkey. JägerMonkey has a simpler design that is able to compile everything in exchange for not doing quite as much optimization. But it’s still fast. And TraceMonkey is still there, to provide a turbo boost when it can.

      64-bit NaN-boxing
      That’s the technical name for the new 64-bit formats the JavaScript engine uses to represent program values. These formats are designed to help the JIT compilers and tuned for modern hardware. For example, think about floating-point numbers, which are 64 bits. With the old 32-bit value formats, floating-point calculations required the engine to allocate, read, write, and deallocate extra memory, all of which is slow, especially now that processors are much faster than memory. With the new 64-bit formats, no extra memory is required, and calculations are much faster. If you want to know more, see the technical article Mozilla’s new JavaScript value representation.
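    You can peek at those 64 bits yourself by aliasing one buffer with two typed-array views; this is a sketch of the bit layout NaN-boxing builds on, not the engine's actual internals:

    ```javascript
    // A JS number is a 64-bit IEEE-754 double. Aliasing one buffer as
    // both doubles and 32-bit ints exposes the raw bit pattern that a
    // NaN-boxing scheme packs type tags and payloads into.
    var bits  = new ArrayBuffer(8);
    var asDbl = new Float64Array(bits);
    var asU32 = new Uint32Array(bits);

    asDbl[0] = 1.0;
    // 1.0 is 0x3FF0000000000000 in IEEE-754; on little-endian
    // hardware asU32 now holds [0x00000000, 0x3FF00000]
    ```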
      Inline caching
      Property accesses, like o.p, are common in JavaScript. Without special help from the engine, they are complicated, and thus slow: first the engine has to search the object and its prototypes for the property, next find out where the value is stored, and only then read the value. The idea behind inline caching is: “What if we could skip all that other junk and just read the value?” Here’s how it works: The engine assigns every object a shape that describes its prototype and properties. At first, the JIT generates machine code for o.p that gets the property by laborious searching. But once that code runs, the JIT finds out what o’s shape is and how to get the property. The JIT then generates specialized machine code that simply verifies that the shape is the same and gets the property. For the rest of the program, that o.p runs about as fast as possible. See the technical article PICing on JavaScript for fun and profit for much more about inline caching.
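      In practice, this is why keeping object shapes consistent pays off: an access site that only ever sees one shape stays on the inline cache's fast path. A rough sketch (the shape machinery itself lives inside the engine; this just shows the pattern it rewards):

      ```javascript
      // Objects built the same way share a hidden "shape", so the
      // inline cache for points[i].x keeps hitting. Adding properties
      // in varying orders would create different shapes and defeat it.
      function Point(x, y) {
        this.x = x; // same property order every time: same shape
        this.y = y;
      }

      function sumX(points) {
        var total = 0;
        for (var i = 0; i < points.length; i++) {
          total += points[i].x; // monomorphic: one shape seen here
        }
        return total;
      }

      var pts = [new Point(1, 2), new Point(3, 4), new Point(5, 6)];
      // sumX(pts) adds 1 + 3 + 5
      ```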

      Register allocation
      Code generated by basic JITs spends a lot of time reading and writing memory: for code like x+y, the machine code first reads x, then reads y, adds them, and then writes the result to temporary storage. With 64-bit values, that’s up to 6 memory accesses. A more advanced JIT, such as JägerMonkey, generates code that tries to hold most values in registers. JägerMonkey also does some related optimizations, like trying to avoid storing values at all when they are constant or just a copy of some other value.

    Here’s what JägerMonkey does to our benchmark scores:

    That’s more than 3x improvement on SunSpider and Kraken and more than 6x on V8!

    Fast Graphics: GPU-powered browsing
    For Firefox 4, we sped up how Firefox draws and composites web pages using the Graphics Processing Unit (GPU) in most modern computers.

    On Windows Vista and Windows 7, all web pages are hardware accelerated using Direct2D. This provides a great speedup for many complex web sites and demo pages.

    On Windows and Mac, Firefox uses 3D frameworks (Direct3D or OpenGL) to accelerate the composition of web page elements. This same technique is also used to accelerate the display of HTML5 video.

    Final take
    Fast, hardware-accelerated graphics combined with fast JavaScript means cutting-edge HTML5 games, demos, and apps run great in Firefox 4. You can see it on some of the sites we enjoyed making fast. There’s plenty more to try in the Mozilla Labs Gaming entries and, of course, be sure to check out the Web O’ Wonder.

  8. Upgrade your graphics drivers for best results with Firefox 4

    Benoit Jacob from the platform engineering team has a blog post on how to best take advantage of hardware acceleration and WebGL in Firefox 4, namely: Upgrade your graphics drivers!

    Firefox 4 automatically disables the hardware acceleration and WebGL features if the graphics driver on your system has bugs that cause Firefox to crash. You still get all the other benefits of Firefox 4, of course, just not the newest graphics features. But for best results, you need an up-to-date graphics driver that fixes those bugs.

    If you’re planning to develop using WebGL, you need to also spread this message to your users, so they will be able to experience the awesome results of your hard work.

  9. People of HTML5 – Remy Sharp

    HTML5 needs spokespeople to work. There are a lot of people out there who have taken on this role, and here at Mozilla we thought it was a good idea to introduce some of them to you in a series of interviews and short videos. The format is simple: we send the experts 10 questions to answer and then do a quick video interview to let them introduce themselves and ask for more detail on some of their answers.

    Read the Italian translation

    Today we are featuring Remy Sharp, co-author of Introducing HTML5 and organiser of the Full Frontal conference in Brighton, England.

    Remy is one of those ubiquitous people of HTML5. Whenever something needs fixing, there is probably something on GitHub that Remy wrote to help you. He is also very English and doesn’t mince his words much.

    You can find Remy on Twitter as @rem.

    The video interview

    Watch the video on YouTube or download it as MP4 (98 MB), OGG (70 MB) or WebM (68 MB)

    Ten questions about HTML5 for Remy Sharp

    1) Reading “Introducing HTML5” it seems to me that you were more of the API-focused person and Bruce the markup guy. Is that a fair assumption? What is your background and passion?

    That’s spot on. Bruce asked me to join the project as the “JavaScript guy” – which is the slogan I wear under my clothes and frequently reveal in a superman ‘spinning around’ fashion (often to the surprise of clients).

    My background has always been coding – even from a young age, my dad had me copying out listings from old Spectrum magazines, only to result in hours of typing and some random error that I could never debug.

    As I got older I graduated to coding in C, but those were the days when SDKs were 10Mb downloads over a 14kb modem, and compiled in some really odd environment. Suffice to say I didn’t get very far.

    Then along came JavaScript. A programming language that didn’t require any special development environment. I could write the code in Notepad on my dodgy Windows 95 box, and every machine came with the runtime: the browser. Score!

    From that point on the idea of instant gratification from the browser meant that I was converted – JavaScript was the way for me.

    Since then I’ve worked on backend environments too (yep, I’m a Perl guy, sorry!), but always worked and played in the front end in some way or another. However, since starting on my own in 2006, I’ve been able to focus almost entirely on the front end, and specialise in JavaScript. Basically, work-wise: I’m a pig in shit [Ed: for our non-native English readers, he means “happy”].

    2) From a programmer’s point of view, what are the most exciting bits about the HTML5 standard? What would you say is something every aspiring developer should get their head around first?

    For me, the most exciting aspect of HTML5 is the depth of the JavaScript APIs. It’s pretty tricky to explain to Joe Bloggs that actually this newly spec’ed version of HTML isn’t mostly HTML; it’s mostly JavaScript.

    I couldn’t put my finger on one single part of the spec, only because it’s like saying which is your favourite part of CSS (the :target selector – okay, so I can, but that’s not the point!). What’s most exciting to me is that HTML5 is saying that the browser is the platform that we can deliver real applications – take this technology seriously.

    If an aspiring developer wanted something quick and impressive, I’d say play around with the video API – by no means is this the best API, just an easy one.

    If they really wanted to blow people away with something amazing using HTML5, I’d say learn JavaScript (I’m assuming they’re already happy with HTML and CSS). Get a book like JavaScript: The Good Parts and then get JavaScript Patterns and master the language. Maybe, just maybe, then go buy Introducing HTML5; it’s written by two /really/ good looking (naked) guys [Ed: maybe NSFW, definitely disturbing].

    3) In your book you wrote a nice step-by-step video player for HTML5 video. What do you think works well with the Video APIs and what are still problems that need solving?

    The media API is dirt simple, so it means working with video and audio is a doddle. For me, most of it works really well (so long as you understand the loading process and the events).

    Otherwise what’s really quite neat, is the fact I can capture the video frames and mess with them in a canvas element – there’s lots of fun that can be had there (see some of Paul Rouget’s demos for that!).
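    A sketch of that capture-and-mess-with-it loop; the element ids and the greyscale effect here are illustrative, not from the article:

    ```javascript
    // Average each pixel's RGB channels; works on any array-like RGBA
    // buffer, including the typed array returned by getImageData().
    function toGreyscale(data) {
      for (var i = 0; i < data.length; i += 4) {
        var grey = (data[i] + data[i + 1] + data[i + 2]) / 3;
        data[i] = data[i + 1] = data[i + 2] = grey; // alpha untouched
      }
      return data;
    }

    // Browser wiring; the element ids are assumptions:
    if (typeof document !== 'undefined') {
      var video  = document.getElementById('video');
      var canvas = document.getElementById('canvas');
      var ctx    = canvas.getContext('2d');
      video.addEventListener('play', function () {
        setInterval(function () {
          ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
          var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
          toGreyscale(frame.data);
          ctx.putImageData(frame, 0, 0);
        }, 40); // roughly 25fps
      }, false);
    }
    ```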

    What sucks, and sucks hard, is the spec asks vendors, ie. browser makers, *not* to implement full screen mode. It uses security concerns as the reason (which I can understand), but Flash solved this long ago – so why not follow their lead on this particular problem? If native video won’t go full screen, it will never be a competitive alternative to Flash for video.

    That all said, I do like that the folks behind WebKit went and ignored the spec, and implemented full screen. The specs are just guidelines, and personally, I think browsers should be adding this feature.

    4) Let’s talk a bit about non-HTML5 standards, like Geolocation. I understand you did some work with that and found that some parts of the spec work well whilst others less so. Can you give us some insight?

    On top of the HTML5 specification there’s a bunch more specs that make the browser really, really exciting. If we focus on the browsers being released today (IE9 included) there’s a massive amount that can be done that we couldn’t do 10 years ago.

    There’s the “non-HTML5” specs that actually were part of HTML5, but split out for good reason (so they can be better managed), like web storage, the 2D canvas API and Web Sockets, but there’s also the /really/ “nothing-to-do-with-HTML5” APIs (NTDWH5API!) like querySelector, XHR2 and the Device APIs. I’m super keen to try all of these out even if they’re not fully there in all the browsers.

    Geolocation is a great example of cherry picking technology, playing against the idea that the technology isn’t fully implemented – something I find myself ranting on and on about when it comes to the question of whether a developer should use HTML5. Only 50% of Geolocation is implemented in the browsers supporting it, in that they don’t have altitude, heading or speed – all of which are part of the spec. Does that stop mainstream apps like Google Maps from using the API? (clue: no).
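    Cherry picking in code means relying only on the fields every implementation provides and treating the rest as optional. A hedged sketch:

    ```javascript
    // Rely only on latitude/longitude; treat altitude (and heading,
    // speed) as optional extras that may be null or absent.
    function describePosition(pos) {
      var c = pos.coords;
      var text = c.latitude + ', ' + c.longitude;
      if (typeof c.altitude === 'number') {
        text += ' at ' + c.altitude + 'm';
      }
      return text;
    }

    if (typeof navigator !== 'undefined' && navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(function (pos) {
        console.log(describePosition(pos));
      });
    }
    ```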

    The guys writing the specs have done a pretty amazing job, and in particular there are a few cases where the specs have been retrospectively written. XHR is one of these, and now we’ve got a stable API being added in new browsers (i.e. IE6 sucks, yes, we all know that). Which leads us to drag and drop. The black sheep of the APIs. In theory a really powerful API that could make our applications rip, but the technical implementation is a shambles. PPK (Peter-Paul Koch) tore the spec a bit of a ‘new one’. It’s usable, but it’s confusing, and lacking.

    Generally, I’ve found the “non-HTML5″ specs to be a bit of mixed bag. Some are well supported in new browsers, some not at all. SVG is an oldie and now really viable with the help of JavaScript libraries such as Raphaël.js or SVGWeb (a Flash based solution). All in all, there’s lots of options available in JavaScript API nowadays compared to back in the dark ages.

    5) Let’s talk Canvas vs. SVG for a bit. Isn’t Canvas just having another pixel-based rectangle in the page much like Java Applets used to be? SVG, on the other hand is Vector based and thus would be a more natural tool to do something with open standards that we do in Flash now. When would you pick SVG instead of Canvas and vice versa?

    Canvas, in a lot of ways is just like the Flash drawing APIs. It’s not accessible and a total black box. The thing is, in the West, there’s a lot of businesses, rightly or wrongly, that want their fancy animation thingy to work on iOS. Since Flash doesn’t work there, canvas is a superb solution.

    However, you must, MUST, decide which technology to use. Don’t just use canvas because you saw a Mario Kart demo using it. Look at the pros and cons of each. SVG and the canvas API are not competitive technologies, they’re specially shaped hammers for specific jobs.

    Brad Neuberg did a superb job of summarising the pros and cons of each, and I’m constantly referring people to it (here’s the video).

    So it really boils down to:

    • Canvas: pixel manipulation, non-interactive and high animation
    • SVG: interactive, vector based

    Choose wisely young padawan!

    6) What about performance? Aren’t large Canvas solutions very slow, especially on mobile devices? Isn’t that a problem for gaming? What can be done to work around that?

    Well…yes and no. I’m finishing a project that has a large canvas animation going on, and it’s not slow on mobile devices (not that it was designed for those). The reason it’s not slow is because of the way the canvas animates. It doesn’t need to be constantly updating at 60fps.

    Performance depends on your application. Evaluate the environment, the technologies and make a good decision. I personally don’t think using a canvas for a really high performance game on a mobile is quite ready. I don’t think the devices have the oomph to get the job done – but there’s a hidden problem – the browser in the device isn’t quite up to it. Hardware acceleration is going to help, a lot, but today, right now, I don’t think we’ll see games like Angry Birds written in JavaScript.

    That said… I’ve seriously considered how you could replicate a game like Canabalt using a mix of canvas, DIVs and CSS. I think it might be doable ::throws gauntlet::

    I think our community could actually learn a lot from the Flash community. They’ve been through all of this already. Trying to make old versions of Flash from years back do things that were pushing the boundaries. People like Seb Lee-Delisle (@seb_ly) are doing an amazing job of teaching both the Flash and JavaScript communities.

    7) A feature that used to be HTML5 and is now an own spec is LocalStorage and its derivatives Session Storage or the full-fledged WebSQL and IndexedDB. Another thing is offline storage. There seems to be a huge discussion in developer circles about what to use when and if NoSQL solutions client side are the future or not. What are your thoughts? When can you use what and what are the drawbacks?

    Personally I love seeing server-less applications. Looking at the storage solutions I often find it difficult to see why you wouldn’t use WebStorage every time.

    In a way it acts like (in my limited experience of) NoSQL, in that you lookup a key and get a result.

    Equally, I think SQL in the browser is over the top. Like you’re trying to use the storage method *you* understand and forcing it into the browser. Seems like too much work for too little win.

    Offline Apps, API-wise, ie. the application cache is /really/ sexy. Like sexy with chocolate on top sexy. The idea that our applications can run without the web, or switch when it detects it’s offline is really powerful. The only problem is that the events are screwed. The event to say your app is now offline requires the user to intervene via the browser menu, telling the browser to “work in offline mode”. A total failure of experience. What’s worse is that, as far as I know, there’s no plan to make offline event fire properly :-(

    That all said, cookies are definitely dead for me. I’ve yet to find a real need for cookies since I found the Web Storage API – and there’s a decent number of polyfills for Web Storage – so there’s really no fear of using the API.
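    As a sketch, here is a cookie-style get/set on top of Web Storage, with an in-memory object standing in where localStorage is unavailable (the function names are illustrative):

    ```javascript
    // A key/value store backed by localStorage where it exists,
    // with a plain-object fallback for other environments.
    var memoryStore = {};

    function setItem(key, value) {
      if (typeof localStorage !== 'undefined') {
        localStorage.setItem(key, value);
      } else {
        memoryStore[key] = String(value);
      }
    }

    function getItem(key) {
      if (typeof localStorage !== 'undefined') {
        return localStorage.getItem(key); // null when missing
      }
      return memoryStore.hasOwnProperty(key) ? memoryStore[key] : null;
    }

    setItem('theme', 'dark');
    // getItem('theme') hands back 'dark': no expiry dates, no
    // HTTP header overhead, unlike cookies
    ```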

    8) Changing the track a bit, you’ve built the HTML5shiv to make HTML5 elements styleable in IE. This idea sparked quite a lot of other solutions to make IE6 work with the new technologies (or actually simulate them). Where does this end? Do you think it is worthwhile to write much more code just to have full IE6 support?

    There’s two things here:

    1. Supporting IE6 (clue: don’t)
    2. Polyfills

    IE6, seriously, and for the umpteenth time, look at your users. Seriously. I know the project manager is going to say they don’t know what the figures are, in that case: find out! Then, once you’ve got the usage stats in hand, you know your audience and you know what technology they support.

    If they’re mostly IE6 users, then adding native video with a spinning and dancing canvas effect isn’t going to work – not even with shims and polyfills. IE6 is an old dog that just isn’t up to doing the mileage he used to be able to do back in his prime. But enough on this subject – the old ‘do I, or don’t I develop for IE6’ debate is long in the tooth.

    Polyfills – that’s a different matter. They’re not there to support IE6, they’re there to bring browsers up to your expectations as a developer. However, I’d ask that you carefully consider them before pulling them in. The point of these scripts is they plug missing APIs in those older browsers. “Old browsers” doesn’t particularly mean IE6. For example, the Web Sockets API has a polyfill by way of Flash. If native Web Sockets aren’t there, Flash fills the gap, but the API is exposed in exactly the same manner, meaning that you don’t have to fork your code.
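    The detection side of that non-forking pattern might look like the following sketch; the polyfill itself (a Flash-backed script such as web-socket-js) would be loaded separately, and the URL here is illustrative:

    ```javascript
    // If the native API is missing, a Flash-backed polyfill can define
    // the same WebSocket global, so this calling code never forks.
    function hasNativeWebSockets() {
      return typeof WebSocket !== 'undefined';
    }

    if (typeof window !== 'undefined' && hasNativeWebSockets()) {
      var ws = new WebSocket('ws://example.com/socket');
      ws.onmessage = function (event) {
        console.log(event.data);
      };
    }
    ```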

    I don’t think people should be pulling in scripts just for the hell of it. You should consider what you’re trying to achieve and decide whether X technology is the right fit. If it is, and you know (or expect) your users have browsers that don’t support X technology – should you plug it with JavaScript or perhaps should you consider a different technology?

    This exact same argument rings true for when someone adds jQuery just to add or remove a class from an element. It’s simply not worth it – but clearly that particular developer didn’t really understand what they needed to do. So is education the solution? I should hope so.
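    For reference, the heart of the shiv trick from the question is tiny: creating each unknown element once is enough to make old IE's parser recognise it, so CSS can then style it. A minimal sketch (the element list is a sample, not the full shiv):

    ```javascript
    // Creating an element by name once makes old IE's parser treat
    // later occurrences of that tag as styleable elements.
    var html5Elements = ['article', 'aside', 'figure', 'footer',
                        'header', 'nav', 'section'];

    function shiv(doc, names) {
      for (var i = 0; i < names.length; i++) {
        doc.createElement(names[i]);
      }
      return names.length; // number of elements registered
    }

    if (typeof document !== 'undefined') {
      shiv(document, html5Elements);
    }
    ```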

    9) Where would you send people if they want to learn about HTML5? What are tutorials that taught you a lot? Where should interested people hang out?

    HTML5 Doctor – fo sho’. :)

    I tend to also direct people to my own site, simply to encourage viewing source, and hacking away.

    Regarding what tutorials taught me – if I’m totally honest, the place I’ve learnt the most from is actually There’s some pretty good JavaScript / API tutorials coming from the chaps at Otherwise, I actually spend a lot of time just snooping through the specifications, looking for bits that I’ve not seen before and generally poking them with a stick.

    10) You have announced that you are concentrating on building a framework to make Websockets easy to work with. How is that getting on and what do you see Websockets being used for in the future? In other words, why the fascination?

    Concentrating is a strong word ;-) but it is true, I’ve started working on a product that abstracts Web Sockets to a service. Not the API alone, since it’s so damn simple, but the server setup: creating sessions, user control flow, waiting for users and more.

    The service is called Förbind. Swedish for “connect”, ie. to connect your users. It’s still early days, but I hope to release alpha access this month.

    I used to work in finance web sites and real-time was the golden egg: to get that data as soon as it was published. So now that it’s available in a native form in the browser, I’m all over it!

    What’s more, I love the idea of anonymous users. I created a bunch of demos where the user can contribute to something without ever really revealing themselves, and when the users come, you start to see how creative people are without really trying. Sure, you get a lot of cocks being drawn, but you also see some impressive ideas – my business 404 page for example allows people to leave a drawing, one of the most impressive is a Super Mario in all his glory. Anonymous users really interest me because as grey as things can seem sometimes, a stranger can easily inspire you.

    Do you know anyone I should interview for “People of HTML5”? Tell me on Twitter: @codepo8