JavaScript Articles
  1. Porting “Me & My Shadow” to the Web – C++ to JavaScript/Canvas via Emscripten

    Editor's note: This is a guest post by Alon Zakai of the Mozilla Emscripten team. Thanks, Alon!

    Me & My Shadow is an open source 2D game with clever gameplay in which you control not one character but two. I happened to hear about it recently when they released version 0.3.

    Since I’m looking for games to port to the web, I figured this was a good candidate. It was quite easy to port; here is the result: Me & My Shadow on the Web

    You can also get the source on GitHub.

    The port was done automatically by compiling the original code to JavaScript using Emscripten, an open-source C++ to JavaScript compiler that uses LLVM. Using a compiler like this means the game can simply be compiled rather than rewritten manually in JavaScript, so the process takes almost no time.

    The compiled game works almost exactly like the desktop version on the machines and browsers I’ve tested. Interestingly, performance looks very good. In this case, that’s mainly because most of what the game does is blit images. It uses the cross-platform SDL API, a wrapper library for things like opening a window, getting input, loading images and rendering text (so it is exactly what a game like this needs). Emscripten supports SDL through native canvas calls, so when you compile a game that uses SDL into JavaScript, it will use Emscripten’s SDL implementation. That implementation performs SDL blit operations using drawImage calls and so forth, which browsers generally hardware accelerate these days, so the game runs about as fast as it would natively.

    For example, if the C++ code has

    SDL_BlitSurface(sprite, NULL, screen, position)

    then that means to blit the entire bitmap represented by sprite into the screen, at a specific position. Emscripten’s SDL implementation does some translation of arguments, and then calls

    ctx.drawImage(src.canvas, sr.x, sr.y, sr.w, sr.h, dr.x, dr.y, sr.w, sr.h);

    which draws the sprite, contained in src.canvas, into the context representing the screen, at the correct position and size. In other words, the C++ code is translated automatically into code that uses native HTML canvas operations in an efficient manner.
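
    To make this concrete, here is a minimal sketch of the idea (not Emscripten’s actual implementation; makeSurface and blitSurface are hypothetical names): an SDL-style surface is backed by an off-screen canvas, so a blit becomes a drawImage call.

    /* Hypothetical sketch: an SDL-style surface backed by a canvas. */
    function makeSurface( width, height ) {
      var canvas = document.createElement( 'canvas' );
      canvas.width = width;
      canvas.height = height;
      return { canvas: canvas, ctx: canvas.getContext( '2d' ) };
    }

    /* Blit all of `src` onto `dst` at (x, y), as SDL_BlitSurface with a
       NULL source rect would. */
    function blitSurface( src, dst, x, y ) {
      var w = src.canvas.width, h = src.canvas.height;
      dst.ctx.drawImage( src.canvas, 0, 0, w, h, x, y, w, h );
    }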

    There are some caveats, though. The main problem is browser support for necessary features; the two problems I ran into here were typed arrays and the Blob constructor:

    • Typed arrays are necessary to run compiled C++ code quickly and with maximum compatibility. Emscripten can compile code without them, but the result is slower and needs manual correction for compatibility. Thankfully, all browsers are getting typed arrays: Firefox, Chrome and Opera already have them, Safari was only missing Float64Array until recently, I believe, and IE will get them in IE10.
    • The Blob constructor is necessary because this game uses Emscripten’s new compression option. It takes all the data files (150 or so), packs them into a single file, compresses that with LZMA, and the game in the browser then downloads it, decompresses it and splits it back up. This makes the download much smaller (but does mean there is a short pause to decompress). The issue, though, is that we end up with the data for each file in a typed array. It’s easy to use BlobBuilder for images, but audio files need the MIME type set or they fail to decode, and only the Blob constructor supports that (see the sketch after this list). It looks like only Firefox has the Blob constructor so far; I’ve been told on Twitter there might be a workaround for Chrome that I am hoping to hear more about. Not sure about other browsers. But the game should still work, just without sound effects and music.
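
    As a rough sketch of the audio case (the variable bytes and the MIME type here are illustrative, not the game’s actual code), the Blob constructor lets us attach the type the decoder needs:

    /* bytes is a Uint8Array holding one decompressed audio file */
    var blob = new Blob( [ bytes ], { type: 'audio/ogg' } );
    var url = window.URL.createObjectURL( blob );
    var audio = new Audio( url );
    audio.play();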

    Another caveat is that a certain amount of manual porting is unavoidable:

    JavaScript main loops must be written in an asynchronous way: a callback for each frame. Thankfully, games are usually written so that the main loop can easily be refactored into a function that does a single iteration, and that was the case here. That function is then called once per frame from JavaScript (see the sketch below). However, other cases of synchronous code are more annoying; for example, the fadeouts that happen when a menu item is picked are done synchronously (draw, SDL_Delay, draw, and so on). The same problem turned up when I ported Doom, so I guess it’s a common code pattern. For now I simply disabled those fadeouts; if you do want them in a game you port, you’d need to refactor them to be asynchronous.
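
    As a minimal sketch of that refactoring (the names doOneIteration, frame and running are illustrative, not the game’s actual code):

    /* The synchronous C-style loop
         while (running) { doOneIteration(); SDL_Delay(16); }
       becomes an asynchronous per-frame callback: */
    function frame() {
      doOneIteration(); // process input, update state, render one frame
      if ( running ) {
        window.requestAnimationFrame( frame ); // or setTimeout( frame, 1000 / 60 )
      }
    }
    window.requestAnimationFrame( frame );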

    Aside from that, everything pretty much just worked. (The sole exception was that this code fell prey to an LLVM LTO bug, but Rafael fixed it.) So, in conclusion, I would argue that there is no reason not to run games like these on the web: they are easy to port, and they run nice and fast.

  2. Old tricks for new browsers – a talk at jQuery UK 2012

    Last Friday around 300 developers went to Oxford, England to attend jQuery UK and learn about all that is hot and new about their favourite JavaScript library. Imagine their surprise when I went on stage to tell them that a lot of what jQuery is used for these days doesn’t need it. If you want to learn more about the talk itself, there is a detailed report, slides and the audio recording available.

    The point I was making is that libraries like jQuery were first and foremost there to give us a level playing field as developers. We should not have to know the quirks of every browser, and using a library allows us to concentrate on the task at hand rather than on how it will fail in ten-year-old browsers.

    jQuery’s revolutionary new way of looking at web design was based on two main things: accessing the document via CSS selectors rather than the unwieldy DOM methods and chaining of JavaScript commands. jQuery then continued to make event handling and Ajax interactions easier and implemented the Easing equations to allow for slick and beautiful animations.

    However, this simplicity came at a price: developers seem to forget a few very simple techniques that let you write terse, easy-to-understand JavaScript that doesn’t rely on jQuery. Among the most powerful of these are event delegation and assigning classes to parent elements while leaving the main work to CSS.

    Event delegation

    Event delegation means that instead of applying an event handler to each of the child elements in an element, you assign one handler to the parent element and let the browser do the rest for you. Events bubble up the DOM of a document: they fire on the element you interacted with and then on each of its parent elements. That way, all you have to do is check the target of the event to get the element you want to access. Say you have a to-do list in your document. All the HTML you need is:

    <ul id="todo">
      <li>Go round Mum's</li>
      <li>Get Liz back</li>
      <li>Sort life out!</li>
    </ul>

    In order to add event handlers to these list items, jQuery beginners are tempted to do a $('#todo li').click(function(ev){...}); or – even worse – add a class to each list item and then access those. If you use event delegation, all you need in JavaScript is:

    document.querySelector( '#todo' ).addEventListener( 'click', 
      function( ev ) {
        var t = ev.target;
        if ( t.tagName === 'LI' ) {
          alert( t + t.innerHTML ); 
          ev.preventDefault();
        }
      }, false);

    Newer browsers have the querySelector and querySelectorAll methods (see support here), which give you access to DOM elements via CSS selectors – something we learned from jQuery. We use querySelector here to access the to-do list, and then apply an event listener for click to the whole list.

    We read out which element was clicked from ev.target and compare its tagName to LI (this property is always uppercase). This means we never execute the rest of the code when the user, for example, clicks on the list itself. We call preventDefault() to tell the browser not to do anything; we now take over.

    You can try this out in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of event delegation is that you can now add new items without ever having to re-assign handlers: as the main click handler is on the list, new items automatically pick up the functionality (see the snippet after the demo). Try it out in this fiddle or embedded below:

    JSFiddle demo.
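
    For example, a new item appended like this (a hypothetical snippet, not part of the fiddle) is immediately clickable, with no extra wiring:

    var li = document.createElement( 'li' );
    li.innerHTML = 'Feed the cat';
    document.querySelector( '#todo' ).appendChild( li );
    /* no new addEventListener call needed – the delegated handler covers it */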

    Leaving styling and DOM traversal to CSS

    Another big use case of jQuery is to access a lot of elements at once and change their styling by manipulating their styles collection with the jQuery css() method. This is seemingly handy but also annoying, as it puts styling information in your JavaScript. What if there is a rebranding later on? Where do people find the colours to change? It is much simpler to add a class to the element in question and leave the rest to CSS. If you think about it, a lot of the time we repeat the same CSS selectors in jQuery and in the style sheet. That seems redundant.

    Adding and removing classes in the past was a bit of a nightmare. The way to do it was via the className property of a DOM element, which contained a string. It was then up to you to find out whether a certain class name was in that string, and to remove and add classes by appending to the string or calling replace() on it. Again, browsers learned from jQuery and now have a classList object (support here) that allows easy manipulation of the CSS classes applied to an element. You have add(), remove(), toggle() and contains() to play with, as the snippet below shows.
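
    For instance, a quick illustrative snippet:

    var el = document.querySelector( '#todo' );
    el.classList.add( 'collapsed' );      // add a class
    el.classList.contains( 'collapsed' ); // true
    el.classList.toggle( 'collapsed' );   // removes it again
    el.classList.remove( 'collapsed' );   // safe even if it is already gone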

    This makes it dead easy to style a lot of elements and to single them out for different styling. Let’s say, for example, we have a content area and want to show one of its articles at a time. It is tempting to loop over the elements and do a lot of comparisons, but all we really need is to assign classes and leave the rest to CSS. Say our content is a navigation pointing to articles. This works in all browsers:

    <header>
      <h1>Profit plans</h1>
    </header>
    <section id="content">
      <nav id="nav">
        <ul>
          <li><a href="#step1">Step 1: Collect Underpants</a></li>
          <li><a href="#step2">Step 2: ???</a></li>
          <li><a href="#step3">Step 3: Profit!</a></li>
        </ul>
      </nav>
      <article id="step1">
        <header><h1>Step 1: Collect Underpants</h1></header>
        <section>
          <p>
            Make sure Tweek doesn't expect anything, then steal underwear 
            and bring it to the mine.
          </p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step2">
        <header><h1>Step 2: ???</h1></header>
        <section>
          <p>WIP</p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step3">
        <header><h1>Step 3: Profit</h1></header>
        <section>
          <p>Yes, profit will come. Let's sing the underpants gnome song.</p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
    </section>

    Now in order to hide all the articles, all we do is assign a ‘js’ class to the body of the document and store the first link and first article in the content section in variables. We assign a class called ‘current’ to each of those.

    /* grab all the elements we need */
    var nav = document.querySelector( '#nav' ),
        content = document.querySelector( '#content' ),
     
    /* grab the first article and the first link */
        article = document.querySelector( '#content article' ),
        link = document.querySelector( '#nav a' );
     
    /* hide everything by applying a class called 'js' to the body */
    document.body.classList.add( 'js' );
     
    /* show the current article and link */ 
    article.classList.add( 'current' );
    link.classList.add( 'current' );

    Together with some simple CSS, this hides them all off-screen:

    /* change content to be a content panel */
    .js #content {
      position: relative;
      overflow: hidden;
      min-height: 300px;
    }
     
    /* push all the articles up */
    .js #content article {
      position: absolute;
      top: -700px;
      left: 250px;
    }
    /* hide 'back to top' links */
    .js article footer {
      position: absolute;
      left: -20000px;
    }

    In this case we move the articles up. We also hide the “back to top” links as they are redundant when we hide and show the articles. To show and hide the articles all we need to do is assign a class called “current” to the one we want to show that overrides the original styling. In this case we move the article down again.

    /* keep the current article visible */
    .js #content article.current {
      top: 0;
    }

    In order to achieve that all we need to do is a simple event delegation on the navigation:

    /* event delegation for the navigation */
    nav.addEventListener( 'click', function( ev ) {
      var t = ev.target;
      if ( t.tagName === 'A' ) {
        /* remove old styles */
        link.classList.remove( 'current' );
        article.classList.remove( 'current' );
        /* get the new active link and article */
        link = t;
        article = document.querySelector( link.getAttribute( 'href' ) );
        /* show them by assigning the current class */
        link.classList.add( 'current' );
        article.classList.add( 'current' );
      }
    }, false);

    The simplicity here lies in the fact that the links already point to the elements with those IDs. So all we need to do is read the href attribute of the link that was clicked.

    See the final result in this fiddle or embedded below.

    JSFiddle demo.

    Keeping the visuals in the CSS

    Mixed with CSS transitions or animations (support here), this can be made much smoother in a very simple way:

    .js #content article {
      position: absolute;
      top: -300px;
      left: 250px;
      -moz-transition: 1s;
      -webkit-transition: 1s;
      -ms-transition: 1s;
      -o-transition: 1s;
      transition: 1s;
    }

    The transition now simply goes smoothly in one second from the state without the ‘current’ class to the one with it. In our case, moving the article down. You can add more properties by editing the CSS – no need for more JavaScript. See the result in this fiddle or embedded below:

    JSFiddle demo.

    As we also toggle the current class on the link, we can do more. It is simple to add visual extras like a “you are here” state by using CSS generated content with the :after pseudo-element (support here). That way you can add visual nice-to-haves without needing to generate HTML in JavaScript or resort to images.

    .js #nav a:hover:after, .js #nav a:focus:after, .js #nav a.current:after {
      content: '➭';
      position: absolute;
      right: 5px;
    }

    See the final result in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of this technique is that we keep all the look and feel in CSS and make it much easier to maintain. And by using CSS transitions and animations you also leverage hardware acceleration.

    Give them a go, please?

    All of these things work across the browsers we use these days, and with polyfills they can be made to work in old browsers, too. However, not everything needs to be applied to old browsers. As web developers we should look ahead and not cater to outdated technology. If the things I showed above fall back to server-side solutions or page reloads in IE6, nobody will be any the wiser. Let’s build escalator solutions: smooth when the tech works, but still usable as stairs when it doesn’t.

  3. JavaScript on the server: Growing the Node.js Community

    Cloud9 IDE and Mozilla have been working together ever since their Bespin and ACE projects joined forces. Both organizations are committed to the success of Node.js: Mozilla due to its history with JavaScript, and Cloud9 IDE as a core contributor to Node.js and provider of the leading Node.js IDE. As part of this cooperation, this is a guest post written by Ruben Daniels and Zef Hemel of Cloud9 IDE.

    While we all know and love JavaScript as a language for browser-based scripting, few remember that, early on, it was destined to be used as a server-side language as well. Only about a year after JavaScript’s original release in Netscape Navigator 2.0 (1995), Netscape released Netscape Enterprise Server 2.0:

    Netscape Enterprise Server is the first web server to support the Java(TM) and JavaScript(TM) programming languages, enabling the creation, delivery and management of live online applications.

    This is how the web got started, all the way back in the mid-nineties. Sadly, it was not meant to be then. JavaScript on the server failed, while JavaScript in the browser became a hit. At the time, JavaScript was still very young. The virtual machines that executed JavaScript code were slow and heavyweight, and there were no tools to support and manage large JavaScript code bases. This was fine for JavaScript’s use case in the browser at the time, but not sufficient for server-side applications.

    Continued…

  4. Faster Canvas Pixel Manipulation with Typed Arrays

    Edit: See the section about Endianness.

    Typed Arrays can significantly increase the pixel manipulation performance of your HTML5 2D canvas Web apps. This is of particular importance to developers looking to use HTML5 for making browser-based games.

    This is a guest post by Andrew J. Baker. Andrew is a professional software engineer currently working for Ibuildings UK where his time is divided equally between front- and back-end enterprise Web development. He is a principal member of the browser-based games channel #bbg on Freenode, spoke at the first HTML5 games conference in September 2011, and is a scout for Mozilla’s WebFWD innovation accelerator.


    Eschewing the higher-level methods available for drawing images and primitives to a canvas, we’re going to get down and dirty, manipulating pixels using ImageData.

    Conventional 8-bit Pixel Manipulation

    The following example demonstrates pixel manipulation using image data to generate a greyscale moire pattern on the canvas.

    JSFiddle demo.

    Let’s break it down.

    First, we obtain a reference to the canvas element that has an id attribute of canvas from the DOM.

    var canvas = document.getElementById('canvas');

    The next two lines might appear to be a micro-optimisation, and in truth they are. But given the number of times the canvas width and height are accessed within the main loop, copying the values of canvas.width and canvas.height to the variables canvasWidth and canvasHeight respectively can have a noticeable effect on performance.

    var canvasWidth  = canvas.width;
    var canvasHeight = canvas.height;

    We now need to get a reference to the 2D context of the canvas.

    var ctx = canvas.getContext('2d');

    Armed with a reference to the 2D context of the canvas, we can now obtain a reference to the canvas’ image data. Note that here we get the image data for the entire canvas, though this isn’t always necessary.

    var imageData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);

    Again, this is another seemingly innocuous micro-optimisation: getting a reference to the raw pixel data once, rather than going through imageData.data on every access, can also have a noticeable effect on performance.

    var data = imageData.data;

    Now comes the main body of code. There are two loops, one nested inside the other. The outer loop iterates over the y axis and the inner loop iterates over the x axis.

    for (var y = 0; y < canvasHeight; ++y) {
        for (var x = 0; x < canvasWidth; ++x) {

    We draw pixels to image data in a top-to-bottom, left-to-right sequence. Remember, the y axis is inverted, so the origin (0,0) refers to the top, left-hand corner of the canvas.

    The ImageData.data property referenced by the variable data is a one-dimensional array of integers, where each element is in the range 0..255. ImageData.data is arranged in a repeating sequence so that each element refers to an individual channel. That repeating sequence is as follows:

    data[0]  = red channel of first pixel on first row
    data[1]  = green channel of first pixel on first row
    data[2]  = blue channel of first pixel on first row
    data[3]  = alpha channel of first pixel on first row
     
    data[4]  = red channel of second pixel on first row
    data[5]  = green channel of second pixel on first row
    data[6]  = blue channel of second pixel on first row
    data[7]  = alpha channel of second pixel on first row
     
    data[8]  = red channel of third pixel on first row
    data[9]  = green channel of third pixel on first row
    data[10] = blue channel of third pixel on first row
    data[11] = alpha channel of third pixel on first row
     
     
    ...

    Before we can plot a pixel, we must translate the x and y coordinates into an index representing the offset of the first channel within the one-dimensional array.

            var index = (y * canvasWidth + x) * 4;

    We multiply the y coordinate by the width of the canvas, add the x coordinate, then multiply by four. We must multiply by four because there are four elements per pixel, one for each channel.
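
    For example, on a 320-pixel-wide canvas the pixel at (x, y) = (2, 1) starts at index (1 × 320 + 2) × 4 = 1288, so its red, green, blue and alpha channels live at indices 1288, 1289, 1290 and 1291.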

    Now we calculate the colour of the pixel.

    To generate the moire pattern, we multiply the x coordinate by the y coordinate then bitwise AND the result with hexadecimal 0xff (decimal 255) to ensure that the value is in the range 0..255.

            var value = x * y & 0xff;
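
    For example, the pixel at (20, 15) gets the value 20 × 15 = 300, and 300 & 0xff leaves 44, a dark grey.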

    Greyscale colours have red, green and blue channels with identical values. So we assign the same value to each of the red, green and blue channels. The sequence of the one-dimensional array requires us to assign a value for the red channel at index, the green channel at index + 1, and the blue channel at index + 2.

            data[index]   = value;	// red
            data[++index] = value;	// green
            data[++index] = value;	// blue

    Here we’re incrementing index, as we recalculate it with each iteration, at the start of the inner loop.

    The last channel we need to take into account is the alpha channel at index + 3. To ensure that the plotted pixel is 100% opaque, we set the alpha channel to a value of 255 and terminate both loops.

            data[++index] = 255;	// alpha
        }
    }

    For the altered image data to appear in the canvas, we must put the image data at the origin (0,0).

    ctx.putImageData(imageData, 0, 0);

    Note that because data is a reference to imageData.data, we don’t need to explicitly reassign it.

    The ImageData Object

    At time of writing this article, the HTML5 specification is still in a state of flux.

    Earlier revisions of the HTML5 specification declared the ImageData object like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute CanvasPixelArray data;
    }

    With the introduction of typed arrays, the type of the data attribute has changed from CanvasPixelArray to Uint8ClampedArray, so the interface now looks like this:

    interface ImageData {
        readonly attribute unsigned long width;
        readonly attribute unsigned long height;
        readonly attribute Uint8ClampedArray data;
    }

    At first glance, this doesn’t appear to offer us any great improvement, aside from using a type that is also used elsewhere within the HTML5 specification.

    But, we’re now going to show you how you can leverage the increased flexibility introduced by deprecating CanvasPixelArray in favour of Uint8ClampedArray.

    Previously, we were forced to write colour values to the image data one-dimensional array a single channel at a time.

    Taking advantage of typed arrays and the ArrayBuffer and ArrayBufferView objects, we can write colour values to the image data array an entire pixel at a time!

    Faster 32-bit Pixel Manipulation

    Here’s an example that replicates the functionality of the previous example, but uses unsigned 32-bit writes instead.

    NOTE: If your browser doesn’t use Uint8ClampedArray as the type of the data property of the ImageData object, this example won’t work!

    JSFiddle demo.

    The first deviation from the original example begins with the instantiation of an ArrayBuffer called buf.

    var buf = new ArrayBuffer(imageData.data.length);

    This ArrayBuffer will be used to temporarily hold the contents of the image data.

    Next we create two ArrayBuffer views: one that allows us to view buf as a one-dimensional array of unsigned 8-bit values, and another that allows us to view buf as a one-dimensional array of unsigned 32-bit values.

    var buf8 = new Uint8ClampedArray(buf);
    var data = new Uint32Array(buf);

    Don’t be misled by the term ‘view’. Both buf8 and data can be read from and written to. More information about ArrayBufferView is available on MDN.

    The next alteration is to the body of the inner loop. We no longer need to calculate the index in a local variable so we jump straight into calculating the value used to populate the red, green, and blue channels as we did before.

    Once calculated, we can proceed to plot the pixel using only one assignment. The values of the red, green, and blue channels, along with the alpha channel are packed into a single integer using bitwise left-shifts and bitwise ORs.

            data[y * canvasWidth + x] =
                (255   << 24) |	// alpha
                (value << 16) |	// blue
                (value <<  8) |	// green
                 value;		// red
        }
    }

    Because we’re dealing with unsigned 32-bit values now, there’s no need to multiply the offset by four.
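
    For example, with value equal to 0x40 the expression packs to 0xFF404040. Stored as an unsigned 32-bit value on a little-endian machine, its bytes land in memory as 0x40, 0x40, 0x40, 0xFF – exactly the R, G, B, A order the image data expects.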

    Having terminated both loops, we must now copy the contents of the ArrayBuffer buf into imageData.data. We do this with the Uint8ClampedArray.set() method, passing the 8-bit view buf8 of our ArrayBuffer as the argument.

    imageData.data.set(buf8);

    Finally, we use putImageData() to copy the image data back to the canvas.

    Testing Performance

    We’ve told you that using typed arrays for pixel manipulation is faster. We really should test it though, and that’s what this jsperf test does.

    At time of writing, 32-bit pixel manipulation is indeed faster.

    Wrapping Up

    There won’t always be occasions where you need to resort to manipulating canvas at the pixel level, but when you do, be sure to check out typed arrays for a potential performance increase.

    EDIT: Endianness

    As has quite rightly been highlighted in the comments, the code originally presented does not correctly account for the endianness of the processor on which the JavaScript is being executed.

    The code below, however, rectifies this oversight by testing the endianness of the target processor and then executing a different version of the main loop depending on whether the processor is big- or little-endian.

    JSFiddle demo.
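
    The detection itself can be sketched in a few lines (a minimal sketch; the fiddle’s actual code may differ): write a known 32-bit value through one view, then check which byte comes out first through another.

    var testBuf = new ArrayBuffer( 4 );
    new Uint32Array( testBuf )[ 0 ] = 0x0a0b0c0d;
    /* a little-endian processor stores the least significant byte, 0x0d, first */
    var isLittleEndian = ( new Uint8Array( testBuf )[ 0 ] === 0x0d );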

    A corresponding jsperf test for this amended code has also been written and shows near-identical results to the original jsperf test. Therefore, our final conclusion remains the same.

    Many thanks to all commenters and testers.

  5. Firefox – tons of tools for web developers!

    One of the goals of Firefox has always been to make the lives of web developers as easy and productive as possible, by providing tools and a very extensible web browser that enable people to create amazing things. The idea here is to list many of the tools and options available to you as a web developer using Firefox.

    Continued…

  6. WDC2011: Tomorrow’s Web (and Future Technologies)

    Last Friday I had the pleasure of attending and speaking at the Web Developer Conference in Bristol. This was the fifth conference in the event’s history and was attended by well over 200 Web designers and developers from across the UK.

    In my talk I covered some Web technologies that are on the horizon and coming your way in the near future. Of particular interest were the WebAPIs that we’re working on, with the WebVibrator API easily being the most popular amongst the attendees (no idea why).

    But in all seriousness, it was a great event and I’m glad to see some of the attendees exploring these new technologies as a result of the talk.

    You can check out the slides below:

  7. Tagging docs for sprint at JSConf.eu October 1-2

    We’re very excited to announce that Mozilla is sponsoring the Hacker Lounge at JSConf.eu and we will be holding a doc sprint at and during the conference. The focus of this doc sprint will naturally be docs for JavaScript and DOM. We hope to encourage attendees at the conference to contribute at least a little to improving the JS and DOM docs on MDN. I and a handful of MDN community members will be there to show them how.

    To that end, we want to tag as many JS and DOM articles as possible, with tags indicating what work needs to be done on those articles. That way, anybody dropping in during the sprint (or anytime later) can look for tagged articles and find something to work on.

    Here are some of the tags we use to indicate that an article needs help:

    • NeedsTechnicalReview: needs someone to verify that the technical information is complete and correct.
    • NeedsExample: needs one or more illustrative code examples of the item documented.
    • NeedsContent: the item is incomplete and needs to be filled out.
    • NeedsJSVersion: needs information about the versions of JavaScript and ECMAScript in which this item first appeared.
    • NeedsBrowserCompatibility: needs a browser compatibility table or needs the table filled out.
    • MakeBrowserAgnostic: the article is written with a focus on Gecko, when it is actually about a standard function or feature, which should be rewritten to be generic.

    Please help by tagging articles on MDN that need work with the appropriate tag. You can use the Talk page for each article to elaborate on what needs to be done, if the tag is not descriptive enough. To modify tags on an article, log in to MDN and click Edit Tags at the bottom of the page.

    The wiki page for the doc sprint has links to queries for some of these tags.

    If you will be at JSConf.eu, I look forward to seeing you there! If you will be participating remotely, I’ll see you online!