DOM Articles

Sort by:


  1. multiple file input in Firefox 3.6

    Firefox 3.6 supports multiple file input. This new capability allows you to get several files as input at once, using standard technologies. This is a big improvement, since you used to be constrained to one file at a time, or needed to use a third party (proprietary) application. This will be particularly useful, for example, for photo uploads.

    The input tag

    To let your user select a local file, use the input tag on your Web page. This will show the file picker to the user:

    <input type="file"/>

    In Firefox 3.6, the input tag has been expanded to support multiple files:

    <input type="file" multiple/>

    The user will still see the same file picker, but will be able to select more than one file.

    The form tag

    You can still use the classic form mechanism:

    <form method="post" action="upload.php" enctype="multipart/form-data">
      <input name="uploads[]" type="file" multiple/>
      <input type="submit" value="Send">
    </form>

    If the server-side code is in PHP, don’t forget to make sure that the value of the name attribute includes the brackets. The brackets are not part of the HTML specification, but they are required so PHP can expose the result of the request as an array (see the PHP documentation).

    Here’s an example, which goes through the file list and prints each file name:

    foreach ($_FILES['uploads']['name'] as $filename) {
        echo '<li>' . $filename . '</li>';
    }

    Using File API

    Firefox 3.6 also supports the File API. This allows you to do extra processing on the client side before sending the files to the server. You can access the selected files through the files property of the input DOM element and then manipulate the files using the File API.

    For example, here’s how to get the name of each file selected by the user. This is done on the client side, unlike the previous PHP example.

      var input = document.querySelector("input[type='file']");
      var list = document.querySelector("ul");
      // You've selected input.files.length files
      for (var i = 0; i < input.files.length; i++) {
        // input.files[i] is a File object
        var li = document.createElement("li");
        li.innerHTML = input.files[i].name;
        // Append each name to a list on the page (assumes a <ul> exists)
        list.appendChild(li);
      }


    See this mechanism in action in our multiple file input demo. You’ll need Firefox 3.6 (beta).


    To learn more about multiple file input, check out the documentation on MDC.

  2. arun talks about html5, fonts and india

    Recently Arun Ranganathan, one of the members of the Mozilla Evangelism team, created a video for MozCamp Mumbai. It’s about 20 minutes long and covers a huge number of topics: the new @font-face CSS rule and how it affects people’s ability to receive properly localized content, the differences between the various standards efforts (there’s more than just HTML5), and some demos showing what’s possible when you combine video with the web.

    Download: 640×480 – Ogg Theora or MP4 | 320×240 – Ogg Theora or MP4
  3. HTML5 drag and drop in Firefox 3.5

    This post is from Les Orchard, who works on Mozilla’s web development team.


    Drag and drop is one of the most fundamental interactions afforded by graphical user interfaces. In one gesture, it allows users to pair the selection of an object with the execution of an action, often including a second object in the operation. It’s a simple yet powerful UI concept used to support copying, list reordering, deletion (à la the Trash / Recycle Bin), and even the creation of link relationships.

    Since it’s so fundamental, offering drag and drop in web applications has been a no-brainer ever since browsers first offered mouse events in DHTML. But, although mousedown, mousemove, and mouseup made it possible, the implementation has been limited to the bounds of the browser window. Additionally, since these events refer only to the object being dragged, there’s a challenge to find the subject of the drop when the interaction is completed.

    Of course, that doesn’t prevent most modern JavaScript frameworks from abstracting away most of the problems and throwing in some flourishes while they’re at it. But, wouldn’t it be nice if browsers offered first-class support for drag and drop, and maybe even extended it beyond the window sandbox?

    As it turns out, this very wish is answered by the HTML 5 specification section on new drag-and-drop events, and Firefox 3.5 includes an implementation of those events.

    If you want to jump straight to the code, I’ve put together some simple demos of the new events.

    I’ve even scratched an itch of my own and built the beginnings of an outline editor, where every draggable element is also a drop target—of which there could be dozens to hundreds in a complex document, something that gave me some minor hair-tearing moments in the past while trying to make do with plain old mouse events.

    And, all the above can be downloaded or cloned from a GitHub repository I’ve created especially for this article.

    The New Drag and Drop Events

    So, with no further ado, here are the new drag and drop events, in roughly the order you might expect to see them fired:

    • dragstart: A drag has been initiated, with the dragged element as the event target.
    • drag: The mouse has moved, with the dragged element as the event target.
    • dragenter: The dragged element has been moved into a drop listener, with the drop listener element as the event target.
    • dragover: The dragged element has been moved over a drop listener, with the drop listener element as the event target. Since the default behavior is to cancel drops, returning false or calling preventDefault() in the event handler indicates that a drop is allowed here.
    • dragleave: The dragged element has been moved out of a drop listener, with the drop listener element as the event target.
    • drop: The dragged element has been successfully dropped on a drop listener, with the drop listener element as the event target.
    • dragend: A drag has been ended, successfully or not, with the dragged element as the event target.

    Like the mouse events of yore, listeners can be attached to elements using addEventListener() directly or by way of your favorite JS library.

    Consider the following example using jQuery, also available as a live demo:

        <div id="newschool">
            <div class="dragme">Drag me!</div>
            <div class="drophere">Drop here!</div>
        </div>
        <script type="text/javascript">
            $(document).ready(function() {
                $('#newschool .dragme')
                    .attr('draggable', 'true')
                    .bind('dragstart', function(ev) {
                        var dt = ev.originalEvent.dataTransfer;
                        dt.setData("Text", "Dropped in zone!");
                        return true;
                    })
                    .bind('dragend', function(ev) {
                        return false;
                    });
                $('#newschool .drophere')
                    .bind('dragenter', function(ev) {
                        return false;
                    })
                    .bind('dragleave', function(ev) {
                        return false;
                    })
                    .bind('dragover', function(ev) {
                        return false;
                    })
                    .bind('drop', function(ev) {
                        // The transferred data is available here
                        var dt = ev.originalEvent.dataTransfer;
                        return false;
                    });
            });
        </script>

    Thanks to the new events and jQuery, this example is both short and simple—but it packs in a lot of functionality, as the rest of this article will explain.

    Before moving on, there are at least three things about the above code that are worth mentioning:

    • Drop targets are enabled by virtue of having listeners for drop events. But, per the HTML 5 spec, draggable elements need an attribute of draggable="true", set either in markup or in JavaScript.

      Thus, $('#newschool .dragme').attr('draggable', 'true').

    • The original DOM event (as opposed to jQuery’s event wrapper) offers a property called dataTransfer. Beyond just manipulating elements, the new drag and drop events accommodate the transmission of user-definable data during the course of the interaction.
    • Since these are first-class events, you can apply the technique of Event Delegation.

      What’s that? Well, imagine you have a list of 1000 items, as part of a deeply-nested outline document, for instance. Rather than attaching listeners or otherwise fiddling with all 1000 items, simply attach a listener to the parent node (e.g. the <ul> element) and all events from the children will propagate up to the single parent listener. As a bonus, any child elements added after page load will enjoy the same benefits.

      Check out this demo, and the associated JS code to see more about these events and Event Delegation.

    Using dataTransfer

    As mentioned in the last section, the new drag and drop events let you send data along with a dragged element. But, it’s even better than that: Your drop targets can receive data transferred by content objects dragged into the window from other browser windows, and even other applications.

    Since the example is a bit longer, check out the live demo and associated code to get an idea of what’s possible with dataTransfer.

    In a nutshell, the stars of this show are the setData() and getData() methods of the dataTransfer property exposed by the Event object.

    The setData() method is typically called in the dragstart listener, loading dataTransfer up with one or more strings of content with associated recommended content types.

    For illustration, here’s a quick snippet from the example code:

        var dt = ev.originalEvent.dataTransfer;    
        dt.setData('text/plain', $('#logo').parent().text());
        dt.setData('text/html', $('#logo').parent().html());
        dt.setData('text/uri-list', $('#logo')[0].src);

    On the other end, getData() allows you to query for content by type (e.g. text/html followed by text/plain). This, in turn, allows you to decide on acceptable content types at the time of the drop event, or even during dragover to offer feedback for unacceptable types during the drag.

    Here’s another example from the receiving end of the example code:

        var dt = ev.originalEvent.dataTransfer;    
        $('.content_url .content').text(dt.getData('text/uri-list'));
        $('.content_text .content').text(dt.getData('text/plain'));
        $('.content_html .content').html(dt.getData('text/html'));

    Where dataTransfer really shines, though, is that it allows your drop targets to receive content from sources outside your defined draggable elements and even from outside the browser altogether. Firefox accepts such drags, and attempts to populate dataTransfer with appropriate content types extracted from the external object.

    Thus, you could select some text in a word processor window and drop it into one of your elements, and at least expect to find it available as text/plain content.

    You can also select content in another browser window, and expect to see text/html appear in your events. Check out the outline editing demo and see what happens when you try dragging various elements (eg. images, tables, and lists) and highlighted content from other windows onto the items there.

    Using Drag Feedback Images

    An important aspect of the drag and drop interaction is a representation of the thing being dragged. By default in Firefox, this is a “ghost” image of the dragged element itself. But, the dataTransfer property of the original Event object exposes the method setDragImage() for use in customizing this representation.

    There’s a live demo of this feature, as well as associated JS code available. The gist, however, is sketched out in these code snippets:

        var dt = ev.originalEvent.dataTransfer;    
        dt.setDragImage( $('#feedback_image h2')[0], 0, 0);
        dt.setDragImage( $('#logo')[0], 32, 32); 
        var canvas = document.createElement("canvas");
        canvas.width = canvas.height = 50;
        var ctx = canvas.getContext("2d");
        ctx.lineWidth = 8;
        ctx.lineTo(50, 50);
        ctx.lineTo(0, 50);
        ctx.lineTo(25, 0);
        dt.setDragImage(canvas, 25, 25);

    You can supply a DOM node as the first parameter to setDragImage(), which includes everything from text to images to <canvas> elements. The last two parameters set the left and top offsets at which the mouse should appear in the image while dragging.

    For example, since the #logo image is 64×64, the parameters in the second setDragImage() call place the mouse right in the center of the image. On the other hand, the first call positions the feedback image so that the mouse rests in the upper left corner.

    Using Drop Effects

    As mentioned at the start of this article, the drag and drop interaction has been used to support actions such as copying, moving, and linking. Accordingly, the HTML 5 specification accommodates these operations in the form of the effectAllowed and dropEffect properties exposed by the Event object.

    For a quick fix, check out the live demo of this feature, as well as the associated JS code.

    The basic idea is that the dragstart event listener can set a value for effectAllowed like so:

        var dt = ev.originalEvent.dataTransfer;
        switch ( {
            case 'effectdrag0': dt.effectAllowed = 'copy'; break;
            case 'effectdrag1': dt.effectAllowed = 'move'; break;
            case 'effectdrag2': dt.effectAllowed = 'link'; break;
            case 'effectdrag3': dt.effectAllowed = 'all'; break;
            case 'effectdrag4': dt.effectAllowed = 'none'; break;
        }

    The choices available for this property include the following:

    • none: no operation is permitted
    • copy: copy only
    • move: move only
    • link: link only
    • copyMove: copy or move only
    • copyLink: copy or link only
    • linkMove: link or move only
    • all: copy, move, or link

    On the other end, the dragover event listener can set the value of the dropEffect property to indicate the expected effect invoked on a successful drop. If the value does not match up with effectAllowed, the drop will be considered cancelled on completion.

    In the live demo, you should be able to see that only elements with matching effects can be dropped into the appropriate drop zones. This is accomplished with code like the following:

        var dt = ev.originalEvent.dataTransfer;
        switch ( {
            case 'effectdrop0': dt.dropEffect = 'copy'; break;
            case 'effectdrop1': dt.dropEffect = 'move'; break;
            case 'effectdrop2': dt.dropEffect = 'link'; break;
            case 'effectdrop3': dt.dropEffect = 'all'; break;
            case 'effectdrop4': dt.dropEffect = 'none'; break;
        }
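    The matching rule can be modeled as a small predicate. This is a simplified sketch of the idea, not the exact algorithm browsers apply:

```javascript
// Does the dropEffect requested by a drop zone fit within the
// effectAllowed set by the drag source?
function dropPermitted(effectAllowed, dropEffect) {
  if (effectAllowed === "none" || dropEffect === "none") return false;
  if (effectAllowed === "all" || effectAllowed === "uninitialized") return true;
  // Compound values like "copyMove" permit each effect they name
  return effectAllowed.toLowerCase().indexOf(dropEffect) !== -1;
}

dropPermitted("copyMove", "move"); // true
dropPermitted("link", "copy");     // false
dropPermitted("none", "copy");     // false
```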

    Although the OS itself can provide some feedback, you can also use these properties to update your own visible feedback, both on the dragged element and on the drop zone itself.


    The new first-class drag and drop events in HTML5 and Firefox make supporting this form of UI interaction simple, concise, and powerful in the browser. But beyond the new simplicity of these events, the ability to transfer content between applications opens brand new avenues for web-based applications and collaboration with desktop software in general.

  4. css transforms: styling the web in two dimensions

    One feature that Firefox 3.5 adds to its CSS implementation is transform functions. These let you manipulate elements in two-dimensional space by rotating, skewing, scaling, and translating them to alter their appearance.

    I’ve put together a demo that shows how some of these functions work.

    There are four animating objects in this demo. Let’s take a look at each of them.

    Rotating the Firefox logo

    On the left, we see the Firefox logo in a nice box, happily spinning in place. This is done by periodically setting the rotation value of the image object, whose ID is logoimg, like this:

      var logo = document.getElementById("logoimg");
      logoAngle = logoAngle + 2;
      if (logoAngle >= 360) {
        logoAngle = logoAngle - 360;
      }
      var style = "-moz-transform: rotate(" + logoAngle + "deg)";
      logo.setAttribute("style", style);

    Every time the animation function is run, we rotate it by 2° around its origin by constructing a style string of the form -moz-transform: rotate(Ndeg).
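    The demo’s wrap-around keeps the angle within [0, 360); the same bookkeeping can be written with a modulo, along with a helper that builds the style string (advanceAngle and rotateStyle are illustrative names, not part of the demo):

```javascript
// Advance an angle by a step, wrapping back into [0, 360)
function advanceAngle(angle, step) {
  return (angle + step) % 360;
}

// Build the style string assigned on each animation frame
function rotateStyle(angle) {
  return "-moz-transform: rotate(" + angle + "deg)";
}

advanceAngle(358, 2); // wraps to 0
rotateStyle(90);      // "-moz-transform: rotate(90deg)"
```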

    By default, all elements’ origins are at their centers (that is, 50% along each axis). The origin can be changed using the -moz-transform-origin property.

    Skewing text

    We have two examples of skewing in this demo; the first skews horizontally, which causes the text to “lean” back and forth along the X axis. The second skews vertically, which causes the baseline to pivot along the Y axis.

    In both cases, the code to accomplish this animation is essentially identical, so let’s just look at the code for skewing horizontally:

      text1SkewAngle = text1SkewAngle + text1SkewOffset;
      if (text1SkewAngle > 45) {
        text1SkewAngle = 45;
        text1SkewOffset = -2;
      } else if (text1SkewAngle < -45) {
        text1SkewAngle = -45;
        text1SkewOffset = 2;
      }
 = "skewx(" + text1SkewAngle + "deg)";

    This code updates the current amount by which the text is skewed, starting at zero degrees and moving back and forth between -45° and 45° at a rate of 2° each time the animation function is called. Positive values skew the element to the right and negative values to the left.

    Then the element’s transform style is updated, setting the transform function to be of the form skewx(Ndeg), then setting the element’s style.MozTransform property to that value.
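    The back-and-forth motion is just a value bouncing between two limits. Factored out of the demo into a pure function (a sketch; the demo itself keeps these values in globals):

```javascript
// Advance the skew angle by the current offset, reversing
// direction when a limit is reached.
function stepSkew(angle, offset) {
  var next = angle + offset;
  if (next > 45) return { angle: 45, offset: -2 };
  if (next < -45) return { angle: -45, offset: 2 };
  return { angle: next, offset: offset };
}

stepSkew(44, 2); // hits the upper limit: { angle: 45, offset: -2 }
stepSkew(0, 2);  // keeps moving: { angle: 2, offset: 2 }
```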

    Scaling elements

    The last of the examples included in the demo shows how to scale an element using the scale transform function:

      text3Scale = text3Scale + text3ScaleOffset;
      if (text3Scale > 6) {
        text3Scale = 6;
        text3ScaleOffset = -0.1;
        text3.innerHTML = "It's going away so fast!";
 = "blue";
      } else if (text3Scale < 1) {
        text3Scale = 1;
        text3ScaleOffset = 0.1;
        text3.innerHTML = "It's coming right at us!";
 = "red";
      }
 = "scale(" + text3Scale + ")";

    This code scales the element up and down between its original size (a scale factor of 1) and a scale factor of 6, moving by 0.1 units each frame. This is done by building a transform of the form scale(N), then setting the element’s style.MozTransform property to that value.

    In addition, just for fun, we’re also changing the text and the color of the text in the block as we switch scaling directions, by setting the value of the block’s innerHTML property to the new contents.

    Final notes

    Three more tidbits to take away from this:

    First, note that as the scaling text grows wider, the document’s width changes to fit it, getting wider as the text grows so that its right edge passes the edge of the document, then narrower as it shrinks again. You can see this by watching the scroll bar at the bottom of the Firefox browser window.

    Second, note that you can select and copy the text even while the elements are transformed, and the selection remains intact while the text continues to transform (although when we change the contents of the scaling example, the selection goes away).

    Third, I didn’t cover all the possible transforms here. For example, I skipped over the translate transform function, which lets you translate an object horizontally or vertically (basically, shifting its position by an offset). You can get a full list of the supported transforms on the Mozilla Developer Center web site.

    Obviously this demo is somewhat frivolous (as demos are prone to be). However, there are genuinely useful things you can do with these when designing interfaces; for example, you can draw text rotated by 90° along the Y axis of a table in order to fit row labels in a narrow but tall space.

  5. using web workers: working smarter, not harder

    This article is written by Malte Ubl, who has done a lot of great work with using Web Workers as part of the bespin project.

    In recent years, the user experience of web applications has grown richer and richer. In-browser applications like GMail, Meebo and Bespin give us an impression of how the web will look and feel in the future. One of the key aspects of creating a great user experience is to build applications that are highly responsive. Users hate to wait and they hate those moments where an application seems to work for a while, then stops responding to their input.

    At the core of modern client-side web applications lies the JavaScript programming language. JavaScript and the DOM it talks to are inherently single-threaded, meaning that in JavaScript only one thing can happen at any given time. Even if your computer has 32 cores, a long computation will keep only one of them busy. For example, if your script calculates the perfect trajectory to get to the moon, it won’t be able to render an animation showing that trajectory at the same time, and it won’t be able to react to user events like clicks or keystrokes while the calculation runs.


    To maintain responsiveness while performing intense computations, concurrency is part of most modern programming languages. In the past, concurrency was often achieved with threads. Threads, however, make it increasingly hard for the programmer to understand program flow, which often leads to very hard-to-understand bugs and to chaotic behavior when different threads manipulate the same data simultaneously.

    Web Workers, specified by the WHATWG, were introduced in Firefox 3.5 to add concurrency to JavaScript applications without also introducing the problems associated with multithreaded programs. Starting a worker is easy: just use the new Worker interface.

    In the following example, the worker.js file is loaded and a new thread is created to execute its code.

    // Start worker from file "worker.js"
    var worker = new Worker("worker.js");

    Communication between the main UI thread and workers is done by passing messages using the postMessage method. postMessage was added for cross-window communication in Firefox 3. To send a message from the worker back to the page, you just post a message:

    // Send a message back to the main UI thread
    postMessage("Hello Page!");

    To catch the message from the worker, you define an “onmessage” callback on the worker object. Here we just alert the event data that is passed to the callback function. In this case, “” contains the “Hello Page!” string that was sent above.

    worker.onmessage = function (event) {
      // Alert the message sent by the worker
      alert(;
    };
    // Send a message to the worker
    worker.postMessage("Hello Worker");

    To send a message to the worker we call the postMessage method on the worker object. To receive these messages inside the worker, simply define an onmessage function that will be called every time a message is posted to the worker.

    Error Handling

    There are two levels at which you can recover from runtime errors that occur in a worker. First, you can define an onerror function within the worker. Second, you can handle errors from outside the worker by attaching an onerror handler to the worker object:

    worker.onerror = function (event) {
      event.preventDefault();
      alert(event.message);
    };

    The event.preventDefault() method prevents the default action, which would be to display the error to the user or at least show it in the error console. Here we alert the error message instead.

    Shared Nothing

    Workers share absolutely no state with the page they are associated with or with any other workers; the only way they can interact at all is through postMessage. Workers also have no access to the DOM, so they cannot directly manipulate the web page. There is thus no risk of data-integrity problems when multiple workers want to manipulate the same data at once.

    A standard setup using workers consists of a page script that listens for user events. When an event triggers an intensive calculation, a message is sent to the worker, which then starts the computation. The script on the page can return immediately and listen for more user events. As soon as the worker is done, it sends a return message to the page, which can then, for example, display the result.

    The unresponsive script warning that browsers display when a script takes a long time to execute becomes a thing of the past when you use web workers.

    The Fibonacci Example

    Next is an example of a worker that calculates the Fibonacci numbers from 0 to 99 in the background. Actually, because calculating Fibonacci numbers using this very inefficient method can take a lot of time for larger numbers (as in greater than something like 30) the script might never finish on your computer (or crash because it blows out the stack), but when doing it in a worker this has no effect on the responsiveness of the main web page. So you can still draw a complex animation to make the waiting time for the next number a little more fun.
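    For comparison, the same numbers can be computed iteratively in linear time, fast enough that no worker would be needed (this version is not part of the demo):

```javascript
// Iterative Fibonacci: O(n) instead of the demo's exponential recursion
function fibIter(n) {
  var a = 0, b = 1;
  for (var i = 0; i < n; i++) {
    var t = a + b;
    a = b;
    b = t;
  }
  return a;
}

fibIter(10); // 55
fibIter(30); // 832040
```

    The demo uses the naive recursive version deliberately, precisely because its slowness makes the benefit of moving work off the main thread visible.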

    This HTML page contains a script that starts a worker from the file “fib-worker.js”. Messages from the worker are displayed on the browser’s console using console.log.

    <!DOCTYPE html>
    <html>
      <head>
        <title>Web Worker API Demo</title>
        <script type="text/javascript">
          var worker = new Worker("fib-worker.js");
          worker.onmessage = function (event) {
            console.log( + " -> " +;
          };
        </script>
      </head>
    </html>

    The JavaScript file that implements the worker contains a loop that calculates Fibonacci numbers and sends the result to the page.

    // File fib-worker.js
    function fib(n) {
        return n < 2 ? n : fib(n-1) + fib(n-2);
    }
    for (var i = 0; i < 100; ++i) {
        postMessage({
            index: i,
            value: fib(i)
        });
    }

    In the example above we see that we can also pass complex objects to postMessage. These objects can contain anything that can be represented as JSON. This means that functions cannot be passed across worker boundaries and that the objects are passed by value rather than by reference.
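    A JSON round-trip illustrates these pass-by-value semantics without any worker machinery:

```javascript
// Messages are copied, not shared: functions are dropped and the
// receiver gets a distinct object, much like a JSON round-trip.
var original = { index: 1, compute: function () { return 42; } };
var copy = JSON.parse(JSON.stringify(original));

copy.index;          // 1: plain data survives the copy
typeof copy.compute; // "undefined": functions do not cross the boundary
copy === original;   // false: it is a copy, not a reference
```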

    Worker APIs

    Workers support a function called importScripts. You can use this to load more source files into the worker.

    importScripts("foo.js", "bar.js");

    When you pass multiple parameters to the function, the scripts are downloaded in parallel but executed in the order given. The function does not return until all scripts have been downloaded and executed.

    Here we load an external JavaScript file that calculates SHA-1 hash sums from strings and then we use it to hash responses from AJAX requests. We also use the standard XMLHttpRequest object to retrieve the content of the URL which is passed in via the onmessage event. The interesting part is that we don’t have to worry about making the AJAX request asynchronous because the worker itself is asynchronous with respect to page rendering, so a little waiting for the HTTP request does not hurt as much.

    function onmessage(event) {
        var xhr = new XMLHttpRequest();
'GET',, false);
        // Hash the response with the SHA-1 script loaded via importScripts()
        // (function name assumed) and post the result back to the page
        postMessage(sha1(xhr.responseText));
    }

    Other APIs Available to Workers

    Workers may use XMLHttpRequest for AJAX requests, as seen above, and can access client-side storage using the Web Storage API. Here the APIs are identical to their usage in regular JavaScript.

    The setTimeout and setInterval functions (and their counterparts clearTimeout and clearInterval), which let you execute code after a given period of time or at regular intervals, are also available within the worker, as is the well-known navigator object, which can be inspected for information about the current browser.

    More APIs may be added in the future.

    Browser Compatibility

    As of this writing (and to the knowledge of the author), Firefox 3.5 is the only browser that supports the ability to pass complex objects via postMessage and that implements the extended APIs defined above. Safari 4 implements a very basic version of the Worker API. For other browsers it is possible to use Workers via Google Gears, which originally introduced the concept to browsers.

    Real World Usage

    In the Bespin project, a browser-based source code editor, we successfully used workers to implement CPU-intensive features like real-time source code error checking and code completion. We also created a shim that implements the Worker API in terms of Google Gears, adds the missing features to Safari 4’s worker implementation, and layers transparent custom events on top of the postMessage interface. These components will be released as a stand-alone library so they can be used in other projects.

    Web Workers will play an important role in making the Open Web an even more powerful platform for sophisticated applications. Because in the end all they do is execute JavaScript, it’s easy to make scripts work on clients that do not yet have the luxury of web workers. So go ahead and add them to your applications today to make them feel just a little more responsive and more pleasant to use.

  6. exploring music with the audio tag

    Today’s demo comes to us from Samuel Goldszmidt. He’s a web developer specializing in audio applications at Institut de Recherche et Coordination Acoustique/Musique (IRCAM). IRCAM is a European institute covering science, sound and avant garde electro-acoustical art music.

    The demo uses XML to describe the various segments of a piece of music – Florence Baschet’s StreicherKreis (Circle of Strings). The music itself is a combination of stringed instruments and electronic effects. From the XML, SVG is generated for each section of the music. You can click on each section to listen to that part of the piece and a description is shown on how that particular section was created.

    As far as demos go, this is relatively simple. But it’s worth highlighting because it shows how easy it is to build a timeline around a piece of music and add descriptive information. In this case, it’s information meant to teach people how a particular effect was created. But it could be anything, from showing different camera angles of people playing the music to links about different covers of a popular piece. Opening up media to the web means that we can combine it with text, images and other media. This is just a small example.

  7. the script defer attribute

    This post is by Olivier Rochard. Olivier does research at Orange Labs in France.

    In HTML, the script element allows authors to include dynamic script in their documents. The defer attribute is a boolean attribute that indicates how the script should be executed. If the defer attribute is present, the script is executed when the page has finished parsing: the element is added to the end of the list of scripts that will execute when the document has finished parsing. Think of a FIFO processing queue: the first script element added to the queue is the first script executed, and processing proceeds sequentially in that order.
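    That queue behavior can be modeled with a plain array (a sketch of the ordering semantics only, not of a real HTML parser):

```javascript
// Deferred scripts go into a FIFO queue during parsing...
var deferQueue = [];
function defer(fn) { deferQueue.push(fn); }

// ...and run in insertion order once parsing has finished.
function finishParsing() {
  var ran = [];
  while (deferQueue.length) {
    ran.push(deferQueue.shift()());
  }
  return ran;
}

defer(function () { return "first"; });
defer(function () { return "second"; });
finishParsing(); // ["first", "second"]
```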

    There is one very good reason for using the defer attribute: performance. If you include a script element in your HTML page the script must be evaluated immediately while the page is being parsed. This means that objects have to be created, styles must be flushed, etc. This can make page loading slower. The defer attribute implies that the script has no side effects on the document as it’s being loaded and can safely be evaluated at the end of the page load.

    The defer attribute was first introduced in Internet Explorer 4, and added in the HTML 4 specification.

    A simple test

    Here is a simple first test to see how the attribute works. The following lines are in the head element of a page:

    <script>
    	var test1 = "Test 1 : fail";
    	// Log the result once the page has loaded (logging added for illustration)
    	window.onload = function () { console.log(test1); };
    </script>
    <script defer>
    	test1 = "Test 1 : pass";
    </script>

    If the defer attribute for the script element is correctly implemented the browser will:

    1. Render the page.
    2. Execute the second script element after all the others.
    3. Display “Test 1 : pass” on the Firebug console.

    If the console displays “Test 1 : fail” it’s because the scripts are executed in the same order as in the source code.

    Note that the correct syntax for XHTML documents is:

    <script defer="defer"></script>

    A more advanced test

    This second test is a way to see how the feature works in a webpage with multiple script elements inserted:

    • Inline in the head and body elements
    • External via src attribute in head and body elements
    • With dynamic DOM insertion

    Here is partial source code of a webpage that tests how defer affects script loading and parsing order:

    <!doctype html>
    <html>
        <head>
            <title>Test 2</title>
            <script> var test2 = "Test 2 :\n\n"; </script>
            <script>
                document.addEventListener("DOMContentLoaded", function () {
                    test2 += "\tDOMContentLoaded\n";
                }, false);
            </script>
            <script defer> test2 += "\tInline HEAD deferred\n"; </script>
            <script> test2 += "\tInline HEAD\n"; </script>
            <script src="script1.js" defer>
                // External HEAD deferred (script1.js)
            </script>
            <script src="script2.js">
                // External HEAD (script2.js)
            </script>
            <script>
                // Dynamic DOM insertion of a script (script3.js)
                head = document.getElementsByTagName('head')[0];
                script3 = document.createElement('script');
                script3.setAttribute('src', 'script3.js');
                head.appendChild(script3);
                // Dynamic DOM insertion of a deferred script (script4.js)
                script4 = document.createElement('script');
                script4.setAttribute('defer', 'defer');
                script4.setAttribute('src', 'script4.js');
                head.appendChild(script4);
            </script>
            <script defer>
                // Deferred dynamic DOM insertion of a script (script5.js)
                head = document.getElementsByTagName('head')[0];
                script5 = document.createElement('script');
                script5.setAttribute('src', 'script5.js');
                head.appendChild(script5);
                // Deferred dynamic DOM insertion of a deferred script
                // (script6.js)
                script6 = document.createElement('script');
                script6.setAttribute('defer', 'defer');
                script6.setAttribute('src', 'script6.js');
                head.appendChild(script6);
            </script>
        </head>
        <body onload="test2 += '\tBody onLoad\n';">
            <script defer> test2 += "\tInline BODY deferred\n"; </script>
            <script> test2 += "\tInline BODY\n"; </script>
            ... other body content ...
            <a onclick="alert(test2);">Launch test 2</a>
            ... other body content ...
            <script src="script7.js" defer>
                // External BODY deferred (script7.js)
            </script>
            <script src="script8.js">
                // External BODY (script8.js)
            </script>
        </body>
    </html>
    When you click on the “Launch test 2” link in the document, a pop-up appears with a list in it. This list shows the order in which the script elements were executed during the session.

    The test also displays the DOMContentLoaded and body.onload events when they are fired.

    If the defer attribute is correctly implemented in the browser, all the deferred lines should be near the bottom of the list.

    Results of the second test for each browser are below:

    • The defer attribute behavior in the Firefox 3.5 browser is correct:

      1. Inline HEAD
      2. External HEAD (script2.js)
      3. Dynamic DOM insertion of a script (script3.js)
      4. Inline BODY
      5. External BODY (script8.js)
      6. Inline HEAD deferred
      7. External HEAD deferred (script1.js)
      8. Dynamic DOM insertion of a deferred script (script4.js)
      9. Inline BODY deferred
      10. External BODY deferred (script7.js)
      11. Deferred dynamic DOM insertion of a script (script5.js)
      12. Deferred dynamic DOM insertion of a deferred script (script6.js)
      13. DOMContentLoaded
      14. Body onLoad
    • The defer attribute behavior in the IE 8 browser is erratic: the order is different at each reload:

      1. Inline HEAD
      2. External HEAD (script2.js)
      3. Inline BODY
      4. External BODY (script8.js)
      5. Dynamic DOM insertion of a script (script3.js)
      6. Dynamic DOM insertion of a deferred script (script4.js)
      7. Inline HEAD deferred
      8. External HEAD deferred (script1.js)
      9. Inline BODY deferred
      10. External BODY deferred (script7.js)
      11. Body onLoad
      12. Deferred dynamic DOM insertion of a script (script5.js)
      13. Deferred dynamic DOM insertion of a deferred script (script6.js)
    • The defer attribute behavior in a WebKit browser (Safari 4.0) is erratic: the order is different at each reload:

      1. Inline HEAD deferred
      2. Inline HEAD
      3. External HEAD deferred (script1.js)
      4. External HEAD (script2.js)
      5. Inline BODY deferred
      6. Inline BODY
      7. External BODY deferred (script7.js)
      8. Deferred dynamic DOM insertion of a script (script5.js)
      9. Dynamic DOM insertion of a deferred script (script4.js)
      10. Deferred dynamic DOM insertion of a deferred script (script6.js)
      11. Dynamic DOM insertion of a script (script3.js)
      12. External BODY (script8.js)
      13. DOMContentLoaded
      14. Body onLoad
    • The defer attribute behavior in the Opera 10.00 Beta browser:

      1. Inline HEAD deferred
      2. Inline HEAD
      3. External HEAD deferred (script1.js)
      4. External HEAD (script2.js)
      5. Dynamic DOM insertion of a script (script3.js)
      6. Dynamic DOM insertion of a deferred script (script4.js)
      7. Deferred dynamic DOM insertion of a script (script5.js)
      8. Deferred dynamic DOM insertion of a deferred script (script6.js)
      9. Inline BODY deferred
      10. Inline BODY
      11. External BODY deferred (script7.js)
      12. External BODY (script8.js)
      13. DOMContentLoaded
      14. Body onLoad

    We hope that this has been a useful introduction to how the defer attribute works in Firefox 3.5. The tests above will also help you predict behavior in other browsers.


  8. saving data with localStorage

    This post was written by Jeff Balogh. Jeff works on Mozilla’s web development team.

    New in Firefox 3.5, localStorage is a part of the Web Storage specification. localStorage provides a simple JavaScript API for persisting key-value pairs in the browser. It shouldn’t be confused with the SQL database storage proposal, which is a separate (and more contentious) part of the Web Storage spec. Key-value pairs could conceivably be stored in cookies, but you wouldn’t want to do that: cookies are sent to the server with every request, presenting performance issues with large data sets and the potential for security problems, and you have to write your own interface for treating cookies like a database.

    Here’s a small demo that stores the content of a textarea in localStorage. You can change the text, open a new tab, and find your updated content. Or you can restart the browser and your text will still be there.

    The easiest way to use localStorage is to treat it like a regular object:

    >>> = 'bar'
    >>> localStorage.length
    >>> localStorage[0]
    >>> localStorage['foo']
    >>> delete localStorage['foo']
    >>> localStorage.length
    >>> localStorage.not_set

    There’s also a more wordy API for people who like that sort of thing:

    >>> localStorage.clear()
    >>> localStorage.setItem('foo', 'bar')
    >>> localStorage.getItem('foo')
    "bar"
    >>> localStorage.key(0)
    "foo"
    >>> localStorage.removeItem('foo')
    >>> localStorage.length
    0

    If you want a localStorage database mapped to the current session, you can use sessionStorage. It has the same interface as localStorage, but the lifetime of sessionStorage is limited to the current browser window. You can follow links around the site in the same window and sessionStorage will be maintained (going to different sites is fine too), but once that window is closed the database will be deleted. localStorage is for long-term storage, as the W3C spec instructs browsers to consider the data “potentially user-critical”.

    I was a tad disappointed when I found out that localStorage only supports storing strings, since I was hoping for something more structured. But with native JSON support it’s easy to create an object store on top of localStorage:

    Storage.prototype.setObject = function(key, value) {
        this.setItem(key, JSON.stringify(value));
    };

    Storage.prototype.getObject = function(key) {
        return JSON.parse(this.getItem(key));
    };

    localStorage databases are scoped to an HTML5 origin, basically the tuple (scheme, host, port). This means that the database is shared across all pages on the same domain, even concurrently by multiple browser tabs. However, a page connecting over http:// cannot see a database that was created during an https:// session.

    localStorage and sessionStorage are supported by Firefox 3.5, Safari 4.0, and IE8. You can find more compatibility details online, including more detail on the storage event.

  9. DOM Traversal in Firefox 3.5

    Firefox 3.5 includes new support for two W3C DOM traversal specifications. The first, the Element Traversal API, focuses on making element-by-element traversal easier; the second, the NodeIterator interface, makes finding nodes of all types much easier.

    Element Traversal API

    The purpose of the Element Traversal API is to make it easier for developers to traverse through DOM elements without having to worry about intermediary text nodes, comment nodes, etc. This has long been a bane of web developers, in particular, with cases like document.documentElement.firstChild yielding different results depending on the whitespace structure of a document.

    The Element Traversal API introduces a number of new DOM node properties which can make this traversing much simpler.

    Here’s a full break-down of the existing DOM node properties and their new counterparts:

    Purpose     All DOM Nodes         Just DOM Elements
    First       .firstChild           .firstElementChild
    Last        .lastChild            .lastElementChild
    Previous    .previousSibling      .previousElementSibling
    Next        .nextSibling          .nextElementSibling
    Length      .childNodes.length    .childElementCount
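
    To see why the element-only accessors matter, here is a sketch using plain-object stand-in nodes (nodeType 1 = element, 3 = text) rather than a live DOM:

```javascript
// Stand-in nodes: just enough structure to show the traversal problem.
function el(name, children) {
  return { nodeType: 1, nodeName: name, childNodes: children || [] };
}
function text(value) {
  return { nodeType: 3, nodeValue: value };
}

// What .firstElementChild does: skip past any non-element nodes.
function firstElementChild(node) {
  for (var i = 0; i < node.childNodes.length; i++) {
    if (node.childNodes[i].nodeType === 1) {
      return node.childNodes[i];
    }
  }
  return null;
}

// A body whose markup contains whitespace before the first tag:
var body = el("BODY", [text("\n  "), el("P"), text("\n")]);

body.childNodes[0].nodeType;        // 3: .firstChild is a whitespace text node
firstElementChild(body).nodeName;   // "P": the element accessor skips it
```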

    These properties provide a fairly simple addition to the DOM specification (and, honestly, they’re something that should’ve been in the specification to begin with).

    There is one property that is conspicuously absent, though: .childElements (as a counterpart to .childNodes). This property (which contained a live NodeSet of the child elements of the DOM element) appeared in previous iterations of the specification but seems to have ended up on the cutting room floor at some point in the interim.

    But all is not lost. Right now Internet Explorer, Opera, and Safari all support a .children property which provides a super-set of the functionality that was supposed to have been made possible by .childElements. When support for the Element Traversal API was finally landed for Firefox 3.5, support for .children was included. This means that every major browser now supports this property (far in advance of all browsers supporting the rest of the true Element Traversal specification).

    Some examples of the Element Traversal API (and .children) in action:

    Show next element when a click occurs:

    someElement.addEventListener("click", function(){ = "block";
    }, false);

    Add classes to all of the child elements:

    for ( var i = 0; i < someElement.children.length; i++ ) {
        someElement.children[ i ].className = "active";
    }

    NodeIterator API

    NodeIterator is a relatively old API that hasn’t seen wide adoption, and has just been implemented in Firefox 3.5. Specifically, the NodeIterator API is designed to allow for easy traversal of all nodes in a DOM document (this includes text nodes, comments, etc.).

    The API itself is rather convoluted (containing a number of features that aren’t immediately important to most developers), but if you wish to use it for some simpler tasks it can be quite easy.

    The API works by creating a NodeIterator (using document.createNodeIterator) and passing in a series of filters. The NodeIterator is capable of returning all nodes in a document (or within the context of a given node) thus you’ll want to filter it down to only show the ones that you desire. A simple example of this can be found below.

    Construct a NodeIterator for iterating through all the comment nodes in a document, removing each one as it is found:

    var nodeIterator = document.createNodeIterator(
        document,
        NodeFilter.SHOW_COMMENT,
        null,
        false
    );

    var node;
    while ( (node = nodeIterator.nextNode()) ) {
        node.parentNode.removeChild( node );
    }
    Once constructed, the NodeIterator is bi-directional: you can move in either direction, using previousNode or nextNode.
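
    Conceptually, a NodeIterator is a filtered depth-first walk over the tree. Here is a rough sketch of that logic using plain-object stand-in nodes (not a real DOM); the whatToShow constants follow the DOM’s bitmask scheme, where each node type’s bit is 1 << (nodeType - 1):

```javascript
// whatToShow bitmask values from the DOM spec.
var SHOW_ELEMENT = 0x1;     // nodeType 1
var SHOW_COMMENT = 0x80;    // nodeType 8

// Stand-in nodes, not a real DOM.
function makeNode(nodeType, name, children) {
  return { nodeType: nodeType, nodeName: name, childNodes: children || [] };
}

// Depth-first collection of nodes matching whatToShow – roughly what
// document.createNodeIterator does for you, minus the lazy cursor.
function collectNodes(root, whatToShow) {
  var matches = [];
  (function walk(node) {
    if (whatToShow & (1 << (node.nodeType - 1))) {
      matches.push(node);
    }
    node.childNodes.forEach(walk);
  })(root);
  return matches;
}

var doc = makeNode(1, "HTML", [
  makeNode(8, "#comment"),
  makeNode(1, "BODY", [makeNode(8, "#comment")])
]);

collectNodes(doc, SHOW_COMMENT).length;   // 2: both comments are found
collectNodes(doc, SHOW_ELEMENT).length;   // 2: HTML and BODY
```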

    Perhaps the best use of the API is in traversing over commonly-used (but difficult to traverse) nodes like comments and text nodes. Since a few APIs already exist for traversing DOM elements (such as getElementsByTagName), this comes as a welcome respite from the normal means of node traversal.

  10. DOM selectors API in Firefox 3.5

    The Selectors API recommendation, published by the W3C, is a relatively new effort that gives JavaScript developers the ability to find DOM elements on a page using CSS selectors. This single API takes the complicated process of traversing and selecting elements from the DOM and unifies it under a simple interface.

    Out of all the recent work to come out of the standards process this is one of the better-supported efforts across all browsers: Usable today in Internet Explorer 8, Chrome, and Safari and arriving in Firefox 3.5 and Opera 10.

    Using querySelectorAll

    The Selectors API provides two methods on all DOM documents, elements, and fragments: querySelector and querySelectorAll. The methods work virtually identically: both accept a CSS selector and return the resulting DOM elements (the exception being that querySelector returns only the first matching element).

    For example, given the following HTML snippet:

    <div id="id" class="class">
        <p>First paragraph.</p>
        <p>Second paragraph.</p>
    </div>

    We can use querySelectorAll to make the background of all the paragraphs inside the div with the ID ‘id’ red:

    var p = document.querySelectorAll("#id p");
    for ( var i = 0; i < p.length; i++ ) {
        p[i].style.backgroundColor = "red";
    }

    Or we could find the first child paragraph of a div that has a class of ‘class’ and give it a class name of ‘first’.

    document.querySelector("div.class > p:first-child")
        .className = "first";

    Normally these types of traversals would be very tedious in long-form JavaScript/DOM code, taking up multiple lines and queries each.

    While the actual use of the Selectors API methods is relatively simple (each takes a single argument), the challenging part is choosing which CSS selectors to use. The Selectors API taps into the native CSS selectors provided by the browser for styling elements with CSS. For most browsers (Firefox, Safari, Chrome, and Opera) this means that you have access to the full gamut of CSS 3 selectors. Internet Explorer 8 provides a more limited subset that encompasses CSS 2 selectors (which are still terribly useful).

    The biggest hurdle for most new users of the Selectors API is determining which CSS selectors are appropriate for selecting the elements that you desire – especially since most developers who write cross-browser code have significant experience with only a limited subset of fully-working CSS 1 selectors.

    While the CSS 2 and CSS 3 selector specifications can serve as a good start for learning more about what’s available to you, a number of useful guides also exist online.

    Implementations in the Wild

    The most compelling use case of the Selectors API is not its direct use by web developers, but its use by third-party libraries that already provide DOM CSS selector functionality. The trickiest problem with adopting the Selectors API today is that it isn’t available in all of the browsers that developers must still target (including IE 6, IE 7, and Firefox 3). Thus, until those browsers are no longer used, we must use some intermediary utility to recreate the full DOM CSS selector experience.

    Thankfully, a number of libraries already exist that provide an API compatible with the Selectors API (in fact, much of the inspiration for the Selectors API comes from the existence of these libraries in the first place). Additionally, many of these implementations already use the Selectors API behind the scenes. This means that you can use DOM CSS selectors in all the browsers that you support AND get the benefit of faster performance from the native Selectors API implementation, with no extra work on your part.

    Several existing implementations gracefully take advantage of the new Selectors API when it is available.
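
    That graceful-fallback pattern can be sketched as a simple feature test. The stub “documents” and the fallback engine below are hypothetical stand-ins for illustration, not any particular library’s API:

```javascript
// Prefer the native Selectors API when the document provides it,
// otherwise hand the selector to a JavaScript selector engine.
function select(selector, doc, fallbackEngine) {
  if (typeof doc.querySelectorAll === "function") {
    return doc.querySelectorAll(selector);
  }
  return fallbackEngine(selector, doc);
}

// Stub "documents" standing in for a new and an old browser:
var modernDoc = {
  querySelectorAll: function (s) { return ["native result for " + s]; }
};
var legacyDoc = {};
function fallbackEngine(s) { return ["fallback result for " + s]; }

select("#id p", modernDoc, fallbackEngine);  // uses the native implementation
select("#id p", legacyDoc, fallbackEngine);  // falls back to the JS engine
```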

    It’s important to emphasize the large leap in performance that you’ll gain from using this new API (in comparison to the traditional mix of DOM and JavaScript that you must employ). You can really see the difference when you look at the improvement that occurred when JavaScript libraries began to implement the new Selectors API.

    Benchmarks run when the libraries first adopted the API showed a dramatic increase in performance once they began using the native Selectors API implementations – it’s quite likely that this performance increase will happen in your applications, as well.

    Test Suite

    To coincide with the definition of the Selectors API specification a Selectors API test suite was created by John Resig of Mozilla. This test suite can be used as a way to determine the quality of the respective Selectors API implementations in the major browsers.

    The current results for the browsers that support the API are:

    • Firefox 3.5: 99.3%
    • Safari 4: 99.3%
    • Chrome 2: 99.3%
    • Opera 10b1: 97.5%
    • Internet Explorer 8: 47.4%

    Internet Explorer 8, as mentioned before, is missing most CSS 3 selectors – thus failing most of the associated tests.

    As it stands, the Selectors API should serve as a simple, and fast, way of selecting DOM elements on a page. It’s already benefiting those who use JavaScript libraries that provide similar functionality, so feel free to dig in today and give the API a try.