JavaScript Articles

  1. using HTML5 video with fallbacks to other formats

    The Mozilla Support Project (SUMO for short) is an open, volunteer-powered community that helps over 3 million Firefox users a week get support and help with their favorite browser. The Firefox support community maintains a knowledge base with articles in over 30 languages and works directly with users through our support forums or live chat service. They’ve put together the following demonstration of how to use open video and Flash-based video at the same time to provide embedded videos to users regardless of their browser. This demo article was written by Laura Thomson, Cheng Wang and Eric Cooper.

    Note: This article shows how to add video objects to a page using JavaScript. For most pages, we suggest that you read the article on how to use the video tag to provide simple markup-based fallbacks. Markup-based fallbacks are much more elegant than JavaScript solutions and are generally recommended for use on the web.

    One of the challenges to open video adoption on the web is making sure that online video still performs well on browsers that don’t currently support open video.  Rather than asking users with these browsers to download the video file and use a separate viewer, the new <video> tag degrades gracefully to allow web developers to provide a good experience for everyone.  As Firefox 3.5 upgrades the web, users in this transitional period can be shown open video or video using an existing plugin in an entirely seamless way depending on what their browser supports.

    At SUMO, we use this system to provide screencasts of problem-solving steps such as in the article on how to make Firefox the default browser.

    If you visit the page using Firefox 3.5, or another browser with open video support, and click on the “Watch a video of these instructions” link, you get a screencast using an Ogg-formatted file.  If you visit with a browser that does not support <video> you get the exact same user experience using an open source .flv player and the same video encoded in the .flv format, or in some cases using the SWF Flash format.  These alternate viewing methods use the virtually ubiquitous Adobe Flash plugin which is one of the most common ways to show video on the web.

    The code works as follows.

    In the page which contains the screencasts, we include some JavaScript.   Excerpts from this code follow, but you can see or check out the complete listing from Mozilla SVN.

    The code begins by setting up an object to represent the player:

    Screencasts.Player = {
        width: 640,
        height: 480,
        thumbnails: [],
        priority: { 'ogg': 1, 'swf': 2, 'flv': 3 },
        handlers: { 'swf': 'Swf', 'flv': 'Flash', 'ogg': 'Ogg' },
        properNouns: { 'swf': 'Flash', 'flv': 'Flash', 'ogg': 'Ogg Theora' },
        canUseVideo: false,
        isThumb: false,
        thumbWidth: 160,
        thumbHeight: 120
    };

    We assign a priority to each of the possible video formats.  You’ll notice we also have the 'canUseVideo' attribute, which defaults to false.

    Later on in the code, we test the user’s browser to see if it is video-capable:

    var detect = document.createElement('video');
    if (typeof detect.canPlayType === 'function' &&
        detect.canPlayType('video/ogg;codecs="theora,vorbis"') == 'probably') {
        Screencasts.Player.canUseVideo = true;
        Screencasts.Player.priority.ogg =
            Screencasts.Player.priority.flv + 2;
    }

    If we can create a video element and it indicates that it can play the Ogg Theora format we set canUseVideo to true and increase the priority of the Ogg file to be greater than the priority of the .flv file. (Note that you could also detect if the browser could play .mp4 files to support Safari out of the box.)

    Finally, we use the priority to select which file is actually played, by iterating through the list of files to find the one that has the highest priority:

    for (var x = 0; x < file.length; x++) {
        if (!best) {
            best = file[x];
        } else {
            if (this.priority[best] < this.priority[file[x]]) {
                best = file[x];
            }
        }
    }

    With these parts in place, the browser displays only the highest priority video and in a format that it can handle.
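    The selection logic above can be written as a small self-contained function. This is a sketch of the idea, not SUMO’s exact code:

```javascript
// Sketch of the format-selection logic: given a list of available formats
// and a priority table, pick the format with the highest priority.
var priority = { 'ogg': 1, 'swf': 2, 'flv': 3 };

function pickBest(files, priority) {
    var best = null;
    for (var x = 0; x < files.length; x++) {
        if (!best || priority[best] < priority[files[x]]) {
            best = files[x];
        }
    }
    return best;
}

// Without open video support, the Flash formats win:
console.log(pickBest(['ogg', 'swf', 'flv'], priority)); // → 'flv'

// After detection succeeds, ogg is bumped above flv and wins:
priority.ogg = priority.flv + 2;
console.log(pickBest(['ogg', 'swf', 'flv'], priority)); // → 'ogg'
```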

    If you want to learn more about the Mozilla Support Project or get involved in helping Firefox users, check out their guide to getting started.


    * Note: To view this demo using Safari 4 on Mac OS X, you will need to add Ogg support to QuickTime using the Xiph QuickTime plugin available from

  2. Black Box Driven Development in JavaScript

    Sooner or later every developer discovers the beauty of design patterns. Also sooner or later, the developer finds that most of the patterns are not applicable in their pure form. Very often we use variations. We change the well-known definitions to fit our use cases. I know that we (the programmers) like buzzwords. Here is a new one – Black Box Driven Development, or simply BBDD. I started applying the concept a couple of months ago, and I can say that the results are promising. After finishing several projects, I started seeing the good practices and formed three principles.

    What is a black box?

    Before going over the principles of BBDD, let’s see what is meant by a black box. According to Wikipedia:

    In science and engineering, a black box is a device, system or object which can be viewed in terms of its input, output and transfer characteristics without any knowledge of its internal workings.

    In programming, every piece of code that accepts input, performs actions and returns an output can be considered a black box. In JavaScript, we could apply the concept easily by using a function. For example:

    var Box = function(a, b) {
        var result = a + b;
        return result;
    };

    This is the simplest version of a BBDD unit. It is a box that performs an operation and returns output immediately. However, very often we need something else. We need continuous interaction with the box. This is another kind of box that I like to call a living black box.

    var Box = function(a, b) {
        var api = {
            calculate: function() {
                return a + b;
            }
        };
        return api;
    };

    We have an API containing all the public functions of the box. It is identical to the revealing module pattern. The most important characteristic of this pattern is that it brings encapsulation. We have a clear separation of the public and private objects.
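    A quick way to see that encapsulation in action: only what is attached to the returned api object is reachable from outside the box (the `sum` variable below is my own illustration, not the article’s):

```javascript
var Box = function(a, b) {
    var sum = a + b; // private: closed over, never exported
    var api = {
        calculate: function() {
            return sum;
        }
    };
    return api;
};

var box = Box(2, 3);
console.log(box.calculate()); // → 5
console.log(box.sum);         // → undefined: the internal value is hidden
```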

    Now that we know what a black box is, let’s check out the three principles of BBDD.

    Principle 1: Modulize everything

    Every piece of logic should exist as an independent module – in other words, a black box. At the beginning of the development cycle it is a little bit difficult to recognize these pieces. Spending too much time architecting the application without writing even a line of code may not produce good results. The approach that works involves coding. We should sketch the application and even build part of it. Once we have something, we can start thinking about black boxing it. It is also much easier to jump into the code and make something without worrying whether it is right or wrong. The key is to refactor the implementation till you feel that it is good enough.

    Let’s take the following example:

    $(document).ready(function() {
        if(window.localStorage) {
            var products = window.localStorage.getItem('products') || [], content = '';
            for(var i=0; i<products.length; i++) {
                content += products[i] + '<br />';
            }
            $('.content').html(content);
        } else {
            $('.error').css('display', 'block');
            $('.error').html('Error! Local storage is not supported.');
        }
    });

    We are getting an array called products from the local storage of the browser. If the browser does not support local storage, then we show a simple error message.

    The code as it is works, and it is fine. However, there are several responsibilities merged into a single function. The first optimization we have to make is to form a good entry point for our code. Sending just a newly defined closure to $(document).ready is not flexible. What if we want to delay the execution of our initial code, or run it in a different way? The snippet above could be transformed into the following:

    var App = function() {
        var api = {};
        api.init = function() {
            if(window.localStorage) {
                var products = window.localStorage.getItem('products') || [], content = '';
                for(var i=0; i<products.length; i++) {
                    content += products[i] + '<br />';
                }
                $('.content').html(content);
            } else {
                $('.error').css('display', 'block');
                $('.error').html('Error! Local storage is not supported.');
            }
            return api;
        };
        return api;
    };
    var application = App();
    application.init();

    Now, we have better control over the bootstrapping.

    The source of our data at the moment is the local storage of the browser. However, we may need to fetch the products from a database or simply use a mock-up. It makes sense to extract this part of the code:

    var Storage = function() {
        var api = {};
        api.exists = function() {
            return !!window && !!window.localStorage;
        };
        api.get = function() {
            return window.localStorage.getItem('products') || [];
        };
        return api;
    };

    We have two other operations that could form another box – setting HTML content and showing an element. Let’s create a module that will handle the DOM interaction.

    var DOM = function(selector) {
        var api = {}, el;
        var element = function() {
            if(!el) {
                el = $(selector);
                if(el.length == 0) {
                    throw new Error('There is no element matching "' + selector + '".');
                }
            }
            return el;
        };
        api.content = function(html) {
            element().html(html);
            return api;
        };
 = function() {
            element().css('display', 'block');
            return api;
        };
        return api;
    };

    The code is doing the same thing as in the first version. However, we now have a helper function, element, that checks whether the passed selector matches anything in the DOM tree. We are also black boxing the jQuery element, which makes our code much more flexible. Imagine that we decide to remove jQuery. The DOM operations are hidden in this module, so it would be trivial to edit it and switch to vanilla JavaScript, for example, or some other library. If we stayed with the old variant, we would probably have to go through the whole code base replacing code snippets.

    Here is the transformed script. A new version that uses the modules that we’ve created above:

    var App = function() {
        var api = {},
            storage = Storage(),
            c = DOM('.content'),
            e = DOM('.error');
        api.init = function() {
            if(storage.exists()) {
                var products = storage.get(), content = '';
                for(var i=0; i<products.length; i++) {
                    content += products[i] + '<br />';
                }
                c.content(content);
            } else {
                e.content('Error! Local storage is not supported.').show();
            }
            return api;
        };
        return api;
    };

    Notice that we have separation of responsibilities. We have objects that play roles. It is easier and much more interesting to work with such a codebase.

    Principle 2: Expose only public methods

    What makes the black box valuable is the fact that it hides the complexity. The programmer should expose only methods (or properties) that are needed. All the other functions that are used for internal processes should be private.

    Let’s look again at the DOM module above:

    var DOM = function(selector) {
        var api = {}, el;
        var element = function() { … };
        api.content = function(html) { … };
 = function() { … };
        return api;
    };

    When a developer uses our class, he is interested in two things – changing the content and showing a DOM element. He should not have to think about validations or about changing CSS properties. In our example, there are a private variable, el, and a private function, element. They are hidden from the outside world.

    Principle 3: Use composition over inheritance

    One of the popular ways to inherit classes in JavaScript uses the prototype chain. In the following snippet, class C inherits from class A:

    function A(){};
    A.prototype.someMethod = function(){};
    function C(){};
    C.prototype = new A();
    C.prototype.constructor = C;
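    Run as written, the chain gives C instances access to A’s methods:

```javascript
function A() {}
A.prototype.someMethod = function() { return 'from A'; };
function C() {}
C.prototype = new A();
C.prototype.constructor = C;

var c = new C();
console.log(c.someMethod()); // → 'from A', found via the prototype chain
console.log(c instanceof A); // → true
```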

    However, if we use the revealing module pattern, it makes sense to use composition. That is because we are dealing with objects and not functions (in fact, functions in JavaScript are also objects). Let’s say that we have a box that implements the observer pattern, and we want to extend it.

    var Observer = function() {
        var api = {}, listeners = {};
        api.on = function(event, handler) { … };
 = function(event, handler) { … };
        api.dispatch = function(event) { … };
        return api;
    };
    var Logic = function() {
        var api = Observer();
        api.customMethod = function() { … };
        return api;
    };

    We get the required functionality by assigning an initial value to the api variable. We should note that every class that uses this technique receives a brand new observer object, so there is no way to produce collisions.


    Black box driven development is a nice way to architect your applications. It provides encapsulation and flexibility. BBDD comes with a simple module definition that helps organize big projects (and teams). I saw several developers work on the same project, and they all built their black boxes independently.

  3. Capturing – Improving Performance of the Adaptive Web

    Responsive design is now widely regarded as the dominant approach to building new websites. With good reason, too: a responsive design workflow is the most efficient way to build tailored visual experiences for different device screen sizes and resolutions.

    Responsive design, however, is only the tip of the iceberg when it comes to creating a rich, engaging mobile experience.

    Image Source: For a Future-Friendly Web by Brad Frost

    The issue of performance with responsive websites

    Performance is one of the most important features of a website, but is also frequently overlooked. Performance is something that many developers struggle with – in order to create high-performing websites you need to spend a lot of time tuning your site’s backend. Even more time is required to understand how browsers work, so that you make rendering pages as fast as possible.

    When it comes to creating responsive websites, the performance challenges are even more difficult because you have a single set of markup that is meant to be consumed by all kinds of devices. One problem you hit is the responsive image problem – how do you ensure that big images intended for your Retina Macbook Pro are not downloaded on an old Android phone? How do you prevent desktop ads from rendering on small screen devices?

    It’s easy to overlook performance as a problem because we often conduct testing under perfect conditions – using a fast computer, fast internet, and close proximity to our servers. Just to give you an idea of how evident this problem is, we conducted an analysis into some top responsive e-commerce sites which revealed that the average responsive site home page consists of 87.2 resources and is made up of 1.9 MB of data.

    It is possible to solve the responsive performance problem by making the necessary adjustments to your website manually, but performance tuning by hand involves both complexity and repetition, and that makes it a great candidate for creating tools. With Capturing, we intend to make creating high-performing adaptive web experiences as easy as possible.

    Introducing Capturing

    Capturing is a client-side API we’ve developed to give developers complete control over the DOM before any resources have started loading. With responsive sites, it is a challenge to control what resources you want to load based on the conditions of the device: all current solutions require you to make significant changes to your existing site by either using server-side user-agent detection, or by forcing you to break semantic web standards (for example, changing the src attribute to data-src).

    Our approach to giving you resource control is to capture the source markup before it has a chance to be parsed by the browser, and then reconstruct the document with resources disabled.
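    As a rough mental model (a simplification I am sketching here, not Mobify’s actual implementation), “reconstructing the document with resources disabled” can be pictured as rewriting resource attributes in the captured markup before the parser ever sees them:

```javascript
// Toy model: rename src attributes in raw markup so that nothing downloads
// when the markup is parsed; the 'x-src' attribute name is illustrative.
function disableResources(html) {
    return html.replace(/\bsrc=/g, 'x-src=');
}

var captured = '<img src="horse.jpg">';
console.log(disableResources(captured)); // → '<img x-src="horse.jpg">'
```

The script can then decide which resources to re-enable (by restoring src) based on device conditions, before handing the document back to the browser.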

    The ability to control resources client-side gives you an unprecedented amount of control over the performance of your website.

    Capturing was a key feature of Mobify.js 1.1, our framework for creating mobile and tablet websites using client-side templating. We have since reworked Mobify.js in our 2.0 release to be a much more modular library that can be used in any existing website, with Capturing as the primary focus.

    A solution to the responsive image problem

    One way people have been tackling the responsive image problem is by modifying existing backend markup, changing the src of all their img elements to something like data-src, and accompanying that change with a <noscript> fallback. The reason this is done is discussed in this CSS-Tricks post:

    “a src that points to an image of a horse will start downloading as soon as that image gets parsed by the browser. There is no practical way to prevent this.”

    With Capturing, this is no longer true.

    Say, for example, you had an img element that you want to modify for devices with Retina screens, but you didn’t want the original image in the src attribute to load. Using Capturing, you could do something like this:

    if (window.devicePixelRatio && window.devicePixelRatio >= 2) {
        var bannerImg = capturedDoc.getElementById("banner");
        bannerImg.src = "retinaBanner.png";
    }

    Because we have access to the DOM before any resources are loaded, we can swap the src of images on the fly before they are downloaded. The example above is very basic – a better way to highlight the power of Capturing is to demonstrate a perfect implementation of the picture polyfill.

    Picture Polyfill

    The Picture element is the official W3C HTML extension for dealing with adaptive images. There are polyfills that exist in order to use the Picture element in your site today, but none of them are able to do a perfect polyfill – the best polyfill implemented thus far requires a <noscript> tag surrounding an img element in order to support browsers without JavaScript. Using Capturing, you can avoid this madness completely.

    Open the example and be sure to fire up the network tab in web inspector to see which resources get downloaded:

    Here is the important chunk of code that is in the source of the example:


    Take note that there is an img element that uses a src attribute, but the browser only downloads the correct image. You can see the code for this example here (note that the polyfill is only available in the example, not the library itself – yet):

    Not all sites use modified src attributes and <noscript> tags to solve the responsive image problem. An alternative, if you don’t want to rely on modifying src or adding <noscript> tags for every image of your site, is to use server-side detection in order to swap out images, scripts, and other content. Unfortunately, this solution comes with a lot of challenges.

    It was easy to use server-side user-agent detection when the only device you needed to worry about was the iPhone, but with the number of new devices rolling out, keeping a dictionary of all devices containing information about their screen width, device pixel ratio, and more is a very painful task; not to mention there are certain things you cannot detect with server-side user-agent detection – such as actual network bandwidth.

    What else can you do with Capturing?

    Solving the responsive image problem is a great use-case for Capturing, but there are also many more. Here are a few more interesting examples:

    Media queries in markup to control resource loading

    In this example, we use media queries in attributes on images and scripts to determine which ones will load, just to give you an idea of what you can do with Capturing. This example can be found here:

    Complete re-writing of a page using templating

    The primary function of Mobify.js 1.1 was client-side templating to completely rewrite the pages of your existing site when responsive doesn’t offer enough flexibility, or when changing the backend is simply too painful and tedious. It is particularly helpful when you need a mobile presence, fast. This is no longer the primary function of Mobify.js, but it is still possible using Capturing.

    Check out this basic example:

    In this example, we’ve taken parts of the existing page and used them in completely new markup rendered to the browser.

    Fill your page with grumpy cats

    And of course, there is nothing more useful than replacing all the images in a page with grumpy cats! In a high-performing way, of course ;-).

    Once again, open up web inspector to see that the original images on the site did not download.


    So what’s the catch? Is there a performance penalty to using Capturing? Yes, there is, but we feel the performance gains you can make by controlling your resources outweigh the minor penalty that Capturing brings. On first load, the library (and the main executable, if not concatenated together) must download and execute, and the load time here will vary depending on the round-trip latency of the device (roughly 60ms to 300ms). However, the penalty of every subsequent request will be reduced by at least half due to the library being cached, and the just-in-time (JIT) compiler making the compilation much more efficient. You can run the test yourself!

    We also do our best to keep the size of the library to a minimum – at the time of publishing this blog post, the library is 4KB minified and gzipped.

    Why should you use Capturing?

    We created Capturing to give more control of performance to developers on the front-end. The reason other solutions fail to solve this problem is because the responsibilities of the front-end and backend have become increasingly intertwined. The backend’s responsibility should be to generate semantic web markup, and it should be the front-end’s responsibility to take the markup from the backend and process it in such a way that it is best visually represented on the device, and in a high-performing way. Responsive design solves the first issue (visually representing data), and Capturing helps solve the next (increasing performance on websites by using front-end techniques such as determining screen size and bandwidth to control resource loading).

    If you want to continue to obey the laws of the semantic web, and if you want an easy way to control performance at the front-end, we highly recommend that you check out Mobify.js 2.0!

    How can I get started using Capturing?

    Head over to our quick start guide for instructions on how to get setup using Capturing.

    What’s next?

    We’ve begun with an official developer preview of Mobify.js 2.0, which includes just the Capturing portion, but we will be adding more and more useful features.

    The next feature on the list to add is automatic resizing of images, allowing you to dynamically download images based on the size of the browser window without the need to modify your existing markup (aside from inserting a small JavaScript snippet)!

    We also plan to create other polyfills that can only be solved with Capturing, such as the new HTML5 Template Tag, for example.

    We look forward to your feedback, and we are excited to see what other developers will do with our new Mobify.js 2.0 library!

  4. Firefox 4 Performance

    Dave Mandelin from the JS team and Joe Drew from the Graphics team summarize the key performance improvements in Firefox 4.

    The web wants fast browsers. Cutting-edge HTML5 web pages play games, mash up and share maps, sound, and videos, show spreadsheets and presentations, and edit photos. Only a high-performance browser can do that. What the web wants, it’s our job to make, and we’ve been working hard to make Firefox 4 fast.

    Firefox 4 comes with performance improvements in almost every area. The most dramatic improvements are in JavaScript and graphics, which are critical for modern HTML5 apps and games. In the rest of this article, we’ll profile the key performance technologies and show how they make the web that much “more awesomer”.

    Fast JavaScript: Uncaging the JägerMonkey
    JavaScript is the programming language of the web, powering most of the dynamic content and behavior, so fast JavaScript is critical for rich apps and games. Firefox 4 gets fast JavaScript from a beast we call JägerMonkey. In techno-gobbledygook, JägerMonkey is a multi-architecture per-method JavaScript JIT compiler with 64-bit NaN-boxing, inline caching, and register allocation. Let’s break that down:

      JägerMonkey has full support for x86, x64, and ARM processors, so we’re fast on both traditional computers and mobile devices. W00t!
      (Crunchy technical stuff ahead: if you don’t care how it works, skip the rest of the sections.)

      Per-method JavaScript JIT compilation

      The basic idea of JägerMonkey is to translate (compile) JavaScript to machine code, “just in time” (JIT). JIT-compiling JavaScript isn’t new: previous versions of Firefox feature the TraceMonkey JIT, which can generate very fast machine code. But some programs can’t be “jitted” by TraceMonkey. JägerMonkey has a simpler design that is able to compile everything in exchange for not doing quite as much optimization. But it’s still fast. And TraceMonkey is still there, to provide a turbo boost when it can.

      64-bit NaN-boxing
      That’s the technical name for the new 64-bit formats the JavaScript engine uses to represent program values. These formats are designed to help the JIT compilers and tuned for modern hardware. For example, think about floating-point numbers, which are 64 bits. With the old 32-bit value formats, floating-point calculations required the engine to allocate, read, write, and deallocate extra memory, all of which is slow, especially now that processors are much faster than memory. With the new 64-bit formats, no extra memory is required, and calculations are much faster. If you want to know more, see the technical article Mozilla’s new JavaScript value representation.
      Inline caching
      Property accesses, like o.p, are common in JavaScript. Without special help from the engine, they are complicated, and thus slow: first the engine has to search the object and its prototypes for the property, next find out where the value is stored, and only then read the value. The idea behind inline caching is: “What if we could skip all that other junk and just read the value?” Here’s how it works: The engine assigns every object a shape that describes its prototype and properties. At first, the JIT generates machine code for o.p that gets the property by laborious searching. But once that code runs, the JIT finds out what o's shape is and how to get the property. The JIT then generates specialized machine code that simply verifies that the shape is the same and gets the property. For the rest of the program, that o.p runs about as fast as possible. See the technical article PICing on JavaScript for fun and profit for much more about inline caching.
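      A toy model of that idea (engine internals are C++ and real shapes are hidden classes; this JavaScript sketch just mimics the caching pattern):

```javascript
// Stand-in for a hidden-class/shape id: the object's property names.
function shapeOf(obj) {
    return Object.keys(obj).join(',');
}

// One "access site" for obj[prop], with its own single-entry inline cache.
function makeCachedGetter(prop) {
    var cachedShape = null;
    var slowPathHits = 0;
    return {
        get: function(obj) {
            if (shapeOf(obj) === cachedShape) {
                return obj[prop];   // fast path: shape already verified
            }
            slowPathHits++;         // slow path: full search, then cache
            cachedShape = shapeOf(obj);
            return obj[prop];
        },
        slowPathHits: function() { return slowPathHits; }
    };
}

var site = makeCachedGetter('p');
site.get({ p: 1, q: 2 });         // slow path: cache filled
site.get({ p: 3, q: 4 });         // same shape: fast path
console.log(site.slowPathHits()); // → 1
```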

      Register allocation
      Code generated by basic JITs spends a lot of time reading and writing memory: for code like x+y, the machine code first reads x, then reads y, adds them, and then writes the result to temporary storage. With 64-bit values, that's up to 6 memory accesses. A more advanced JIT, such as JägerMonkey, generates code that tries to hold most values in registers. JägerMonkey also does some related optimizations, like trying to avoid storing values at all when they are constant or just a copy of some other value.

    Here's what JägerMonkey does to our benchmark scores:

    That's more than 3x improvement on SunSpider and Kraken and more than 6x on V8!

    Fast Graphics: GPU-powered browsing
    For Firefox 4, we sped up how Firefox draws and composites web pages using the Graphics Processing Unit (GPU) in most modern computers.

    On Windows Vista and Windows 7, all web pages are hardware accelerated using Direct2D. This provides a great speedup for many complex web sites and demo pages.

    On Windows and Mac, Firefox uses 3D frameworks (Direct3D or OpenGL) to accelerate the composition of web page elements. This same technique is also used to accelerate the display of HTML5 video.

    Final take
    Fast, hardware-accelerated graphics combined with fast JavaScript means cutting-edge HTML5 games, demos, and apps run great in Firefox 4. You can see it on some of the sites we enjoyed making fast. There's plenty more to try in the Mozilla Labs Gaming entries and, of course, be sure to check out the Web O' Wonder.

  5. a quick note on JavaScript engine components

    There have been a bunch of posts about the JägerMonkey (JM) post that we made the other day, some of which get things subtly wrong about the pieces of technology that are being used as part of Mozilla’s JM work. So here’s the super-quick overview of what we’re using, what the various parts do and where they came from:

    1. SpiderMonkey. This is Mozilla’s core JavaScript interpreter. This engine takes raw JavaScript and turns it into an intermediate bytecode, which is then interpreted. SpiderMonkey was responsible for all JavaScript handling in Firefox 3 and earlier. We continue to make improvements to this engine, as it’s still the basis for a lot of the work that we did in Firefox 3.5, 3.6 and later releases as well.

    2. Tracing. Tracing was added before Firefox 3.5 and was responsible for much of the big jump that we made in performance. (Although some of that was because we also improved the underlying SpiderMonkey engine as well.)

    This is what we do to trace:

    1. Monitor interpreted JavaScript code during execution looking for code paths that are used more than once.
    2. When we find a piece of code that’s used more than once, optimize that code.
    3. Take that optimized representation and assemble it to machine code and execute it.
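    Step 1 above can be pictured with a toy hotness counter (a simplification: the real monitor works on loop headers and traces, not string ids):

```javascript
// Count executions of each code path; once a path runs at least `threshold`
// times it is considered "hot" and worth handing to the optimizer.
function makeHotPathMonitor(threshold) {
    var counts = {};
    return function record(pathId) {
        counts[pathId] = (counts[pathId] || 0) + 1;
        return counts[pathId] >= threshold;
    };
}

var isHot = makeHotPathMonitor(2);
console.log(isHot('inner-loop')); // → false: keep interpreting
console.log(isHot('inner-loop')); // → true: optimize and assemble to machine code
```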

    What we’ve found since Firefox 3.5 is that when we’re in full tracing mode, we’re really really fast. We’re slow when we have to “fall back” to SpiderMonkey and interpret + record.

    One difficult part of tracing is generating code that runs fast. This is done by a piece of code called Nanojit, which was originally part of the Tamarin project. Mozilla isn’t using most of Tamarin for two reasons: 1. we’re not shipping ECMAScript 4, and 2. the interpreted part of Tamarin was much slower than SpiderMonkey. For Firefox 3.5 we took the best part – Nanojit – and bolted it to the back of SpiderMonkey instead.

    Nanojit does two things: it takes a high-level representation of JavaScript and does optimization. It also includes an assembler to take that optimized representation and generate native code for machine-level execution.

    Mozilla and Adobe continue to collaborate on Nanojit. Adobe uses Nanojit as part of their ActionScript VM.

    3. Nitro Assembler. This is a piece of code that we’re taking from Apple’s version of WebKit that generates native code for execution. The Nitro Assembler is very different from Nanojit. While Nanojit takes a high-level representation, does optimization and then generates code, all the Nitro Assembler does is generate code. So it’s complex, low-level code, but it doesn’t do the same set of things that Nanojit does.

    We’re using the Nitro assembler (along with a lot of other new code) to basically build what everyone else has – compiled JavaScript – and then we’re going to do what we did with Firefox 3.5 – bolt tracing onto the back of that. So we’ll hopefully have the best of all worlds: SpiderMonkey generating native code to execute like the other VMs with the ability to go onto trace for tight inner loops for even more performance.

    I hope this helps to explain what bits of technology we’re using and how they fit into the overall picture of Firefox’s JS performance.

  6. audio player – HTML5 style

    Last week we featured a demo from Alistair MacDonald (@F1LT3R) where he showed how to animate SVG with Canvas and a bunch of free tools. This week he has another demo for us that shows how you can use the new audio element in Firefox 3.5 with some canvas and JS to build a nice-looking audio player.

    But what’s really interesting about this demo is not so much that it plays audio – lots of people have built audio players – but how it works. If you look at the source code for the page what you’ll find looks something like this:
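A hedged sketch of that markup (the element ids beyond `jai`, the file names, and the song titles here are stand-ins, not taken from the actual demo page):

```html
<div id="jai">
  <canvas width="400" height="80"></canvas>
  <ul>
    <li><audio src="song-one.ogg"></audio> Song One</li>
    <li><audio src="song-two.ogg"></audio> Song Two</li>
  </ul>
</div>
```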

    (The actual list has fallbacks and is more compact – cleaned up here for easier reading.)

    That’s right – the player above is just a simple HTML unordered list that happens to include audio elements and is styled with CSS. You’ll notice that if you right click on one of them that it has all the usual items – save as, bookmark this link, copy this link location, etc. You can even poke at it with Firebug.

    The JavaScript driver that Al has written will look for a <div> element with the jai ID and then look for any audio elements that are inside it. It then will draw the playback interface in the canvas at the top of the list. The playback interface is built with simple JS canvas calls and an SVG-derived font.

    Using this driver it’s super-easy to add an audio player to any web site by just defining a canvas and a list. Much like what we’ve seen on a lot of the web with the rise of useful libraries like jQuery, this library can add additional value to easily-defined markup. Another win for HTML5 and the library model.

    Al has a much larger write-up on the same page as the demo. If you haven’t read through it you should now.

    (Also? Al wrote the music himself. So awesome.)

  7. Performance with JavaScript String Objects

    This article takes a look at how JavaScript engines perform with primitive value Strings and Object Strings. It is a showcase of benchmarks related to the excellent article by Kiro Risk, The Wrapper Object. Before proceeding, I would suggest visiting Kiro’s page first as an introduction to this topic.

    The ECMAScript 5.1 Language Specification (PDF link) states at paragraph 4.3.18 about the String object:

    String object: member of the Object type that is an instance of the standard built-in String constructor

    NOTE A String object is created by using the String constructor in a new expression, supplying a String value as an argument.
    The resulting object has an internal property whose value is the String value. A String object can be coerced to a String value
    by calling the String constructor as a function (15.5.1).

    and David Flanagan’s great book “JavaScript: The Definitive Guide”, very meticulously describes the Wrapper Objects at section 3.6:

    Strings are not objects, though, so why do they have properties? Whenever you try to refer to a property of a string s, JavaScript converts the string value to an object as if by calling new String(s). [...] Once the property has been resolved, the newly created object is discarded. (Implementations are not required to actually create and discard this transient object: they must behave as if they do, however.)

    It is important to note the text in bold above. Basically, the different ways a new String object is created are implementation specific. As such, an obvious question one could ask is “since a primitive value String must be coerced to a String Object when trying to access a property, for example str.length, would it be faster if instead we had declared the variable as a String Object?”. In other words, could declaring a variable as a String Object, i.e. var str = new String("hello"), rather than as a primitive value String, i.e. var str = "hello", potentially save the JS engine from having to create a new String Object on the fly so as to access its properties?

    Those who deal with the implementation of ECMAScript standards to JS engines already know the answer, but it’s worth having a deeper look at the common suggestion “Do not create numbers or strings using the ‘new’ operator”.
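Before the benchmarks, it helps to see the observable differences between the two declarations (plain language facts, not benchmark results):

```javascript
// The two declarations under discussion:
var strprimitive = "hello";
var strobject = new String("hello");

var primitiveType = typeof strprimitive;        // "string"
var objectType = typeof strobject;              // "object"
var looseEqual = (strprimitive == strobject);   // true: the object is coerced
var strictEqual = (strprimitive === strobject); // false: different types
```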

    Our showcase and objective

    For our showcase, we will use mainly Firefox and Chrome; the results, though, would be similar if we chose any other web browser, as we are focusing not on a speed comparison between two different browser engines, but at a speed comparison between two different versions of the source code on each browser (one version having a primitive value string, and the other a String Object). In addition, we are interested in how the same cases compare in speed to subsequent versions of the same browser. The first sample of benchmarks was collected on the same machine, and then other machines with a different OS/hardware specs were added in order to validate the speed numbers.

    The scenario

    For the benchmarks, the case is rather simple; we declare two string variables, one as a primitive value string and the other as an Object String, both of which have the same value:

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

    and then we perform the same kind of tasks on them. (Notice that in the jsPerf pages, strprimitive = str1 and strobject = str2.)

    1. length property

      var i = strprimitive.length;
      var k = strobject.length;

    If we assume that during runtime the wrapper object created from the primitive value string strprimitive is treated the same as the object string strobject by the JavaScript engine in terms of performance, then we should expect to see the same latency while trying to access each variable’s length property. Yet, as we can see in the following bar chart, accessing the length property is a lot faster on the primitive value string strprimitive than on the object string strobject.

    (Primitive value string vs Wrapper Object String – length, on jsPerf)

    Actually, on Chrome 24.0.1285 calling strprimitive.length is 2.5x faster than calling strobject.length, and on Firefox 17 it is about 2x faster (while performing more operations per second). Consequently, we realize that the corresponding browser JavaScript engines apply some “short paths” to access the length property when dealing with primitive string values, with special code blocks for each case.

    In the SpiderMonkey JS engine, for example, the pseudo-code that deals with the “get property” operation looks something like the following:

      // direct check for the "length" property
      if (typeof(value) == "string" && property == "length") {
        return StringLength(value);
      }
      // generalized code form for other properties
      object = ToObject(value);
      return InternalGetProperty(object, property);

    Thus, when you request a property on a string primitive, and the property name is “length”, the engine immediately returns its length, avoiding the full property lookup as well as the temporary wrapper object creation. Unless we add a property/method to the String.prototype requesting |this|, like so:

      String.prototype.getThis = function () { return this; }

    then no wrapper object will be created when accessing the String.prototype methods, as for example String.prototype.valueOf(). Each JS engine has embedded similar optimizations in order to produce faster results.
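The wrapper creation that |this| forces can be observed directly (getThis comes from the snippet above; getThisStrict is our addition for contrast):

```javascript
// In non-strict code, |this| inside a prototype method is boxed, so a
// wrapper object is created; in strict mode it stays a primitive.
String.prototype.getThis = function () { return this; };
String.prototype.getThisStrict = function () { "use strict"; return this; };

var boxed = "hello".getThis();       // a String wrapper object (non-strict)
var unboxed = "hello".getThisStrict(); // the primitive itself
```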

    2. charAt() method

      var i = strprimitive.charAt(0);
      var k = strobject["0"];

    (Primitive value string vs Wrapper Object String – charAt(), on jsPerf)

    This benchmark clearly verifies the previous statement, as we can see that getting the value of the first string character in Firefox 20 is substantially faster with strprimitive than with strobject, roughly 70x faster. Similar results apply to other browsers as well, though at different speeds. Also, notice the differences between incremental Firefox versions; this is just another indicator of how small code variations can affect the JS engine’s speed for certain runtime calls.

    3. indexOf() method

      var i = strprimitive.indexOf("e");
      var k = strobject.indexOf("e");

    (Primitive value string vs Wrapper Object String – IndexOf(), on jsPerf)

    Similarly in this case, we can see that the primitive value string strprimitive achieves more operations per second than strobject. In addition, the JS engine differences between sequential browser versions produce a variety of measurements.

    4. match() method

    Since there are similar results here too, to save some space, you can click the source link to view the benchmark.

    (Primitive value string vs Wrapper Object String – match(), on jsPerf)

    5. replace() method

    (Primitive value string vs Wrapper Object String – replace(), on jsPerf)

    6. toUpperCase() method

    (Primitive value string vs Wrapper Object String – toUpperCase(), on jsPerf)

    7. valueOf() method

      var i = strprimitive.valueOf();
      var k = strobject.valueOf();

    At this point it starts to get more interesting. So, what happens when we try to call the most common method of a string, its valueOf()? It seems like most browsers have a mechanism to determine whether it’s a primitive value string or an Object String, thus using a much faster way to get its value; surprisingly enough, Firefox versions up to v20 seem to favour the Object String method call of strobject, with a 7x increased speed.

    (Primitive value string vs Wrapper Object String – valueOf(), on jsPerf)

    It’s also worth mentioning that Chrome 22.0.1229 also seems to have favoured the Object String, while in version 23.0.1271 a new way to get the content of primitive value strings was implemented.

    A simpler way to run this benchmark in your browser’s console is described in the comment of the jsperf page.
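In that spirit, a minimal console version might look like this (the function name and iteration count are ours, not from the jsPerf page; absolute numbers will vary per engine):

```javascript
// Time many repeated length lookups on a given string value.
function benchLength(str, iterations) {
  var start = Date.now();
  var sink = 0;
  for (var i = 0; i < iterations; i++) {
    sink += str.length;   // the property access we want to time
  }
  return { ms: Date.now() - start, sink: sink };
}

var primitiveRun = benchLength("Hello", 100000);
var objectRun = benchLength(new String("Hello"), 100000);
// Compare primitiveRun.ms and objectRun.ms in your console.
```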

    8. Adding two strings

      var i = strprimitive + " there";
      var k = strobject + " there";

    (Primitive string vs Wrapper Object String – get str value, on jsPerf)

    Let’s now try and add the two strings with a primitive value string. As the chart shows, both Firefox and Chrome present a 2.8x and 2x increased speed in favour of strprimitive, as compared with adding the Object string strobject with another string value.

    9. Adding two strings with valueOf()

      var i = strprimitive.valueOf() + " there";
      var k = strobject.valueOf() + " there";

    (Primitive string vs Wrapper Object String – str valueOf, on jsPerf)

    Here we can see again that Firefox favours strobject.valueOf(), since for strprimitive.valueOf() it moves up the inheritance tree and consequently creates a new wrapper object for strprimitive. The effect this chain of events has on performance can also be seen in the next case.

    10. for-in wrapper object

      var i = "";
      for (var temp in strprimitive) { i += strprimitive[temp]; }
      var k = "";
      for (var temp in strobject) { k += strobject[temp]; }

    This benchmark will incrementally construct the string’s value through a loop to another variable. In the for-in loop, the expression to be evaluated is normally an object, but if the expression is a primitive value, then this value gets coerced to its equivalent wrapper object. Of course, this is not a recommended method to get the value of a string, but it is one of the many ways a wrapper object can be created, and thus it is worth mentioning.
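Run standalone, the coercion is easy to observe (the string value is ours, chosen for illustration):

```javascript
// for-in over a primitive string coerces it to a wrapper object and
// enumerates its index properties ("0", "1", "2", ...).
var strprimitive = "Hi!";
var rebuilt = "";
for (var idx in strprimitive) {
  rebuilt += strprimitive[idx];
}
// rebuilt now holds the same value as strprimitive
```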

    (Primitive string vs Wrapper Object String – Properties, on jsPerf)

    As expected, Chrome seems to favour the primitive value string strprimitive, while Firefox and Safari seem to favour the object string strobject. In case this seems fairly typical by now, let’s move on to the last benchmark.

    11. Adding two strings with an Object String

      var str3 = new String(" there");
      var i = strprimitive + str3;
      var k = strobject + str3;

    (Primitive string vs Wrapper Object String – 2 str values, on jsPerf)

    In the previous examples, we have seen that Firefox versions offer better performance if our initial string is an Object String, like strobject, and thus it would seem normal to expect the same when adding strobject to another Object String, which is basically the same thing. It is worth noticing, though, that when adding a string to an Object String, it is actually considerably faster in Firefox if we use strprimitive instead of strobject. This proves once more how source code variations, such as a patch for a bug, can lead to different benchmark numbers.
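Whatever the operands, the result of string addition is always a primitive, which is easy to verify (variable names match the benchmark):

```javascript
var strprimitive = "Hello";
var strobject = new String("Hello");
var str3 = new String(" there");

// Addition coerces Object Strings to their primitive value.
var fromPrimitive = strprimitive + str3;
var fromObject = strobject + str3;
```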


    Based on the benchmarks described above, we have seen a number of ways in which subtle differences in our string declarations can produce a series of different performance results. It is recommended that you continue to declare your string variables as you normally do, unless there is a very specific reason for you to create instances of the String Object. Also, note that a browser’s overall performance, particularly when dealing with the DOM, is not only based on the page’s JS performance; there is a lot more in a browser than its JS engine.

    Feedback comments are much appreciated. Thanks :-)

  8. Old tricks for new browsers – a talk at jQuery UK 2012

    Last Friday around 300 developers went to Oxford, England to attend jQuery UK and learn about all that is hot and new about their favourite JavaScript library. Imagine their surprise when I went on stage to tell them that a lot of what jQuery is used for these days doesn’t need it. If you want to learn more about the talk itself, there is a detailed report, slides and the audio recording available.

    The point I was making is that libraries like jQuery were first and foremost there to give us a level playing field as developers. We should not have to know the quirks of every browser, and this is where using a library allows us to concentrate on the task at hand and not on how it will fail in 10-year-old browsers.

    jQuery’s revolutionary new way of looking at web design was based on two main things: accessing the document via CSS selectors rather than the unwieldy DOM methods and chaining of JavaScript commands. jQuery then continued to make event handling and Ajax interactions easier and implemented the Easing equations to allow for slick and beautiful animations.

    However, this simplicity came with a price: developers seem to forget a few very simple techniques that allow you to write terse and easy-to-understand JavaScript that doesn’t rely on jQuery. Amongst the most powerful of these are event delegation and assigning classes to parent elements, leaving the main work to CSS.

    Event delegation

    Event delegation means that instead of applying an event handler to each of the child elements in an element, you assign one handler to the parent element and let the browser do the rest for you. Events bubble up the DOM of a document and fire on the element you interacted with and on each of its parent elements. That way, all you have to do is compare the target of the event with the element you want to access. Say you have a to-do list in your document. All the HTML you need is:

    <ul id="todo">
      <li>Go round Mum's</li>
      <li>Get Liz back</li>
      <li>Sort life out!</li>
    </ul>

    In order to add event handlers to these list items, in jQuery beginners are tempted to do a $('#todo li').click(function(ev){...}); or – even worse – add a class to each list item and then access these. If you use event delegation all you need in JavaScript is:

    document.querySelector('#todo').addEventListener( 'click',
      function( ev ) {
        var t = ev.target;
        if ( t.tagName === 'LI' ) {
          alert( t + t.innerHTML );
          ev.preventDefault();
        }
    }, false);

    Newer browsers have querySelector and querySelectorAll methods (see support here) that give you access to DOM elements via CSS selectors – something we learned from jQuery. We use this here to access the to-do list. Then we apply an event listener for click to the list.

    We read out which element has been clicked with ev.target and compare its tagName to LI (this property is always uppercase). This means we will never execute the rest of the code when the user, for example, clicks on the list itself. We call preventDefault() to tell the browser not to do anything – we now take over.
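The decision logic in that handler can be pulled out into a pure function to make the target check explicit (handleTodoClick is our name for this helper, not from the talk):

```javascript
// Decide what a delegated click should do, based only on the event target.
function handleTodoClick(ev) {
  var t = ev.target;
  if (t.tagName === 'LI') {
    return t.innerHTML;   // a click on a list item: act on it
  }
  return null;            // a click on the list itself: ignore it
}
```

Feeding it stub event objects shows both branches without needing a DOM at all.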

    You can try this out in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of event delegation is that you can now add new items without ever having to re-assign handlers. As the main click handler is on the list, new items will automatically be added to the functionality. Try it out in this fiddle or embedded below:

    JSFiddle demo.

    Leaving styling and DOM traversal to CSS

    Another big use case of jQuery is to access a lot of elements at once and change their styling by manipulating their styles collection with the jQuery css() method. This is seemingly handy but is also annoying as you put styling information in your JavaScript. What if there is a rebranding later on? Where do people find the colours to change? It is much simpler to add a class to the element in question and leave the rest to CSS. If you think about it, a lot of the time we repeat the same CSS selectors in jQuery and in the style document. That seems redundant.

    Adding and removing classes in the past was a bit of a nightmare. The way to do it was via the className property of a DOM element, which contained a string. It was then up to you to find out whether a certain class name was in that string, and to remove and add classes by appending to the string or using replace() on it. Again, browsers learned from jQuery and now have a classList object (support here) that allows easy manipulation of CSS classes applied to elements. You have add(), remove(), toggle() and contains() to play with.
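For comparison, the old className juggling amounts to string manipulation like this (helper names are ours, for illustration):

```javascript
// Add a class to a className string unless it is already present.
function addClass(className, cls) {
  var classes = className.split(/\s+/).filter(Boolean);
  if (classes.indexOf(cls) === -1) {
    classes.push(cls);
  }
  return classes.join(' ');
}

// Remove every occurrence of a class from a className string.
function removeClass(className, cls) {
  return className.split(/\s+/).filter(function (c) {
    return c && c !== cls;
  }).join(' ');
}
```

With classList, both of these collapse to a single method call on the element.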

    This makes it dead easy to style a lot of elements and to single them out for different styling. Let’s say for example we have a content area and want to show one at a time. It is tempting to loop over the elements and do a lot of comparison, but all we really need is to assign classes and leave the rest to CSS. Say our content is a navigation pointing to articles. This works in all browsers:

    <h1>Profit plans</h1>
    <div id="nav">
      <a href="#step1">Step 1: Collect Underpants</a>
      <a href="#step3">Step 3: Profit</a>
    </div>
    <div id="content">
      <article id="step1">
        <h2>Step 1: Collect Underpants</h2>
        <p>Make sure Tweek doesn't expect anything, then steal underwear
           and bring it to the mine.</p>
        <footer><a href="#nav">Back to top</a></footer>
      </article>
      <article id="step3">
        <h2>Step 3: Profit</h2>
        <p>Yes, profit will come. Let's sing the underpants gnome song.</p>
        <footer><a href="#nav">Back to top</a></footer>
      </article>
    </div>

    Now in order to hide all the articles, all we do is assign a ‘js’ class to the body of the document and store the first link and first article in the content section in variables. We assign a class called ‘current’ to each of those.

    /* grab all the elements we need */
    var nav = document.querySelector( '#nav' ),
        content = document.querySelector( '#content' ),
    /* grab the first article and the first link */
        article = document.querySelector( '#content article' ),
        link = document.querySelector( '#nav a' );
    /* hide everything by applying a class called 'js' to the body */
    document.body.classList.add( 'js' );
    /* show the current article and link */
    article.classList.add( 'current' );
    link.classList.add( 'current' );

    Together with a simple CSS, this hides them all off screen:

    /* change content to be a content panel */
    .js #content {
      position: relative;
      overflow: hidden;
      min-height: 300px;
    }
    /* push all the articles up */
    .js #content article {
      position: absolute;
      top: -700px;
      left: 250px;
    }
    /* hide 'back to top' links */
    .js article footer {
      position: absolute;
      left: -20000px;
    }
    In this case we move the articles up. We also hide the “back to top” links as they are redundant when we hide and show the articles. To show and hide the articles all we need to do is assign a class called “current” to the one we want to show that overrides the original styling. In this case we move the article down again.

    /* keep the current article visible */
    .js #content article.current {
      top: 0;
    }

    In order to achieve that all we need to do is a simple event delegation on the navigation:

    /* event delegation for the navigation */
    nav.addEventListener( 'click', function( ev ) {
      var t = ev.target;
      if ( t.tagName === 'A' ) {
        /* remove old styles */
        link.classList.remove( 'current' );
        article.classList.remove( 'current' );
        /* get the new active link and article */
        link = t;
        article = document.querySelector( link.getAttribute( 'href' ) );
        /* show them by assigning the current class */
        link.classList.add( 'current' );
        article.classList.add( 'current' );
      }
      ev.preventDefault();
    }, false);

    The simplicity here lies in the fact that the links already point to the elements with these IDs. So all we need to do is read the href attribute of the link that was clicked.

    See the final result in this fiddle or embedded below.

    JSFiddle demo.

    Keeping the visuals in the CSS

    Mixed with CSS transitions or animations (support here), this can be made much smoother in a very simple way:

    .js #content article {
      position: absolute;
      top: -300px;
      left: 250px;
      -moz-transition: 1s;
      -webkit-transition: 1s;
      -ms-transition: 1s;
      -o-transition: 1s;
      transition: 1s;
    }

    The transition now simply goes smoothly in one second from the state without the ‘current’ class to the one with it. In our case, moving the article down. You can add more properties by editing the CSS – no need for more JavaScript. See the result in this fiddle or embedded below:

    JSFiddle demo.

    As we also toggle the current class on the link we can do more. It is simple to add visual extras like a “you are here” state by using CSS generated content with the :after selector (support here). That way you can add visual nice-to-haves without needing to generate HTML in JavaScript or resort to images.

    .js #nav a:hover:after, .js #nav a:focus:after, .js #nav a.current:after {
      content: '➭';
      position: absolute;
      right: 5px;
    }

    See the final result in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of this technique is that we keep all the look and feel in CSS and make it much easier to maintain. And by using CSS transitions and animations you also leverage hardware acceleration.

    Give them a go, please?

    All of these things work across the browsers we use these days, and with polyfills they can be made to work in old browsers, too. However, not everything needs to be applied to old browsers. As web developers we should look ahead and not cater to outdated technology. If the things I showed above fall back to server-side solutions or page reloads in IE6, nobody is going to be any the wiser. Let’s build escalator solutions – smooth when the tech works but still available as stairs when it doesn’t.


  9. HTML5 drag and drop in Firefox 3.5

    This post is from Les Orchard, who works on Mozilla’s web development team.


    Drag and drop is one of the most fundamental interactions afforded by graphical user interfaces. In one gesture, it allows users to pair the selection of an object with the execution of an action, often including a second object in the operation. It’s a simple yet powerful UI concept used to support copying, list reordering, deletion (à la the Trash / Recycle Bin), and even the creation of link relationships.

    Since it’s so fundamental, offering drag and drop in web applications has been a no-brainer ever since browsers first offered mouse events in DHTML. But, although mousedown, mousemove, and mouseup made it possible, the implementation has been limited to the bounds of the browser window. Additionally, since these events refer only to the object being dragged, there’s a challenge to find the subject of the drop when the interaction is completed.

    Of course, that doesn’t prevent most modern JavaScript frameworks from abstracting away most of the problems and throwing in some flourishes while they’re at it. But, wouldn’t it be nice if browsers offered first-class support for drag and drop, and maybe even extended it beyond the window sandbox?

    As it turns out, this very wish is answered by the HTML 5 specification section on new drag-and-drop events, and Firefox 3.5 includes an implementation of those events.

    If you want to jump straight to the code, I’ve put together some simple demos of the new events.

    I’ve even scratched an itch of my own and built the beginnings of an outline editor, where every draggable element is also a drop target—of which there could be dozens to hundreds in a complex document, something that gave me some minor hair-tearing moments in the past while trying to make do with plain old mouse events.

    And, all the above can be downloaded or cloned from a GitHub repository I’ve created especially for this article.

    The New Drag and Drop Events

    So, with no further ado, here are the new drag and drop events, in roughly the order you might expect to see them fired:

    dragstart: A drag has been initiated, with the dragged element as the event target.
    drag: The mouse has moved, with the dragged element as the event target.
    dragenter: The dragged element has been moved into a drop listener, with the drop listener element as the event target.
    dragover: The dragged element has been moved over a drop listener, with the drop listener element as the event target. Since the default behavior is to cancel drops, returning false or calling preventDefault() in the event handler indicates that a drop is allowed here.
    dragleave: The dragged element has been moved out of a drop listener, with the drop listener element as the event target.
    drop: The dragged element has been successfully dropped on a drop listener, with the drop listener element as the event target.
    dragend: A drag has been ended, successfully or not, with the dragged element as the event target.

    Like the mouse events of yore, listeners can be attached to elements using addEventListener() directly or by way of your favorite JS library.

    Consider the following example using jQuery, also available as a live demo:

        <div id="newschool">
            <div class="dragme">Drag me!</div>
            <div class="drophere">Drop here!</div>
        </div>
        <script type="text/javascript">
            $(document).ready(function() {
                $('#newschool .dragme')
                    .attr('draggable', 'true')
                    .bind('dragstart', function(ev) {
                        var dt = ev.originalEvent.dataTransfer;
                        dt.setData("Text", "Dropped in zone!");
                        return true;
                    })
                    .bind('dragend', function(ev) {
                        return false;
                    });
                $('#newschool .drophere')
                    .bind('dragenter', function(ev) {
                        return false;
                    })
                    .bind('dragleave', function(ev) {
                        return false;
                    })
                    .bind('dragover', function(ev) {
                        return false;
                    })
                    .bind('drop', function(ev) {
                        var dt = ev.originalEvent.dataTransfer;
                        alert(dt.getData("Text"));
                        return false;
                    });
            });
        </script>

    Thanks to the new events and jQuery, this example is both short and simple—but it packs in a lot of functionality, as the rest of this article will explain.

    Before moving on, there are at least three things about the above code that are worth mentioning:

    • Drop targets are enabled by virtue of having listeners for drop events. But, per the HTML 5 spec, draggable elements need an attribute of draggable="true", set either in markup or in JavaScript.

      Thus, $('#newschool .dragme').attr('draggable', 'true').

    • The original DOM event (as opposed to jQuery’s event wrapper) offers a property called dataTransfer. Beyond just manipulating elements, the new drag and drop events accommodate the transmission of user-definable data during the course of the interaction.
    • Since these are first-class events, you can apply the technique of Event Delegation.

      What’s that? Well, imagine you have a list of 1000 list items—as part of a deeply-nested outline document, for instance. Rather than needing to attach listeners or otherwise fiddle with all 1000 items, simply attach a listener to the parent node (e.g. the <ul> element) and all events from the children will propagate up to the single parent listener. As a bonus, all new child elements added after page load will enjoy the same benefits.

      Check out this demo, and the associated JS code to see more about these events and Event Delegation.

    Using dataTransfer

    As mentioned in the last section, the new drag and drop events let you send data along with a dragged element. But, it’s even better than that: Your drop targets can receive data transferred by content objects dragged into the window from other browser windows, and even other applications.

    Since the example is a bit longer, check out the live demo and associated code to get an idea of what’s possible with dataTransfer.

    In a nutshell, the stars of this show are the setData() and getData() methods of the dataTransfer property exposed by the Event object.

    The setData() method is typically called in the dragstart listener, loading dataTransfer up with one or more strings of content with associated recommended content types.

    For illustration, here’s a quick snippet from the example code:

        var dt = ev.originalEvent.dataTransfer;
        dt.setData('text/plain', $('#logo').parent().text());
        dt.setData('text/html', $('#logo').parent().html());
        dt.setData('text/uri-list', $('#logo')[0].src);

    On the other end, getData() allows you to query for content by type (e.g. text/html followed by text/plain). This, in turn, allows you to decide on acceptable content types at the time of the drop event or even during dragover to offer feedback for unacceptable types during the drag.

    Here’s another example from the receiving end of the example code:

        var dt = ev.originalEvent.dataTransfer;
        $('.content_url .content').text(dt.getData('text/uri-list'));
        $('.content_text .content').text(dt.getData('text/plain'));
        $('.content_html .content').html(dt.getData('text/html'));
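The type-keyed store behind those setData()/getData() calls behaves roughly like this toy stand-in (our own illustration, not how Firefox implements dataTransfer):

```javascript
// A minimal stand-in for dataTransfer's content store: strings keyed by
// recommended content type, with getData() returning "" for unset types.
function DataTransferStub() {
  this.store = {};
}
DataTransferStub.prototype.setData = function (type, value) {
  this.store[type] = String(value);
};
DataTransferStub.prototype.getData = function (type) {
  return Object.prototype.hasOwnProperty.call(this.store, type)
    ? this.store[type]
    : "";
};
```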

    Where dataTransfer really shines, though, is that it allows your drop targets to receive content from sources outside your defined draggable elements and even from outside the browser altogether. Firefox accepts such drags, and attempts to populate dataTransfer with appropriate content types extracted from the external object.

    Thus, you could select some text in a word processor window and drop it into one of your elements, and at least expect to find it available as text/plain content.

    You can also select content in another browser window, and expect to see text/html appear in your events. Check out the outline editing demo and see what happens when you try dragging various elements (eg. images, tables, and lists) and highlighted content from other windows onto the items there.

    Using Drag Feedback Images

    An important aspect of the drag and drop interaction is a representation of the thing being dragged. By default in Firefox, this is a “ghost” image of the dragged element itself. But, the dataTransfer property of the original Event object exposes the method setDragImage() for use in customizing this representation.

    There’s a live demo of this feature, as well as associated JS code available. The gist, however, is sketched out in these code snippets:

        var dt = ev.originalEvent.dataTransfer;
        dt.setDragImage( $('#feedback_image h2')[0], 0, 0);
        dt.setDragImage( $('#logo')[0], 32, 32);
        var canvas = document.createElement("canvas");
        canvas.width = canvas.height = 50;
        var ctx = canvas.getContext("2d");
        ctx.lineWidth = 8;
        ctx.moveTo(25, 0);
        ctx.lineTo(50, 50);
        ctx.lineTo(0, 50);
        ctx.lineTo(25, 0);
        ctx.stroke();
        dt.setDragImage(canvas, 25, 25);

    You can supply a DOM node as the first parameter to setDragImage(), which includes everything from text to images to <canvas> elements. The last two parameters indicate the left and top offsets at which the mouse pointer should appear within the image while dragging.

    For example, since the #logo image is 64×64, the parameters in the second setDragImage() call place the mouse pointer right in the center of the image. The first call, on the other hand, positions the feedback image so that the pointer rests in its upper-left corner.
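    That offset arithmetic is simple enough to capture in a small helper (hypothetical, not part of the demo code): given a feedback image's dimensions, it returns the offsets that center the pointer.

    ```javascript
    // Hypothetical helper: compute the offsets that center the mouse
    // pointer in a feedback image of the given size, for setDragImage().
    function centerOffsets(width, height) {
        return { x: Math.floor(width / 2), y: Math.floor(height / 2) };
    }

    // For a 64×64 image this yields x: 32, y: 32, matching the second
    // setDragImage() call above:
    // dt.setDragImage($('#logo')[0], centerOffsets(64, 64).x,
    //                                centerOffsets(64, 64).y);
    ```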

    Using Drop Effects

    As mentioned at the start of this article, the drag and drop interaction has been used to support actions such as copying, moving, and linking. Accordingly, the HTML5 specification accommodates these operations in the form of the effectAllowed and dropEffect properties exposed by the dataTransfer object.

    For a quick look, check out the live demo of this feature, as well as the associated JS code.

    The basic idea is that the dragstart event listener can set a value for effectAllowed like so:

        var dt = ev.originalEvent.dataTransfer;
        // Switching on the dragged element's id (assumed here):
        switch ( ev.target.id ) {
            case 'effectdrag0': dt.effectAllowed = 'copy'; break;
            case 'effectdrag1': dt.effectAllowed = 'move'; break;
            case 'effectdrag2': dt.effectAllowed = 'link'; break;
            case 'effectdrag3': dt.effectAllowed = 'all'; break;
            case 'effectdrag4': dt.effectAllowed = 'none'; break;
        }

    The choices available for this property include the following:

    none: no operation is permitted
    copy: copy only
    move: move only
    link: link only
    copyMove: copy or move only
    copyLink: copy or link only
    linkMove: link or move only
    all: copy, move, or link

    On the other end, the dragover event listener can set the value of the dropEffect property to indicate the effect expected on a successful drop. If that value does not match one of the operations permitted by effectAllowed, the drop is considered cancelled on completion.
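    That matching rule can be sketched as a small predicate. This is an assumption about how the permitted-operation check works, not code from the demo:

    ```javascript
    // Hypothetical predicate: is this dropEffect permitted under the given
    // effectAllowed value? Compound values such as 'copyMove' permit either
    // of the operations they name.
    function dropAllowed(effectAllowed, dropEffect) {
        if (effectAllowed === 'none' || dropEffect === 'none') { return false; }
        if (effectAllowed === 'all' || effectAllowed === 'uninitialized') { return true; }
        // 'copy' matches 'copy', 'copyMove', and 'copyLink', and so on.
        return effectAllowed.toLowerCase().indexOf(dropEffect) !== -1;
    }
    ```

    A dragover listener could consult such a predicate to decide whether to call preventDefault() and allow the drop to proceed.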

    In the live demo, you should be able to see that only elements with matching effects can be dropped into the appropriate drop zones. This is accomplished with code like the following:

        var dt = ev.originalEvent.dataTransfer;
        // Switching on the drop zone's id (assumed here):
        switch ( ev.target.id ) {
            case 'effectdrop0': dt.dropEffect = 'copy'; break;
            case 'effectdrop1': dt.dropEffect = 'move'; break;
            case 'effectdrop2': dt.dropEffect = 'link'; break;
            case 'effectdrop3': dt.dropEffect = 'all'; break;
            case 'effectdrop4': dt.dropEffect = 'none'; break;
        }

    Although the OS itself can provide some feedback, you can also use these properties to update your own visible feedback, both on the dragged element and on the drop zone itself.


    The new first-class drag and drop events in HTML5 and Firefox make supporting this form of UI interaction simple, concise, and powerful in the browser. But beyond the new simplicity of these events, the ability to transfer content between applications opens brand new avenues for web-based applications and collaboration with desktop software in general.