Articles by Arun Ranganathan

  1. Firefox 4: An early walk-through of IndexedDB

    Web developers already have localStorage, which is used for client-side storage of simple key-value pairs. This alone doesn’t address the needs of many web applications for structured storage and indexed data. Mozilla is working on a structured storage API with indexing support called IndexedDB, and we will have some test builds in the next few weeks. This can be compared to the WebDatabase API implemented by several browsers, which accepts a subset of the SQL dialect understood by SQLite. Mozilla has chosen not to implement WebDatabase, for various reasons discussed in this post.

    In order to compare IndexedDB and WebDatabase, we are going to show four examples that use most parts of the asynchronous APIs of each specification. The differences between SQL storage with tables (WebDatabase) and JavaScript object storage with indexes (IndexedDB) become pretty clear after reading the examples. The synchronous versions of these APIs are only available on worker threads. Since not all browsers currently implement worker threads, the synchronous APIs will not be discussed at this time. The IndexedDB code is based on a proposal that Mozilla has submitted to the W3C WebApps working group, which has gotten positive feedback so far. The code for both APIs does not include any error handling (for brevity), but production code should always have it!

    These examples are for the storage of a candy store’s sales of candy to customers, whom we’ll refer to as kids. Each entry in candySales represents a sale of a specified amount of candy to a kid, referencing entries in the candy and kids stores respectively.
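    For reference, the records discussed below have roughly this shape; the field values are invented for this sketch and are not from the examples themselves:

```javascript
// Illustrative record shapes for the kids, candy, and candySales
// stores/tables used throughout the four examples (values invented).
var kid = { id: 1, name: "Anna" };
var candy = { id: 1, name: "Gumdrops" };
var candySale = { kidId: 1, candyId: 1, date: "2010-01-04" };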

    Example 1 – Opening and Setting Up a Database

    This first example demonstrates how to open a database connection and create the tables or object stores if the version number is not correct. Upon opening the database, both examples check the version and create the necessary tables or object stores and then set the correct version number. WebDatabase is a bit stricter in how it handles versions by giving an error if the database version is not what the caller expects (this is specified by the second argument to openDatabase). IndexedDB simply lets the caller handle versioning as they see fit. Note that there is active discussion about how IndexedDB should handle version changes in the working group.

    WebDatabase

    var db = window.openDatabase("CandyDB", "",
                                 "My candy store database",
                                 1024);
    if (db.version != "1") {
      db.changeVersion(db.version, "1", function(tx) {
        // User's first visit.  Initialize database.
        var tables = [
          { name: "kids", columns: ["id INTEGER PRIMARY KEY",
                                    "name TEXT"]},
          { name: "candy", columns: ["id INTEGER PRIMARY KEY",
                                     "name TEXT"]},
          { name: "candySales", columns: ["kidId INTEGER",
                                          "candyId INTEGER",
                                          "date TEXT"]}
        ];
     
        for (var index = 0; index < tables.length; index++) {
          var table = tables[index];
          tx.executeSql("CREATE TABLE " + table.name + "(" +
                        table.columns.join(", ") + ");");
        }
      }, null, function() { loadData(db); });
    }
    else {
      // User has been here before, no initialization required.
      loadData(db);
    }

    IndexedDB

    var request = window.indexedDB.open("CandyDB",
                                        "My candy store database");
    request.onsuccess = function(event) {
      var db = event.result;
      if (db.version != "1") {
        // User's first visit, initialize database.
        var createdObjectStoreCount = 0;
        var objectStores = [
          { name: "kids", keyPath: "id", autoIncrement: true },
          { name: "candy", keyPath: "id", autoIncrement: true },
          { name: "candySales", keyPath: "", autoIncrement: true }
        ];
     
        function objectStoreCreated(event) {
          if (++createdObjectStoreCount == objectStores.length) {
            db.setVersion("1").onsuccess = function(event) {
              loadData(db);
            };
          }
        }
     
        for (var index = 0; index < objectStores.length; index++) {
          var params = objectStores[index];
          request = db.createObjectStore(params.name, params.keyPath,
                                         params.autoIncrement);
          request.onsuccess = objectStoreCreated;
        }
      }
      else {
        // User has been here before, no initialization required.
        loadData(db);
      }
    };

    Example 2 – Storing Kids in the Database

    This example stores several kids into the appropriate table or object store. This example demonstrates one of the risks that have to be dealt with when using WebDatabase: SQL injection attacks. In WebDatabase explicit transactions must be used, but in IndexedDB a transaction is provided automatically if only one object store is accessed. Transaction locking is per-object store in IndexedDB. Additionally, IndexedDB takes a JavaScript object to insert, whereas with WebDatabase callers must bind specific columns. In both cases you get the insertion id in the callback.

    WebDatabase

    var kids = [
      { name: "Anna" },
      { name: "Betty" },
      { name: "Christine" }
    ];
     
    var db = window.openDatabase("CandyDB", "1",
                                 "My candy store database",
                                 1024);
    db.transaction(function(tx) {
      kids.forEach(function(kid) {
        tx.executeSql("INSERT INTO kids (name) VALUES (?);", [kid.name],
                      function(tx, results) {
          document.getElementById("display").textContent =
              "Saved record for " + kid.name +
              " with id " + results.insertId;
        });
      });
    });

    IndexedDB

    var kids = [
      { name: "Anna" },
      { name: "Betty" },
      { name: "Christine" }
    ];
     
    var request = window.indexedDB.open("CandyDB",
                                        "My candy store database");
    request.onsuccess = function(event) {
      var objectStore = event.result.objectStore("kids");
      kids.forEach(function(kid) {
        objectStore.add(kid).onsuccess = function(event) {
          document.getElementById("display").textContent =
            "Saved record for " + kid.name + " with id " + event.result;
        };
      });
    };

    Example 3 – List All Kids

    This example lists all of the kids stored in the kids table or the kids object store. WebDatabase uses a result set object, which is passed to the provided callback only after all rows have been retrieved. IndexedDB, on the other hand, passes a cursor to the event handler as results are retrieved, so initial results should come back faster. While not shown in this example, you can also stop iterating with IndexedDB simply by not calling cursor.continue().

    WebDatabase

    var db = window.openDatabase("CandyDB", "1",
                                 "My candy store database",
                                 1024);
    db.readTransaction(function(tx) {
      // Enumerate the entire table.
      tx.executeSql("SELECT * FROM kids;", [], function(tx, results) {
        var rows = results.rows;
        for (var index = 0; index < rows.length; index++) {
          var item = rows.item(index);
          var element = document.createElement("div");
          element.textContent = item.name;
          document.getElementById("kidList").appendChild(element);
        }
      });
    });

    IndexedDB

    var request = window.indexedDB.open("CandyDB",
                                        "My candy store database");
    request.onsuccess = function(event) {
      // Enumerate the entire object store.
      request = event.result.objectStore("kids").openCursor();
      request.onsuccess = function(event) {
        var cursor = event.result;
        // If cursor is null then we've completed the enumeration.
        if (!cursor) {
          return;
        }
        var element = document.createElement("div");
        element.textContent = cursor.value.name;
        document.getElementById("kidList").appendChild(element);
        cursor.continue();
      };
    };
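    The early-termination point mentioned above can be sketched as a reusable handler. This is a sketch with invented names (makeLimitedCollector is not part of any API); the event.result shape follows the old Mozilla proposal used in these examples:

```javascript
// Returns an onsuccess-style handler that gathers at most `limit`
// cursor values into `results`. The walk ends simply because we
// stop calling cursor.continue().
function makeLimitedCollector(limit, results) {
  return function (event) {
    var cursor = event.result; // old proposal: result hangs off the event
    if (!cursor || results.length >= limit) {
      return; // not calling continue() terminates the enumeration
    }
    results.push(cursor.value);
    cursor.continue();
  };
}
```

    A request's onsuccess could be set to `makeLimitedCollector(10, firstTen)` to show only the first ten kids, for example.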

    Example 4 – List Kids Who Bought Candy

    This example lists all the kids, and how much candy each kid purchased. WebDatabase simply uses a LEFT JOIN query which makes this example very simple. IndexedDB does not currently have an API specified for doing a join between different object stores. As a result, the example opens a cursor to the kids object store and an object cursor on the kidId index on the candySales object store and performs the join manually.

    WebDatabase

    var db = window.openDatabase("CandyDB", "1",
                                 "My candy store database",
                                 1024);
    db.readTransaction(function(tx) {
      tx.executeSql("SELECT name, COUNT(candySales.kidId) AS count " +
                    "FROM kids " +
                    "LEFT JOIN candySales " +
                    "ON kids.id = candySales.kidId " +
                    "GROUP BY kids.id;", [],
                    function(tx, results) {
        var display = document.getElementById("purchaseList");
        var rows = results.rows;
        for (var index = 0; index < rows.length; index++) {
          var item = rows.item(index);
          display.textContent += ", " + item.name + " bought " +
                                 item.count + " pieces";
        }
      });
    });

    IndexedDB

    var candyEaters = [];
    function displayCandyEaters(event) {
      var display = document.getElementById("purchaseList");
      for (var i = 0; i < candyEaters.length; i++) {
        display.textContent += ", " + candyEaters[i].name + " bought " +
                               candyEaters[i].count + " pieces";
      }
    }
     
    var request = window.indexedDB.open("CandyDB",
                                        "My candy store database");
    request.onsuccess = function(event) {
      var db = event.result;
      var transaction = db.transaction(["kids", "candySales"]);
      transaction.oncomplete = displayCandyEaters;
     
      var kidCursor;
      var saleCursor;
      var salesLoaded = false;
      var count;
     
      var kidsStore = transaction.objectStore("kids");
      kidsStore.openCursor().onsuccess = function(event) {
        kidCursor = event.result;
        count = 0;
        attemptWalk();
      };
      var salesStore = transaction.objectStore("candySales");
      var kidIndex = salesStore.index("kidId");
      kidIndex.openObjectCursor().onsuccess = function(event) {
        saleCursor = event.result;
        salesLoaded = true;
        attemptWalk();
      };
      function attemptWalk() {
        if (!kidCursor || !salesLoaded)
          return;
     
        if (saleCursor && kidCursor.value.id == saleCursor.kidId) {
          count++;
          saleCursor.continue();
        }
        else {
          candyEaters.push({ name: kidCursor.value.name, count: count });
          kidCursor.continue();
        }
      }
    };
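    Stripped of the asynchronous cursor machinery, the walk above is essentially a merge join over two sorted sequences. The same counting logic over plain arrays looks like this (countSalesPerKid is a name invented for this sketch):

```javascript
// Merge-join counting: kids sorted by id, sales sorted by kidId,
// exactly as a primary-key cursor and a kidId index cursor would
// yield them. Kids with no sales get a count of 0 (like LEFT JOIN).
function countSalesPerKid(kids, sales) {
  var out = [];
  var s = 0;
  for (var k = 0; k < kids.length; k++) {
    var count = 0;
    while (s < sales.length && sales[s].kidId === kids[k].id) {
      count++;
      s++;
    }
    out.push({ name: kids[k].name, count: count });
  }
  return out;
}
```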

    IndexedDB generally simplifies the programming model for interacting with databases, and allows for a wide range of use cases. The working group is designing this API so it can be wrapped by JavaScript libraries; for instance, there’s plenty of room for a CouchDB-style API on top of our IndexedDB implementation. It would also be very possible to build a SQL-based API on top of IndexedDB (such as the WebDatabase API). Mozilla is eager to get developer feedback about IndexedDB, particularly since the specification has not been finalized yet. Feel free to leave a comment here expressing your thoughts, or leave anonymous feedback through Rypple.

  2. Beyond HTML5: Database APIs and the Road to IndexedDB

    IndexedDB is an evolving web standard for the storage of significant amounts of structured data in the browser and for high performance searches on this data using indexes. Mozilla has submitted substantial technical feedback on the specification, and we plan to implement it in Firefox 4. We spoke to prominent web developers about evolving an elegant structured storage API for the web. While versions of Safari, Chrome, and Opera support a technology called Web SQL Database, which uses SQL statements as string arguments passed to a JavaScript API, we think developer aesthetics are an important consideration, and that this is a particularly inelegant solution for client-side web applications. We brought developer feedback to the editor of the IndexedDB specification, and also spoke with Microsoft, who agree with us that IndexedDB is a good option for the web. With additional implementations from the Chrome team in the offing, we think it is worth explaining our design choices, and why we think IndexedDB is a better solution for the web than Web SQL Database.

    Web applications can already take advantage of localStorage and sessionStorage in IE 8+, Safari 4+, Chrome 4+, Opera 10.5+ and Firefox 2+ to store key-value pairs with a simple JavaScript API. The Web Storage standard (encompassing localStorage and sessionStorage), now widely implemented, is useful for storing smaller amounts of data, but less useful for storing larger amounts of structured data. While many server-side databases use SQL to programmatically store structured data and to meaningfully query it, on the client side, the use of SQL in a JavaScript API has been contentious.
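    The common workaround today is to serialize structured records to JSON before handing them to Web Storage, as in the sketch below. It works for small records, but offers no indexes or queries, which is exactly the gap IndexedDB targets. The storage parameter here is anything with getItem/setItem, so the snippet also runs against a stub outside the browser:

```javascript
// Store a structured record in a Web Storage-like object by
// serializing it to JSON; Web Storage itself only holds strings.
function saveRecord(storage, key, record) {
  storage.setItem(key, JSON.stringify(record));
}

// Retrieve and deserialize; returns null for a missing key.
function loadRecord(storage, key) {
  var raw = storage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}
```

    In a browser, `saveRecord(localStorage, "kid:1", { id: 1, name: "Anna" })` would persist the record, but finding "all kids named Anna" still means deserializing and scanning every entry by hand.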

    SQL? Which SQL?

    Many web developers are certainly familiar with SQL, since many developers touch just as much server-side code (e.g. PHP and database operations) as client-side code (e.g. JavaScript, CSS, and markup). However, despite the ubiquity SQL enjoys, there isn’t a single normative SQL standard that all implementations follow. In particular, SQLite supports most of SQL-92, with some notable omissions, and is what the WebDatabase API is based on. But SQLite itself isn’t a specification; it’s shipping software, and the best definition of the subset of SQL it supports is the SQLite manual. In order to really get Web SQL Database right, we’d have to first define a meaningful subset of SQL for web applications. Why define a whole other language, when more elegant solutions exist within JavaScript itself?

    The Benefits and Pitfalls of SQLite

    We think SQLite is an extremely useful technology for applications, and make it available for Firefox extensions and trusted code. We don’t think it is the right basis for an API exposed to general web content, not least of all because there isn’t a credible, widely accepted standard that subsets SQL in a useful way. Additionally, we don’t want changes to SQLite to affect the web later, and don’t think harnessing major browser releases (and a web standard) to SQLite is prudent. IndexedDB does not have this problem; even though our underlying implementation of IndexedDB may be based on SQLite, we keep developers insulated from changes to SQLite by exposing an API that isn’t based on SQLite’s supported syntax.

    Aesthetics and Web Developers

    Last year, we held a summit at the Mozilla campus to discuss storage on the web. We invited web developers to speak to us about a desirable structured storage API on the web. Many did express resigned acceptance of a SQLite-based API, since they had already experimented with releases of Web SQL Database in some browsers, and claimed that something in circulation was better than a collection of ideas. Yet, all voiced enthusiasm for better design choices, and for how a simpler model would make life easier for them. We watched as developers whiteboarded a simple BTree API that addressed their application storage needs, and this galvanized us to consider other options. We concluded that passing strings of SQL commands lacked the elegance of a “web native” JavaScript API, and started looking at alternatives. Along with Microsoft, we sent feedback about the IndexedDB proposal and actively became involved in the standardization effort.

    In another article, we compare IndexedDB with Web SQL Database, and note that the former provides much syntactic simplicity over the latter. IndexedDB leaves room for a third-party JavaScript library to straddle the underlying primitives with a BTree API, and we look forward to seeing initiatives like BrowserCouch built on top of IndexedDB. Intrepid web developers can even build a SQL API on top of IndexedDB. We’d particularly welcome an implementation of the Web SQL Database API on top of IndexedDB, since we think that this is technically feasible. Starting with a SQL-based API for use with browser primitives wasn’t the right first step, but certainly there’s room for SQL-based APIs on top of IndexedDB.

    We want to continue the discussion with web developers about storage on the web, since that helps us structure our thoughts about product features and future web standards. We look forward to seeing the next generation of web applications with support for high performance searches on indexed data, and to seeing web applications work even more robustly in “airplane mode.”

  3. Revitalizing Caching

    Apparently, there are only two hard problems in computer science: cache invalidation and the naming of things (or so Phil Karlton’s dictum goes). Earlier this month, we invited representatives of Twitter, Facebook, SproutCore, Palm’s webOS, Microsoft’s “Office On The Web”, Yahoo, and Google to talk to us about the former problem (amongst other things), though we also learned something about the latter.

    Caching is an important issue to get right on the web, not least of all because of the proliferation of web applications on mobile devices. The goals of our caching summit were to identify use cases that would help us move forward with caching and with HTTP request efficiency. How desirable was rolling up our sleeves to look at HTTP/1.1 Pipelining in Firefox, for instance? What else was needed at the HTTP layer? And was the vaunted HTML5 AppCache, implemented in Firefox 3.5 onwards, actually useful to developers? What else needed to be exposed to web applications, either within content or via additional headers?

    Developer feedback is invaluable, and is increasingly the basis of how we want to evolve the next integral pieces of the web platform. Web developers are one of our primary constituencies; going forward, we want them to help us prioritize what we should implement, and what we need to focus on with respect to web standards. We chose our attendees wisely; if any group of people could talk about web applications at scale, the current performance of the cache, and their wishlist for future browser caching behavior on the web platform, it was this group. And the feedback they gave us was copious and useful; our work is cut out for us. Notably, we’ve got a few actions we’re going to follow up on:

    • Increase the default size of our disk and memory caches. Firefox’s disk cache is currently set at 50MB, a small-ish number given the amount of disk space available on hardware currently (and although this limit can be increased using advanced preferences, few users actually change the default). This is low-hanging fruit for us to fix. An interesting open question is whether we should determine disk cache size heuristically, in lieu of choosing a new upper bound. Steve Souders, who attended our caching summit, blogs about cache sizes, as well as premature cache evictions.

    • Conduct a “Mozilla Test-Pilot” project to get more data about how exactly we cache resources currently. This is part of a larger question about updating our caching algorithm. Like other browsers, we use an optimization of the Least Recently Used (LRU) caching algorithm, called LRU-SP. Data that we would want to gather includes determining what the hit rate, mean, variance and distribution of cached resources are. What’s the average lifetime? How about different modes where our LRU-SP replacement policy doesn’t work well for certain apps, where big resources (such as an essential script file) may get eliminated before smaller ones (such as map tiles)? We’ll also have to research the role that anti-virus software plays in routinely flushing out the cache, leading to further occurrences of untimely eviction of relevant resources.

    • Explore prioritization of resources in the cache based on MIME type. For instance, allowing for JavaScript (text/javascript) to always get higher priority in terms of what gets pruned by our LRU-SP algorithm. A good follow-up for this would be to get Chrome, IE, Apple, and Opera to discuss this with us, and then document what we come up with as a MIME-type based “naive” prioritization. We also want to allow developers to set resource priority on their own, perhaps through a header. This is likely to be an ongoing discussion.

    • Really figure out what hampers the adoption of HTTP/1.1 Pipelining on the web, including data from proxies and how they purge. While Pipelining is enabled by default in Mobile Firefox builds (such as those running on the Nokia N900 and on Android devices), we have it turned off in desktop Firefox builds. We do this for a variety of reasons, not least of all the risk of performance slowdowns if one of the pipelined requests slows down the others. It’s clear from our discussion that many who attended our caching summit think pipelining will help their applications perform much better. The situation on the web now with respect to pipelining is a kind of hostage’s dilemma: of the main desktop browsers, nobody has turned on pipelining, for fear of a scenario that slows down performance (leading to that particular browser being accused of “being slow”). The developers who visited us threw down the proverbial gauntlet; at a minimum we’ve got to figure out what hamstrings the use of pipelining on the web, and determine what we can actually do to remove those obstacles.

    • Figure out how to evolve the HTML5 AppCache, which frankly hasn’t seen the adoption we initially expected. While we tend to view parts of HTML5 such as Cache Manifests and window.applicationCache as yet another performance tool (to ensure web applications rapidly load upon subsequent accesses), it is different from the general browser cache. What’s in a name, and why is naming something amongst the hardest problems in computer science? The use of the word “cache” to describe the parts of HTML5 that deal with offline web applications has confused some developers. What we’re calling the HTML5 AppCache was primarily conceived to enable offline web application use, but many applications (such as those built with SproutCore) treat it as an auxiliary cache, to ensure that applications have a rapid start-up time and generally perform well. Why, we were asked, should we have two things: a general purpose browser cache, and something else, uniquely for offline usage? On the one hand, the HTML5 AppCache allows web apps to act like native apps (enabling “rapid-launch icons” on some mobile handsets), perhaps even eventually integrating with native application launchers. On the other hand, the HTML5 AppCache’s separateness from the general cache may mean that we coin different APIs to work with the general cache. In general, simultaneously evolving multiple APIs with “cache” in the name may be confusing. But that’s why naming is amongst the hard problems, and that’s why we have to architect the next iteration mindful of the potential for both redundancy and confusion.

    • We’ve got a tracking bug in place with a bold moniker: “Improve HTTP Cache.” You’ll see the gamut of changes we’d like to introduce here, including benchmarking our cache against Chromium’s (and perhaps just using Chromium’s cache code, if we need to).
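    Two of the ideas above, size-aware eviction and MIME-type prioritization, can be sketched in miniature. Everything here is invented for illustration: LruCache is plain LRU with a byte budget (real LRU-SP additionally weighs entry size and reference counts), and the priority table is hypothetical, not a standardized ordering.

```javascript
// Minimal byte-budgeted LRU cache (illustrative only; not LRU-SP).
function LruCache(maxBytes) {
  this.maxBytes = maxBytes;
  this.usedBytes = 0;
  this.entries = new Map(); // Map iteration order == insertion order, oldest first
}

LruCache.prototype.get = function (key) {
  var entry = this.entries.get(key);
  if (!entry) return undefined;
  this.entries.delete(key);   // re-insert to mark as most recently used
  this.entries.set(key, entry);
  return entry.value;
};

LruCache.prototype.set = function (key, value, sizeBytes) {
  var old = this.entries.get(key);
  if (old) {
    this.usedBytes -= old.size;
    this.entries.delete(key);
  }
  this.entries.set(key, { value: value, size: sizeBytes });
  this.usedBytes += sizeBytes;
  // Evict least recently used entries until the budget fits again: one
  // large insertion can push out several small, still-useful entries.
  while (this.usedBytes > this.maxBytes) {
    var oldestKey = this.entries.keys().next().value;
    this.usedBytes -= this.entries.get(oldestKey).size;
    this.entries.delete(oldestKey);
  }
};

// A "naive" MIME-type prioritization: prune low-priority types first.
// The priority numbers here are invented for the sketch.
var mimePriority = { "text/javascript": 3, "text/css": 2, "image/png": 1 };

function evictionOrder(resources) {
  return resources.slice().sort(function (a, b) {
    return (mimePriority[a.type] || 0) - (mimePriority[b.type] || 0);
  });
}
```

    Running the LRU sketch with a 100-byte budget shows a single 90-byte image evicting both a 60-byte script and a 30-byte tile, which is precisely the behavior a MIME-aware or size-aware policy would try to avoid.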

      Caching is important, but difficult. It would be fair to describe most of the near-term evolution of the web that way, whether that is the introduction of an indexable database capability, streaming video, or new layout models within CSS. These evolutions won’t necessarily happen within a standards body or on a listserv, but rather through rapid prototyping and meaningful feedback. That’s why we have to talk to web developers to help us do the right thing, and that’s why we’ll keep organizing meet-ups such as the recent caching summit.

  4. WebGL Draft Released Today

    Even without a draft specification of WebGL in circulation, we’ve seen some promising 3D content using WebGL appear on the web, put together mainly through developer ingenuity and the fact that Firefox, Chromium, and WebKit are open source projects with early support for the technology. Today, the WebGL Working Group at Khronos released a provisional public draft of the WebGL specification, and we are very excited for what this means for the web.

    For one thing, it means more developers can get involved in the evolution of WebGL. There’s a public mailing list set up, so that you can engage directly with members of the WebGL Working Group, as well as a web forum. It’s important to note that the specification is not yet finalized. Participation from the web community is essential towards finalizing the specification, which we hope to do in the first quarter of 2010.

    It also means that there are implementations of the draft specification that you can begin to test. You can obtain a Firefox nightly that implements the WebGL draft specification, and can turn on WebGL support in that build by following these steps:

    • Type “about:config” in your browser’s URL bar
    • Do a search for “webgl” in the Filter field
    • Double-click the “enabled_for_all_sites” preference to change it to “true”

    Other browsers that support the draft WebGL specification are listed on the WebGL Wiki.

    Now of course, this is hardware-accelerated 3D graphics in an early beta, and for now requires an OpenGL 2.0-capable GPU and drivers. In particular, most flavors of Intel’s integrated GPUs will not work straight out of the box (such as the GMA900/GMA950 often found in laptops). Developers who build nightly builds of the browser can induce software rendering using Mesa, and should check out Vlad’s blog post for doing this on Windows. Caveat emptor: building software rendering using Mesa into a Firefox nightly is likely to yield slower performance, and is only for the intrepid.

    WebGL is a royalty-free, cross-platform API that brings OpenGL ES 2.0 to the web as a 3D drawing context within HTML5’s Canvas element, exposed as low-level interfaces within the Document Object Model.

    Developers familiar with the Canvas 2D context will recognize that WebGL is another context for Canvas:

    // get the canvas element in the DOM
    var canvas = document.getElementById("canvas3D");
    var gl = canvas.getContext("experimental-webgl");

    Note that until the specification is finalized, the context is called experimental-webgl.

    WebGL uses the OpenGL shading language, GLSL ES, and can be cleanly combined with other web content that is layered on top or underneath the 3D content. It is an emerging web standard, and is designed to be used with other parts of the web platform. The release of the draft specification is one step in bringing about a plugin free 3D API to the web, usable straight out of the box. People have already begun using it to create compelling libraries. Check out X3DOM, which is a JavaScript library using WebGL that allows users to author content in X3D. We expect developer ingenuity to surprise and awe us, as it always has.

  5. W3C FileAPI in Firefox 3.6

    Often, web applications will prompt the user to select a file, typically to upload to a server. Unless the web application makes use of a plugin, file selection occurs through an HTML input element, of the sort <input type="file"/>. Firefox 3.6 now supports much of the W3C File API, which specifies the ability to asynchronously read the selected file into memory, and perform operations on the file data within the web application (for example, to display a thumbnail preview of an image, before it is uploaded, or to look for ID3 tags within an MP3 file, or to look for EXIF data in JPEG files, all on the client side). This is a new API, and replaces the file API that was introduced in Firefox 3.

    It is important to note that even before the advent of the W3C File API draft (which only became a Working Draft in November 2009), Firefox 3 and later provided the ability to read files into memory synchronously, but that capability should be considered deprecated in favor of the new asynchronous File API implemented in Firefox 3.6. The deprecated API allowed you to access a file synchronously:

    // After obtaining a handle to a file
    // access the file data
    var dataURL = file.getAsDataURL();
    img.src = dataURL;

    While Firefox 3.6 will continue to support code usage of the sort above, it should be considered deprecated since it reads files synchronously on the main thread. For large files, this could result in blocking on the result of the read, which isn’t desirable. Moreover, the file object itself provides a method to read from it, rather than having a separate reader object. These considerations informed the technical direction of the new File API in Firefox 3.6 (and the direction of the specification). The rest of this article is about the newly introduced File API.

    Accessing file selections

    Firefox 3.6 supports multiple file selections on an input element, and returns all the files selected using the FileList interface. Previous versions of Firefox supported selecting only a single file with the input element. Additionally, the FileList interface is also exposed to the HTML5 Drag and Drop API as a property of the DataTransfer interface. Users can drag and drop multiple files to a drop target within a web page as well.

    The following HTML spawns the standard file picker, with which you can select multiple files:

    <input id="inputFiles" type="file" multiple />

    Note that if you don’t use the multiple attribute, you only enable single file selection.

    You can work with all the selected files obtained either through the file picker (using the input element) or through the DataTransfer object by iterating through the FileList:

    var files = document.getElementById("inputFiles").files;
     
    // or, for a drag event e:
    // var dt = e.dataTransfer; var files = dt.files
     
    for (var i = 0; i < files.length; i++) {
      var file = files[i];
      handleFile(file);
    }

    Properties of files

    Once you obtain a reference to an individually selected file from a FileList, you get a File object, which has name, type, and size properties. Continuing with the code snippet above:

    function handleFile(file) {
      // RegExp for the JPEG MIME type
      var imageType = /image\/jpeg/;

      // Check whether the file is a JPEG
      if (!file.type.match(imageType)) {
        return false;
      }
      // Check whether the picture exceeds the set limit
      if (file.size > maxSize) {
        alert("Choose a smaller photo!");
        return false;
      }
      // Add the file name to the page
      var picData = document.createTextNode(file.name);
      dataGrid.appendChild(picData);
      return true;
    }

    The size attribute is the file’s size, in bytes. The name attribute is the file’s name, without path information. The type attribute is an ASCII-encoded string in lower case representing the media type of the file, expressed as an RFC2046 MIME type. The type attribute in particular is useful in sniffing file type, as in the example above, where the script determines if the file in question is a JPEG file. If Firefox 3.6 cannot determine the file’s type, it will return the empty string.

    Reading Files

    Firefox 3.6 and beyond support the FileReader object to read file data asynchronously into memory, using event callbacks to mark progress. The object is instantiated in the standard way:

    var binaryReader = new FileReader();

    Event handler attributes are used to work with the result of the file read operation. For very large files, it is possible to watch for progress events as the file is being read into memory (using the onprogress event handler attribute to set the event handler function). This is useful in scenarios where the drives in question may not be local to the hardware, or if the file in question is particularly big.

    The FileReader object supports three methods to read files into memory. Each allows programmatic access to the file’s data in a different format, though in practice only one read method should be called on a given FileReader object:

    • filereader.readAsBinaryString(file); will asynchronously return a binary string in which each byte is represented by a character with a code in the range [0..255]. This is useful for binary manipulations of a file’s data, for example to look for ID3 tags in an MP3 file, or to look for EXIF data in a JPEG image.
    • filereader.readAsText(file, encoding); will asynchronously return a string in the format solicited by the encoding parameter (for example encoding = "UTF-8"). This is useful for working with a text file, for example to parse an XML file.
    • filereader.readAsDataURL(file); will asynchronously return a Data URL. Firefox 3.6 allows large URLs, and so this feature is particularly useful when a URL could help display media content in a web page, for example for image data, video data, or audio data.
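    To make the readAsDataURL result concrete: a data: URL packs the MIME type and (usually base64-encoded) bytes into a single string. This small helper (our own utility, not part of the File API) picks such a URL apart:

    ```javascript
    // A data: URL produced by readAsDataURL looks like:
    //   data:image/jpeg;base64,/9j/4AAQ...
    // This helper (our own, for illustration) splits it into its parts.
    function parseDataURL(url) {
      var match = /^data:([^;,]*)(;base64)?,(.*)$/.exec(url);
      if (!match) {
        return null;
      }
      return {
        mimeType: match[1] || "text/plain", // default per the data: URL scheme
        base64: !!match[2],
        data: match[3]
      };
    }

    var parsed = parseDataURL("data:image/jpeg;base64,/9j/4AAQ");
    // parsed.mimeType === "image/jpeg", parsed.base64 === true
    ```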

    An example helps tie this all together:

    if (files.length > 0) {
        if (!handleFile(files[0])) {
            invalid.style.visibility = "visible";
            invalid.msg = "Select a JPEG Image";
        }
    }

    var binaryReader = new FileReader();
    binaryReader.onload = function() {
        var exif = findEXIFInJPG(binaryReader.result);
        if (!exif) {
            // ...set up conditions for lack of data
        }
        else {
            // ...write out exif data
        }
    };

    binaryReader.onprogress = updateProgress;
    binaryReader.onerror = errorHandler;

    binaryReader.readAsBinaryString(files[0]);
     
    function updateProgress(evt){
       // use lengthComputable, loaded, and total on ProgressEvent
       if (evt.lengthComputable) {
              var loaded = (evt.loaded / evt.total);
              if (loaded < 1) {
                // update progress meter
                progMeter.style.width = (loaded * 200) + "px";
              }
       }
    }
     
    function errorHandler(evt) {
      if(evt.target.error.code == evt.target.error.NOT_FOUND_ERR) {
       alert("File Not Found!");
      }
    }

    In order to work with binary data, the charCodeAt function exposed on strings is particularly useful. For instance, a utility of this sort:

    function getByteAt(file, idx) {
        return file.charCodeAt(idx);
    }

    allows extraction of the Unicode value of the character at the given index.
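    Building on getByteAt, here is a sketch that checks whether a binary string starts with the JPEG Start Of Image marker (0xFF 0xD8). The data is simulated with String.fromCharCode so the logic can be tried outside the browser:

    ```javascript
    // Extract the byte at a given index from a readAsBinaryString result.
    function getByteAt(binaryString, idx) {
      return binaryString.charCodeAt(idx) & 0xFF;
    }

    // JPEG files begin with the SOI marker: 0xFF 0xD8.
    function looksLikeJPEG(binaryString) {
      return getByteAt(binaryString, 0) === 0xFF &&
             getByteAt(binaryString, 1) === 0xD8;
    }

    // Simulate binary data for testing:
    var fakeJPEG = String.fromCharCode(0xFF, 0xD8, 0xFF, 0xE0);
    looksLikeJPEG(fakeJPEG); // true
    ```

    The same pattern extends to scanning for EXIF markers deeper in the file, as the demo mentioned below does.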

    An example of similar code in action in Firefox 3.6, including use of the readAsDataURL method to render an image, as well as binary analysis of a JPEG for EXIF detection (using the readAsBinaryString method), can be found in Paul Rouget’s great demo of the File API.

    A word on the specification

    The existence of a W3C public working draft of the File API holds the promise of other browsers implementing it shortly. Firefox 3.6’s implementation is fairly complete, but is missing some of the technology mentioned in the specification. Notably, the urn feature on the File object isn’t yet implemented, and neither is the ability to extract byte-ranges of files using the slice method. A synchronous way to read files isn’t yet implemented as part of Worker Threads. These features will come in future versions of Firefox.

  6. video – more than just a tag

    This article is written by Paul Rouget, Mozilla contributor and purveyor of extraordinary Open Web demos.

    Starting with Firefox 3.5, you can embed a video in a web page like an image. This means video is now a part of the document, and finally, a first class citizen of the Open Web. Like all other elements, you can use it with CSS and JavaScript. Let’s see what this all means …

    The Basics

    First, you need a video to play. Firefox supports the Theora codec (see here to know all media formats supported by the audio and video elements).

    Add the video to your document:

    <video id="myVideo" src="myFile.ogv"/>

    You might need to add some “fallback” code if the browser doesn’t support the video tag. Just include some HTML (which could be a warning, or even some Flash) inside the video tag.

    <video id="myVideo" src="myFile.ogv">
    <strong>Your browser is not awesome enough!</strong>
    </video>

    Here’s some more information about the fallback mechanism.

    HTML Attributes

    You can find all the available attributes here.

    Some important attributes:

    • autoplay: The video will be played just after the page loads.
    • autobuffer: By default (without this attribute), the video file is not downloaded unless you click on the play button. Adding this attribute starts downloading the video just after the page loads.
    • controls: by default (without this attribute), the video doesn’t include any controls (play/pause button, volume, etc.). Use this attribute if you want the default controls.
    • height/width: The size of the video

    Example:

    <video id="myVideo" src="myFile.ogv" 
       autobuffer="true" controls="true"/>

    You don’t have to add the “true” value to some of these attributes in HTML5, but it’s neater to do so. If you’re not in an XML document, you can simply write:

    <video id="myVideo" src="myFile.ogv" autobuffer controls/>

    JavaScript API

    Like any other HTML element, you have access to the video element via the Document Object Model (DOM):

    var myVideo = document.getElementById("myVideo");

    Once you obtain a handle to the video element, you can use the JavaScript API for video.

    Here is a short list of some useful methods and properties (and see here for more of the DOM API for audio and video elements):

    • play() / pause(): Play and pause your video.
    • currentTime: The current playback time, in seconds. You can change this to seek.
    • duration: The duration of the video.
    • muted: Is the sound muted?
    • ended: Has the video ended?
    • paused: Is the video paused?
    • volume: To determine the volume, and to change it.

    Example:

    <button onclick="myVideo.play()">Play</button>
    <button onclick="myVideo.volume = 0.5">Set Volume</button>
    <button onclick="alert(myVideo.volume)">Volume?</button>
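    Since currentTime and duration are plain numbers in seconds, building a time display for custom controls is just arithmetic. A hypothetical helper (the function name is ours):

    ```javascript
    // Turn a time in seconds (like video.currentTime) into "m:ss" for display.
    function formatTime(seconds) {
      var mins = Math.floor(seconds / 60);
      var secs = Math.floor(seconds % 60);
      return mins + ":" + (secs < 10 ? "0" : "") + secs;
    }

    formatTime(0);    // "0:00"
    formatTime(75.4); // "1:15"
    ```

    A control bar would typically show formatTime(video.currentTime) next to formatTime(video.duration), refreshed from a timeupdate listener.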

    Events

    You know how to control a video (play/pause, seek, change the volume, etc.). You have almost everything you need to create your own controls. But you need some feedback from the video, and for that, let’s see the different events you can listen to:

    • canplay: The video is ready to play
    • canplaythrough: The video is ready to play without interruption (if the download rate doesn’t change)
    • load: The video is ready to play without interruption (the video has been downloaded entirely)
    • ended: The video just ended
    • play: The video just started playing
    • pause: The video has been paused
    • seeking: The video is seeking (it can take some seconds)
    • seeked: The seeking process just finished
    • timeupdate: While the video is playing, the currentTime is updated. Every time the currentTime is updated, timeupdate is fired.

    Here’s a full list of events.

    For example, you can follow the percentage of the video that has just been played:

    function init()
    {
      var video = document.getElementById("myVideo");
      var textbox = document.getElementById("sometext");
      video.addEventListener("timeupdate", function() {
        textbox.value = Math.round(100 * (video.currentTime / video.duration)) + "%";
      }, false);
    }
    <video id="myVideo" src="myFile.ogv" 
                autoplay="true" onplay="init()"/>
    <input id="sometext"/>

    Showing all this in action, here’s a nice open video player using the Video API.

    Now that you’re familiar with some of the broad concepts behind the Video API, let’s really delve into the video as a part of the Open Web, introducing video to CSS, SVG, and Canvas.

    CSS and SVG

    A video element is an HTML element. That means you can use CSS to style it.

    A simple example: using the CSS Image Border rule (a new CSS 3 feature introduced in Firefox 3.5). You can view how it works on the Mozilla Developer Wiki.

    And obviously, you can use it with the video tag:

     
    <video id="myVideo" src="myFile.ogv" 
    style="-moz-border-image: 
               url(tv-border.jpg) 25 31 37 31 stretch stretch; 
               border-width: 20px;"/>

    One of my demos uses this very trick.

    Since Firefox 3.5 provides some snazzy new CSS features, you can do some really fantastic things. Take a look at the infamous washing machine demo, in which I subject an esteemed colleague to some rotation.

    It uses a combination of CSS rules and SVG.

    Because the video element is like any other HTML element, you can add some HTML content over the video itself, like I do in this demo. As you can see, there is a <div> element on top of the video (position: absolute;).

    Time for a Break

    Well, we’ve just seen how far we can go with the video element, both how to control it and how to style it. That’s great, and it’s powerful. I strongly encourage you to read about the new web features available in Firefox 3.5, and to think about what you can do with such features and the video element.

    You can do so much with the power of the Open Web. You can compute the pixels of the video. You can, for example, try to find some shapes in the video, follow the shapes, and draw something as an attachment to these shapes. That’s what I do here! Let’s see how it actually works.

    Canvas & Video

    Another HTML5 element is canvas. With this element, you can draw bitmap data (see the canvas reference, and I strongly suggest this canvas overview). But something you might not know is that you can copy into a canvas the content of an <img/> element, a <canvas/> element, or a <video/> element.

    That’s a really important point for the video element. It gives you a way to play with the values of the pixels of the video frames.

    You can do a “screenshot” of the current frame of the video in a canvas.

    function screenshot() {
     var video = document.getElementById("myVideo");
     var canvas = document.getElementById("myCanvas");
     var ctx = canvas.getContext("2d");
     
     ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }
    <video id="myVideo" src="myFile.ogv" autoplay="true" width="600" height="400"/>
    <canvas id="myCanvas" width="600" height="400"/>
    <button onclick="screenshot()">Copy current frame to canvas</button>

    You can first apply a transformation to your canvas (see the documentation). You can also copy a thumbnail of the video.
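    For the thumbnail case, you mainly need a width and height that preserve the video’s aspect ratio before calling drawImage. A small sketch (the helper name is ours):

    ```javascript
    // Compute thumbnail dimensions that preserve the video's aspect ratio,
    // for use as: ctx.drawImage(video, 0, 0, size.width, size.height);
    function thumbnailSize(videoWidth, videoHeight, maxWidth) {
      var scale = maxWidth / videoWidth;
      return {
        width: maxWidth,
        height: Math.round(videoHeight * scale)
      };
    }

    thumbnailSize(600, 400, 150); // { width: 150, height: 100 }
    ```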

    If you draw every frame in a canvas, your canvas will look like a video element. And you can draw what you want in this canvas, after drawing the frame. That’s what I do in this demo.

    Once you have a video frame in your canvas, you can compute the values of the pixels.

    Some things you should know if you want to compute the pixels values of a frame:

    • you can’t use this mechanism with a video from another domain.
    • you can’t use this mechanism with a video from a file:/// URL (which would be useful during the development of your web application). But you can change this behavior for testing: in about:config, change the value of “security.fileuri.strict_origin_policy” to “false”. Be very careful, though: editing about:config is an expert feature!
    • There are two ways to display the result of your application on the top of the video:
      • use your canvas as a video (if you draw the frame every time), and then draw directly into the canvas
      • use a transparent canvas on the top of the video
    • the canvas element can be “display: none”
    • the video element can be “display: none”
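    As a concrete sketch of pixel computation, here is a tiny (hypothetical) function that averages the brightness of a frame, given the flat RGBA byte array that ctx.getImageData(0, 0, w, h).data would return:

    ```javascript
    // getImageData returns RGBA bytes in a flat array, four entries per pixel.
    // Average the luma of all pixels using standard R/G/B weights.
    function averageLuminance(rgba) {
      var total = 0;
      var pixels = rgba.length / 4;
      for (var i = 0; i < rgba.length; i += 4) {
        total += 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
      }
      return total / pixels;
    }

    // Two pixels, pure white and pure black: average is about 127.5
    averageLuminance([255, 255, 255, 255, 0, 0, 0, 255]);
    ```

    In a real page you would call this with the data of a canvas into which a video frame was just drawn, for instance to detect scene changes.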

    About JavaScript

    For the image processing, you will need to do a lot of computation. Here are some tricks:

    • copy your frame in a small canvas. If the canvas is three times smaller than the video, it means nine times fewer pixels to compute.
    • avoid recursion. In a recursion, the script engine doesn’t use the JIT optimization.
    • if you want to compute a distance between colors, use the L*a*b* colorspace.
    • if you want to find the center of an object, compute its centroid. See the “computeFrame” function that I use in this JavaScript snippet for my demo.
    • if the algorithm is really heavy, you can use a Worker thread, but take into account that you will need to send the content of the canvas to the thread. It’s a big array, and objects are automatically JSONified before being sent. It can take a while.
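    The centroid trick from the list above can be sketched in a few lines (the mask layout and names are our own, for illustration):

    ```javascript
    // Given a flat boolean mask of "object" pixels (e.g. pixels matching a
    // target color) for an image of the given width, average the coordinates
    // of the hits to find the object's centroid.
    function centroid(mask, width) {
      var sumX = 0, sumY = 0, count = 0;
      for (var i = 0; i < mask.length; i++) {
        if (mask[i]) {
          sumX += i % width;                // column of pixel i
          sumY += Math.floor(i / width);    // row of pixel i
          count++;
        }
      }
      if (count === 0) return null;         // no object found in this frame
      return { x: sumX / count, y: sumY / count };
    }

    // A 3x3 mask with hits at (0,0) and (2,2): centroid at (1,1)
    centroid([1, 0, 0, 0, 0, 0, 0, 0, 1], 3); // { x: 1, y: 1 }
    ```

    Drawing something at the returned coordinates, frame after frame, is essentially how a shape can be "followed" across the video.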

    Conclusion

    As you can see, you can do powerful things with the video element, the canvas element, CSS3, SVG and the new JavaScript engine. You have everything in your hands to create a completely new way to use Video on the web. It’s up to you now — upgrade the web!

  7. cross-site xmlhttprequest with CORS

    XMLHttpRequest is used within many Ajax libraries, but until the release of browsers such as Firefox 3.5 and Safari 4, it was only usable within the framework of the same-origin policy for JavaScript. This meant that a web application using XMLHttpRequest could only make HTTP requests to the domain it was loaded from, and not to other domains. Developers expressed the desire to safely evolve capabilities such as XMLHttpRequest to make cross-site requests, for better, safer mash-ups within web applications. The Cross-Origin Resource Sharing (CORS) specification consists of a simple header exchange between client and server, and is used by IE8’s proprietary XDomainRequest object as well as by XMLHttpRequest in browsers such as Firefox 3.5 and Safari 4 to make cross-site requests. These browsers make it possible to make asynchronous HTTP calls within script to other domains, provided the resources being retrieved are returned with the appropriate CORS headers.

    A Quick Overview of CORS

    Firefox 3.5 and Safari 4 implement the CORS specification, using XMLHttpRequest as an “API container” that sends and receives the appropriate headers on behalf of the web developer, thus allowing cross-site requests. IE8 implements part of the CORS specification, using XDomainRequest as a similar “API container” for CORS, enabling simple cross-site GET and POST requests. Notably, these browsers send the Origin header, which provides the scheme (http:// or https://) and the domain of the page that is making the cross-site request. Server developers have to ensure that they send the right headers back, notably the Access-Control-Allow-Origin header for the Origin in question (or “*” for all domains, if the resource is public).

    The CORS standard works by adding new HTTP headers that allow servers to serve resources to permitted origin domains. Browsers support these headers and enforce the restrictions they establish. Additionally, for HTTP request methods that can cause side-effects on user data (in particular, for HTTP methods other than GET, or for POST usage with certain MIME types), the specification mandates that browsers “preflight” the request, soliciting supported methods from the server with an HTTP OPTIONS request header, and then, upon “approval” from the server, sending the actual request with the actual HTTP request method. Servers can also notify clients whether “credentials” (including Cookies and HTTP Authentication data) should be sent with requests.
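    On the server side, the core decision is simply whether to echo back an allowed origin. A minimal sketch in JavaScript (the helper name and shape are ours, not tied to any particular server framework):

    ```javascript
    // Decide which CORS response headers to send for a given Origin request
    // header. allowedOrigins is either "*" (public resource) or a whitelist.
    function corsHeadersFor(origin, allowedOrigins) {
      if (allowedOrigins === "*") {
        // Public resource: any origin may read it.
        return { "Access-Control-Allow-Origin": "*" };
      }
      if (allowedOrigins.indexOf(origin) !== -1) {
        // Echo back the specific permitted origin.
        return { "Access-Control-Allow-Origin": origin };
      }
      // No CORS headers: the browser will refuse to expose the response.
      return {};
    }

    corsHeadersFor("http://foo.example", ["http://foo.example"]);
    // { "Access-Control-Allow-Origin": "http://foo.example" }
    ```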

    Capability Detection

    XMLHttpRequest can make cross-site requests in Firefox 3.5 and in Safari 4; cross-site requests in previous versions of these browsers will fail. It is always possible to try to initiate the cross-site request first, and if it fails, to conclude that the browser in question cannot handle cross-site requests from XMLHttpRequest (based on handling failure conditions or exceptions, e.g. not getting a 200 status code back). In Firefox 3.5 and Safari 4, a cross-site XMLHttpRequest will not successfully obtain the resource if the server doesn’t provide the appropriate CORS headers (notably the Access-Control-Allow-Origin header) back with the resource, although the request will go through. And in older browsers, an attempt to make a cross-site XMLHttpRequest will simply fail (a request won’t be sent at all).

    Both Safari 4 and Firefox 3.5 provide the withCredentials property on XMLHttpRequest in keeping with the emerging XMLHttpRequest Level 2 specification, and this can be used to detect an XMLHttpRequest object that implements CORS (and thus allows cross-site requests). This allows for a convenient “object detection” mechanism:

    if (window.XMLHttpRequest)
    {
        var request = new XMLHttpRequest();
        if (request.withCredentials !== undefined)
        {
          // make cross-site requests
        }
    }

    Alternatively, you can also use the “in” operator:

    if("withCredentials" in request)
    {
      // make cross-site requests
    }

    Thus, the withCredentials property can be used in the context of capability detection. We’ll discuss the use of “withCredentials” as a means to send Cookies and HTTP-Auth data to sites later on in this article.
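    Putting the two detection approaches together, a common wrapper pattern looks like this (the helper name is our own; the logic mirrors the snippets above):

    ```javascript
    // Pick the right cross-site request object for the running browser,
    // or return null if cross-site requests aren't supported at all.
    function createCORSRequest(method, url) {
      if (typeof XMLHttpRequest !== "undefined") {
        var xhr = new XMLHttpRequest();
        if ("withCredentials" in xhr) {
          // Firefox 3.5 / Safari 4: CORS-capable XMLHttpRequest
          xhr.open(method, url, true);
          return xhr;
        }
      }
      if (typeof XDomainRequest !== "undefined") {
        // IE8: simple GET/POST cross-site requests only
        var xdr = new XDomainRequest();
        xdr.open(method, url);
        return xdr;
      }
      return null; // no cross-site support in this browser
    }
    ```

    Callers then attach their handlers and call send() on whatever object comes back, or fall back gracefully on null.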

    “Simple” Requests using GET or POST

    IE8, Safari 4, and Firefox 3.5 allow simple GET and POST cross-site requests. “Simple” requests don’t set custom headers, and the request body only uses plain text (namely, the text/plain Content-Type).

    Let us assume the following code snippet is served from a page on site http://foo.example and is making a call to http://bar.other:

     
    var url = "http://bar.other/publicNotaries/";
    if (window.XMLHttpRequest)
    {
      var request = new XMLHttpRequest();
      if ("withCredentials" in request)
      {
       // Firefox 3.5 and Safari 4
       request.open('GET', url, true);
       request.onreadystatechange = handler;
       request.send();
      }
      else if (window.XDomainRequest)
      {
       // IE8
       var xdr = new XDomainRequest();
       xdr.open("get", url);
       xdr.send();

       // handle XDR responses -- not shown here :-)
      }
      else
      {
       // This version of XHR does not support CORS
       // Handle accordingly
      }
    }

    Firefox 3.5, IE8, and Safari 4 take care of sending and receiving the right headers. Here is the Simple Request example. It is also instructive to look at the headers sent back by the server. Notably, amongst the other request headers, the browser would send the following in order to enable the simple request above:

    GET /publicNotaries/ HTTP/1.1
    Referer: http://foo.example/notary-mashup/
    Origin: http://foo.example

    Note the use of the “Origin” HTTP header that is part of the CORS specification.

    And, amongst the other response headers, the server at http://bar.other would include:

    Access-Control-Allow-Origin: http://foo.example
    Content-Type: application/xml
    ......

    A more complete treatment of CORS and XMLHttpRequest can be found here, on the Mozilla Developer Wiki.

    “Preflighted” Request

    The CORS specification mandates that requests that use methods other than GET or POST, or that use custom headers, or request bodies other than text/plain, are preflighted. A preflighted request first sends an HTTP OPTIONS request to the resource on the other domain, to check whether the actual request is safe to send. This capability is currently not supported by IE8’s XDomainRequest object, but is supported by Firefox 3.5 and Safari 4 with XMLHttpRequest. The web developer does not need to worry about the mechanics of preflighting, since the implementation handles that.

    The code snippet below shows code from a web page on http://foo.example calling a resource on http://bar.other. For simplicity, we leave out the section on object and capability detection, since we’ve covered that already:

    var invocation = new XMLHttpRequest();
    var url = 'http://bar.other/resources/post-here/';
    var body = '<person><name>Arun</name></person>';
    function callOtherDomain(){
      if(invocation)
      {
        invocation.open('POST', url, true);
        invocation.setRequestHeader('X-PINGOTHER', 'pingpong');
        invocation.setRequestHeader('Content-Type', 'application/xml');
        invocation.onreadystatechange = handler;
        invocation.send(body);
      }
    }

    You can see this example in action here. Looking at the header exchange between client and server is really instructive. A more detailed treatment of this can be found on the Mozilla Developer Wiki.

    In this case, before Firefox 3.5 sends the actual request, it first sends an OPTIONS request:

    OPTIONS /resources/post-here/ HTTP/1.1
    Origin: http://foo.example
    Access-Control-Request-Method: POST
    Access-Control-Request-Headers: X-PINGOTHER

    Then, amongst the other response headers, the server responds with:

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: http://foo.example
    Access-Control-Allow-Methods: POST, GET, OPTIONS
    Access-Control-Allow-Headers: X-PINGOTHER
    Access-Control-Max-Age: 1728000

    At which point, the actual request is sent:

    POST /resources/post-here/ HTTP/1.1
    ...
    Content-Type: application/xml; charset=UTF-8
    X-PINGOTHER: pingpong
    ...

    Credentialed Requests

    By default, “credentials” such as Cookies and HTTP Auth information are not sent in cross-site requests using XMLHttpRequest. In order to send them, you have to set the withCredentials property of the XMLHttpRequest object. This is a new property introduced in Firefox 3.5 and Safari 4. IE8’s XDomainRequest object does not have this capability.

    Again, let us assume some JavaScript on a page on http://foo.example wishes to call a resource on http://bar.other and send Cookies with the request, such that the response is cognizant of Cookies the user may have acquired.

    var request = new XMLHttpRequest();
    var url = 'http://bar.other/resources/credentialed-content/';
    function callOtherDomain(){
      if(request)
      {
       request.open('GET', url, true);
       request.withCredentials = true;
       request.onreadystatechange = handler;
       request.send();
      }
    }
    Note that withCredentials is false (and NOT set) by default. The header exchange is similar to the case of a simple GET request, with the exception that now an HTTP Cookie header is sent with the request header. You can see this sample in action here.

    A Note on Security

    In general, data requested from a remote site should be treated as untrusted. Executing JavaScript code retrieved from a third-party site without first determining its validity is NOT recommended. Server administrators should be careful about leaking private data, and should judiciously determine that resources can be called in a cross-site manner.

  8. (r)evolution number 5

    We’ve just launched Firefox 3.5, and we’re incredibly proud. Naturally, we have engaged in plentiful Mozilla advocacy — this site is, amongst other things, a vehicle for showcasing the latest browser’s new capabilities. We like to think about this release as an upgrade for the whole World Wide Web, because of the new developer-facing features that have just been introduced into the web platform. When talking about some of the next generation standards, the appearance of the number “5” is almost uncanny — consider HTML5 and ECMAScript 5 (PDF). The recent (and very welcome) hype around HTML5 in the press is what motivates this article. Let’s take a step back, and consider some of Mozilla’s web advocacy in the context of events leading up to the release of Firefox 3.5.

    Standardization of many of these features often came after much spirited discussion, and we’re pleased to see the prominent placement of HTML5 as a key strategic initiative by major web development companies. Indeed, exciting new web applications hold a great deal of promise, and really showcase what the future of the web platform holds in store for aspiring developers. Many herald the triumphant arrival of the browser as the computer, an old theme that gets bolstered with the arrival of attractive HTML5 platform features that are implemented across Safari, Chrome, Opera, and of course, Firefox (with IE8 getting an honorable mention for having both some HTML5 features and some ECMAScript, 5th Edition features).

    Call it what you will — Web 5.0, Open Web 5th Generation (wince!), or, (R)evolution # 5, the future is now. But lest anyone forget, HTML5 is not a completed standard yet, as the W3C was quick to point out. The editor doesn’t anticipate completion till 2010. The path taken from the start of what is now called HTML5 to the present-day era of (very welcome) hype has been a long one, and Mozilla has been part of the journey from the very beginning.

    For one thing, we were there to point out, in no uncertain terms, that the W3C had perhaps lost its way. Exactly 5 summers ago (again, with that magic number!), it became evident that the W3C was no longer able to serve as sole custodian of the standards governing the open web of browser-based applications, so Mozilla, along with Opera, started the WHATWG. Of course, back then, we didn’t call it HTML5, and while Firefox itself made a splash in 2004, the steps taken towards standardization were definitive but tentative. Soon, other browser vendors joined us, and by the time the reconciliation with W3C occurred two years later, the innovations introduced into the web platform via the movement initiated by Mozilla had gained substantial momentum.

    The net result is a specification that is not yet complete called “HTML5” which is implemented piecemeal by most modern browsers. The features we choose to implement as an industry are in response to developers, and our modus operandi is (for the most part) in the open. Mozilla funds the HTML5 Validator, producing the first real HTML5 parser, which now drives W3C’s markup validation for HTML5. That parser has made its way back into Firefox. It’s important to note that capabilities that are of greatest interest (many of which are showcased on this blog) are not only developed within the HTML5 specification, but also as part of the W3C Geolocation WG, the Web Apps WG, and the CSS WG.

    The release of Firefox 3.5, along with updates to other modern browsers, seems to declare that HTML5 has arrived. But with the foresight that comes with having been around this for a while, we also know that we have a lot of work ahead of us. For one thing, we’ve got to finish HTML5, or at least publish a subset of it that we all agree is ready for implementation, soon. We’ve also got to ensure that accessibility serves as an important design principle in the emerging web platform, and resolve sticky differences here. Also, an open standard does not an open platform make, as debates about web fonts and audio/video codecs show. We’ve got a lot of work ahead of us, but for now, 5 years after the summer we started the ball rolling, we’re enjoying the hype around (R)evolution Number 5.

  9. better security and performance with native JSON

    The JavaScript Object Notation (JSON) mechanism for representing data has rapidly become an indispensable part of the web developer’s toolkit, allowing JavaScript applications to obtain and parse data intuitively, within scripts, with lightweight data encapsulation. Firefox 3.5 includes support for JSON natively by exposing a new primitive — window.JSON — to the top level object.

    Native “out of the box” support for JSON comes to the web courtesy of the ECMAScript 5th Edition (PDF link to specification), other aspects of which will also be supported by Firefox 3.5. Presently, native JSON is supported by Firefox 3.5 and IE8, with a strong likelihood of other browsers supporting it soon as well.

    Native JSON support has two advantages:

    1. Safety. Simply using eval to evaluate expressions returned as strings raises security issues. Also, the native JSON primitive works only with data: it can’t be used to parse strings containing method calls, and attempting to do so throws an error.
    2. Performance. Parsing JSON safely, using third-party scripts and libraries, is likely to be slower than native JSON parsing within the browser.
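    The safety point can be tried directly: JSON.parse accepts only data, so a string that smuggles in executable code fails to parse, where eval would have run it. (The sample strings below are ours.)

    ```javascript
    // Plain data parses fine:
    var good = '{"SafeSearch": "true", "url": "http://www.arunranga.com/i.jpg"}';
    var result = JSON.parse(good);
    // result.url === "http://www.arunranga.com/i.jpg"

    // A string that embeds code is not valid JSON, so parsing throws,
    // whereas eval would have executed the function expression:
    var evil = '{"url": (function(){ return "gotcha"; })()}';
    var threw = false;
    try {
      JSON.parse(evil);
    } catch (e) {
      threw = true; // SyntaxError: code is rejected, not run
    }
    ```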

    Let’s look at some examples.

    A JSON API for search results might look like this:

    /* 
    Assume that you obtained var data
    as a string from a server
    For convenience we display this search
    result on separate lines
    */
     
    var data = '{ "responseData":
    {"results": [
        {
            "SafeSearch":"true",
            "url":"http://www.arunranga.com/i.jpg"
        },
        {
            "SafeSearch":"false",
            "url":"http://www.badarunranga.com/evil.jpg"
        }
    ]}}';

    Such a resource could be returned by a simple HTTP GET request using a RESTful API.

    Using native JSON, you could do something like this:

    /* 
     Obtain a handle to the above JSON resource
     This is best and most conveniently done
     via third-party libraries that support native JSON
    */
     
    if (window.JSON) {
        var searchObj = JSON.parse(data);
        for (var i = 0; i < searchObj.responseData.results.length; i++) {
            if (searchObj.responseData.results[i].SafeSearch === "true") {
                var img = new Image();
                img.src = searchObj.responseData.results[i].url;
                // ... Insert image into DOM ...
            }
        }
    }

    You can also stringify the object back to the string:

    // Back to where we started from
     
    var data = JSON.stringify(searchObj);
     
    // data now holds the string we started with

    Of course, to really enable the power of JSON, you’ll want to retrieve JSON resources from different domains, via callback mechanisms like JSONP. Many web developers are unlikely to use the JSON primitive directly. Rather, they’ll use them within libraries such as Dojo and jQuery. These libraries allow for the retrieval and direct parsing of JSON resources from different domains, and add a great deal of syntactic sugar to the process of callbacks and DOM insertions.

    The native JSON primitive works with the popular json2.js library (which correctly detects if native JSON is enabled), so developers can seamlessly use JSON parsing on browsers that don’t support native JSON. As of this writing, Dojo and jQuery have committed to supporting native JSON.