FileAPI Articles

Sort by:


  1. Announcing the winners of the July 2013 Dev Derby!

    This past summer, some of the most passionate and creative web developers out there innovated with the File API in our July Dev Derby contest. After sorting through the entries, an all-star cast of former judges (Peter Lubbers, Eric Shepherd, and David Walsh) decided on three winners and two runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with the File API today.

    Dev Derby

    The Results



    Congratulations to these winners! As always, this represents only a small portion of the impressive work submitted to the contest. After you have finished playing with these winning demos, be sure to check out the rest. You will not be disappointed.

    The Dev Derby is currently on hiatus, but will be back before long. In the meantime, head over to the Demo Studio to see some general-interest demos and submit your own.

    Further reading

  2. Why no FileSystem API in Firefox?

    A question that I get asked a lot is why Firefox doesn’t support the FileSystem API. Usually, but not always, the people asking are referring specifically to the FileSystem and FileWriter specifications which Google is implementing in Chrome, and which it has proposed for standardization in the W3C.

    The answer is somewhat complex, and depends greatly on which capabilities of the above two specifications the person actually wants to use. The specifications are quite big and feature-rich, so it’s no surprise that people want to do very different things with them. This blog post is an attempt at giving my answer to this question and explaining why we haven’t implemented the above two specifications. But note that this post represents my personal opinion, intended to spur more conversation on this topic.

    As stated above, people asking for “FileSystem API support” in Firefox are actually often interested in solving many different problems. In my opinion most, but so far not all, of these problems have better solutions than the FileSystem API. So let me walk through them below.

    Storing resources locally

    Probably the most common thing that people want to do is to simply store a set of resources so that they are available without having to use the network. This is useful if you need quick access to the resources, or if you want to be able to access them even if the user is offline. Games are a very common type of application where this is needed. For example an enemy space ship might have a few associated images, as well as a couple of associated sounds, used when the enemy is moving around the screen and shooting. Today, people generally solve this by storing the images and sound files in a file system, and then store the file names of those files along with things like speed and firepower of the enemy.

    However it seems a bit non-optimal to me to have to store some data separated from the rest. Especially when there is a solution which can store both structured data as well as file data. IndexedDB treats file data just like any other type of data. You can write a File or a Blob into IndexedDB just like you can store strings, numbers and JavaScript objects. This is specified by the IndexedDB spec and so far implemented in both the Firefox and IE implementations of IndexedDB. Using this, you can store all information that you need in one place, and a single query to IndexedDB can return all the data you need. So for example, if you were building a web based email client, you could store an object like:

      {
        subject: "Hi there",
        body: "Hi Sven,\nHow are you doing...",
        attachments: [blob1, blob2, blob3]
      }
    Another advantage here is that there’s no need to make up file names for resources. Just store the File or Blob object. No name needed.
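    Storing such a record is a single put() against an object store. A minimal sketch, where `saveMessage`, the record shape, and the `store` parameter (an IDBObjectStore, or anything with a put() method) are illustrative rather than from a real app:

```javascript
// Hedged sketch: store an email message, attachments included,
// as one IndexedDB record. Blobs/Files go in just like strings
// or numbers -- no file names needed.
function saveMessage(store, subject, body, attachments) {
  return store.put({
    subject: subject,
    body: body,
    attachments: attachments
  });
}
```

    In a real page, `store` would come from an IndexedDB transaction, e.g. `db.transaction("messages", "readwrite").objectStore("messages")`.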

    In Firefox’s IndexedDB implementation (and I believe IE’s too) the files are transparently stored outside of the actual database. This means that performance of storing a file in IndexedDB is just as good as storing the file in a filesystem. It does not bloat the database itself slowing down other operations, and reading from the file means that the implementation just reads from an OS file, so it’s just as fast as a filesystem.

    Firefox’s IndexedDB implementation is even smart enough that if you store the same Blob multiple times in an IndexedDB database, it just creates one copy of the file. Writing further references to the same Blob just adds to an internal reference counter. This is completely transparent to the web page; the only thing it will notice is faster writes and less resource use. However, I’m not sure if IE does the same, so check there first before relying on it.

    Access pictures and music folders

    The second most common thing that people ask for related to file system APIs is to be able to access things like the user’s picture or music libraries. This is something that the FileSystem API submitted to W3C doesn’t actually provide, though many people seem to think it does. To satisfy that use case we have the DeviceStorage API. This API allows full file system capabilities for “user files”, i.e. files that aren’t specific to a website, but rather resources that are managed and owned by the user and that the user might want to access through several apps, such as photos and music. The DeviceStorage API is basically a simple file system API mostly optimized for these types of files.

    We’re still in the process of specifying and implementing this API. It’s available to test with in recent nightly builds, but so far isn’t enabled by default. The main problem with exposing this functionality to the web is security. You wouldn’t want just any website to read or modify your images. We could put up a prompt like we do with the GeoLocation API, but given that this API can potentially delete all your pictures from the last 10 years, we probably want something more. This is something we are actively working on. But it’s definitely the case that security is the hard part here, not implementing the low-level file operations.

    Low-level file manipulation

    A less common request is the ability to do low-level create, read, update and delete (CRUD) file operations. For example, being able to write 10 bytes in the middle of a 10MB file. This is not something IndexedDB supports right now; it only allows adding and removing whole files. This is supported by the FileWriter specification draft. However, I think this part of the API has some pretty fundamental problems. Specifically, there are no locking capabilities, so there is no way to do multiple file operations and be sure that another tab didn’t modify or read the file in between those operations. There is also no way to do an fsync, which means that you can’t implement ACID-type applications on top of FileWriter, such as a database.

    We have instead created an API with the same goal, but which has capabilities for locking a file and doing multiple operations. This is done in a way that ensures there is no risk that pages can forget to unlock a file, or that deadlocks can occur. The API also allows fsync operations, which should enable doing things like databases on top of FileHandle. Most importantly, though, the API is designed so that you shouldn’t need to nest asynchronous callbacks as much as with FileWriter. In other words, it should be easier for authors to use. You can read more about FileHandle at

    The filesystem URL scheme

    There is one more capability that exists in the FileSystem API not covered above. The specification introduces a new filesystem: URL scheme. Loading a filesystem: URL returns the contents of a file stored using the FileSystem API. This is a very cool feature for a couple of reasons. First of all, these URLs are predictable. Once you’ve stored a file in the file system, you always know which URL can be used to load from it. And the URL will continue to work as long as the file is stored in the file system, even if the web page is reloaded. Second, relative URLs work with the filesystem: scheme, so you can create links from one resource stored in the filesystem to another resource stored in the filesystem.
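    For illustration only (origin and path invented, general shape as in Chrome’s implementation), a file saved as enemy-ship.png in a site’s temporary filesystem would stay reachable at a URL like:

```
filesystem:
```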

    Firefox does support the blob: URL scheme, which does allow loading data from a Blob anywhere where URLs can be used. However it doesn’t have the above mentioned capabilities. This is something that I’d really like to find a solution for. If we can’t find a better solution, implementing the Google specifications is definitely an option.


    As always when talking about features to be added to the web platform, it’s important to talk about use cases and capabilities, and not jump directly to a particular solution. Most of the use cases that the FileSystem API aims to solve can be solved in other ways, and in my opinion often in better ways.

    This is why we haven’t prioritized implementing the FileSystem API, but instead focused on things like making our IndexedDB implementation awesome, and coming up with a good API for low-level file manipulation.

    Focusing on IndexedDB has also meant that we very soon have a good API for basic file storage available in 3 browsers: IE10, Firefox and Chrome.

    On a related note, we just fixed the last known spec compliance issues in our IndexedDB implementation, so Firefox 16 will ship with IndexedDB unprefixed!

    As always, we’re very interested in getting feedback from other people, especially from web developers. Do you think that FileSystem API is something we should prioritize? If so, why?

  3. Creating thumbnails with drag and drop and HTML5 canvas

    HTML5 Canvas is a very cool feature. While it may seem like just an opportunity to paint inside the browser with a very low-level API, you can use it to heavily convert and change image and video content in the document. Today, let’s take a quick look at how you can use Canvas and the FileReader API to create thumbnails from images dropped into a browser document.

    The final code is available on GitHub and you can see an online demo here. There is also a screencast available on YouTube:

    Step 1: Getting the files into the browser

    The first step to resize images in the browser is to somehow get them. For this, we can just add an element in the page and assign drag and drop event handlers to it:

    s.addEventListener( 'dragover', function ( evt ) {
      evt.preventDefault();
    }, false );
    s.addEventListener( 'drop', getfiles, false );

    Notice that we only prevent the default behaviour when we drag things over the element. This is to prevent the browser from just showing the images when we drag them in.

    The getfiles() function then does the hard work of reading all the files in and sending them on to the functions that do the resizing and image generation:

    function getfiles( ev ) {
      var files = ev.dataTransfer.files;
      if ( files.length > 0 ) {
        var i = files.length;
        while ( i-- ) {
          var file = files[ i ];
          if ( file.type.indexOf( 'image' ) === -1 ) { continue; }
          var reader = new FileReader();
          reader.readAsDataURL( file );
          reader.onload = function ( ev ) {
            var img = new Image();
            img.src =;
            img.onload = function () {
              imagetocanvas( this, thumbwidth, thumbheight, crop, background );
            };
          };
        }
      }
    }
    The drop event gives us a property called dataTransfer which contains a list of all the files that have been dropped. We make sure that there was at least one file in the drop and then iterate over them.

    If the file type was not an image (or in other words the type property of the file does not contain the string “image”) we don’t do anything with the file and continue the loop.

    If the file is an image we instantiate a new FileReader and tell it to read the file as a Data URL. When the reader successfully loaded the file it fires its onload handler.

    In this handler we create a new image and set its src attribute to the result of the file transfer. We then send this image to the imagetocanvas() function with the parameters to resize (in the demo these come from the form):

    function imagetocanvas( img, thumbwidth, thumbheight, crop, background ) {
      c.width = thumbwidth;
      c.height = thumbheight;
      var dimensions = resize( img.width, img.height, thumbwidth, thumbheight );
      if ( crop ) {
        c.width = dimensions.w;
        c.height = dimensions.h;
        dimensions.x = 0;
        dimensions.y = 0;
      }
      if ( background !== 'transparent' ) {
        cx.fillStyle = background;
        cx.fillRect( 0, 0, thumbwidth, thumbheight );
      }
      cx.drawImage(
        img, dimensions.x, dimensions.y, dimensions.w, dimensions.h
      );
      addtothumbslist( jpeg, quality );
    }

    This function gets the desired thumbnail size and resizes the canvas to these dimensions. This has the added benefit of wiping the canvas so that no old image data would be added to our thumbnail. We then resize the image to fit into the thumbnail using a resize() function. You can see for yourself what this one does in the source code, it just means the image gets resized to fit. The function returns an object with the width and the height of the new image and the x and y position where it should be positioned onto the canvas.
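    The demo’s actual resize() helper lives in the linked source; a plausible sketch of such a function, assuming it scales to fit while preserving the aspect ratio and centres the result (the w/h/x/y property names match how imagetocanvas() uses them; the centring is an assumption):

```javascript
// Hedged sketch: fit an (iw x ih) image inside a (tw x th) thumbnail,
// preserving aspect ratio, and centre it on the canvas.
function resize(iw, ih, tw, th) {
  var scale = Math.min(tw / iw, th / ih);
  var w = Math.round(iw * scale);
  var h = Math.round(ih * scale);
  return {
    w: w,
    h: h,
    x: Math.round((tw - w) / 2),
    y: Math.round((th - h) / 2)
  };
}
```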

    If we don’t want the full-size thumbnail but instead crop it we resize the canvas accordingly and reset x and y to 0.

    If the user requested a background we fill the canvas with the colour. After that we put the image on the canvas with the x and y coordinates and the new width and height.

    This takes care of creating a new thumbnail on the canvas, but we haven’t got it as an image in the document yet. To this end, we call addtothumbslist():

    function addtothumbslist( jpeg, quality ) {
      var thumb = new Image(),
          url = jpeg ? c.toDataURL( 'image/jpeg' , quality ) : c.toDataURL();
      thumb.src = url;
      thumb.title = Math.round( url.length / 1000 * 100 ) / 100 + ' KB';
      o.appendChild( thumb );
    }

    This one creates a new image and checks if the user wanted a JPG or PNG image to be created. PNG images tend to be better quality but also bigger in file size. If a JPG was requested we call the canvas’ toDataURL() method with two parameters: the requested JPEG MIME type and the quality of the image (ranging between 0 and 1, with 1 being best quality). If a PNG is wanted, we can just call toDataURL() without any parameters, as this is the default.

    We set the src of the image to the generated url string and add a title showing the size of the image in KB (rounded to two decimals). All that is left then is to add the thumb to the output element on the page.

    That’s it, you can now drag and drop images into the browser to generate thumbnails. Right now, we can only save them one at a time (or, if you have certain download add-ons, all at once). It would be fun to add Zip.js to the mix to offer them as a zip. I dare you! :)

    More reading:

  4. HTML5 APIs – Where No Man Has Gone Before! – Presentation at Gotham JS

    Last weekend I was in New York City to speak at the GothamJS conference and Mozilla also sponsored it. It was a nice event with about 200 attendees, taking place in the NYIT Auditorium on Broadway.

    The event was one-track with 8 speakers, and personally I always prefer a single track, since it makes follow-up discussions easier and everyone has seen and heard the same thing. The topics ranged broadly from script loaders and HTML5 at one end to voice-controlled telephony applications at the other.

    My presentation

    My talk was about HTML5 APIs in general, to give an introduction to them, but also to inspire people to try things out and to give feedback to both working groups and web browser vendors about current implementations.

    Slides can also be downloaded at SlideShare.

    Additionally to the APIs covered in my London Ajax Mobile Event presentation, I went through Web Sockets, File API, HTML5 video, canvas and WebGL. Also, if you are more interested in the <canvas> element, my colleague Rob Hawkes recently released the Foundation HTML5 Canvas book.

    What I especially liked talking about is services that help you take control of the problem of differing video codec support across web browsers, by storing various formats and then delivering the most suitable one depending on the web browser/device accessing it.

    Another favorite is Universal Subtitles, an excellent tool that lets anyone add subtitles to a video clip, empowering users with varying language skills to take part in a video and its content and share it with the world.

    As an option to make the content of a web site richer, there is Popcorn.js, which syncs key events in the playing video to whatever text or other information you want to present alongside it. To complement that, the Butter app is an editor for creating that kind of content syncing, currently in alpha.

    I also mentioned videograbber, for easily taking video screenshots in the web browser.

    Dev Derby <video> challenge

    I also want to take the opportunity to remind you that the Mozilla Dev Derby has a challenge for what you can accomplish with the <video> element, running until the end of July, so please submit an entry if you have a good idea!

  5. Aurora 6 is here

    What’s new in Aurora 6?

    The most notable additions to this new Aurora are the <progress> element, the window.matchMedia API, better APIs for binary data, and Server-Sent Events, as well as the return of WebSockets.

    Aurora 6 was published last week and can be downloaded from

    The <progress> element

    screenshot of progress bars as seen on windows
    This element can be used to give a visual cue of something in progress in the page. Native system progress bars are used, which means that users of Mac OS and Linux will see something different from what is pictured here.
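    For reference, minimal markup looks like this (the values are illustrative; leaving out the value attribute gives an indeterminate activity bar, and the element’s text content is a fallback for browsers without support):

```html
<!-- determinate progress bar at 30% -->
<progress max="100" value="30">30%</progress>

<!-- no value attribute: indeterminate activity bar -->
<progress max="100"></progress>
```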


    window.matchMedia

    window.matchMedia() is the JavaScript equivalent of CSS media queries.

    Binary data APIs improvements

    • XHR2’s responseType and response attributes let you get the response from an XHR as an efficient Blob or ArrayBuffer.
    • FileReader.readAsArrayBuffer() lets you read files and get the result as an ArrayBuffer.
    • BlobBuilder lets you concatenate multiple Blobs, as well as text and ArrayBuffers, into a single Blob.

    Expect to see even more improvements in this area in Aurora 7.

    Server Sent Events

    Server-Sent Events are a means for a server-side script to generate client-side events accompanied by data.

    Messages are generated on the server side with the text/event-stream MIME type and consist of a list of event data.

    data: data generated by the server
    data: this line will generate a second event

    WebSockets are back!

    WebSockets can be used to create an interactive communication channel between a browser and a server. They are already used to build “HTML5” chats, multiplayer games, and much, much more.
    Note that this API will be temporarily namespaced in anticipation of upcoming changes to the specification.

    Other Interesting Additions

    Learn about what’s new in Aurora 6’s user interface on and let us know what you think.

  6. How to resume a paused or broken file upload

    This is a guest post written by Simon Speich. Simon is a web developer, believer in web standards and a lover of Mozilla since Mozilla 0.8 (!).

    Today, Simon is experimenting with the File API and the new slice() method introduced in Firefox 4. Here is how he implements a resume-upload feature in a file uploader.

    Uploading a file is done with the XMLHttpRequest Level 2 object. It provides different methods and events to handle the request (e.g., sending data and monitoring its progress) and to handle the response (e.g., checking if the upload succeeded or an error occurred). For more information, read How to develop a HTML5 Image Uploader.

    Unfortunately, the XHR object does not provide a method to pause and resume an upload. But it is possible to implement that functionality by combining the new File API’s slice() method with the XHR’s abort() method. Let’s see how.

    Live demo

    You can check out the live fileUploader demo or download the JavaScript and PHP code from

    Pause and resume an upload

    The idea is to provide the user with a button to pause an upload in progress and to resume it again later. Pausing the request is simple. Just abort the request with the abort() method. Make sure your user interface doesn’t report this as an error.

    The harder part is resuming the upload, since the request was aborted and the connection closed. Instead of sending the whole file again, we use the blob’s mozSlice() method to first create a chunk containing the remaining part of the file. Then we create the new request, send the chunk, and append it to the part already saved on the server before the request was aborted.
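    The pause half of that can be sketched in a few lines; this is an illustrative wrapper (names invented), not code from the demo. The point is simply to remember that the abort was user-initiated so the abort/error handlers don’t report it as a failure:

```javascript
// Hedged sketch: wrap an in-flight XHR so that pausing is just
// an abort() plus a flag the UI can check in its error handlers.
function makeUploader(xhr) {
  var paused = false;
  return {
    pause: function () {
      paused = true;   // mark as user-initiated before aborting
      xhr.abort();
    },
    wasPaused: function () {
      return paused;
    }
  };
}
```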

    Creating a chunk

    The chunk can be created as:

    var chunk = file.mozSlice(start, end);

    All we need to know is where to start slicing, that is, the number of bytes that were already uploaded. The easiest way would be to save the ProgressEvent’s loaded property before we aborted the request. However, this number is not necessarily exactly the same as the number of bytes written on the server. The most reliable approach is to send an additional request to fetch the size of the partially written file from the server before we upload again. This information can then be used to slice the file and create the chunk.
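    The byte arithmetic is simple enough to state as a helper; a sketch (the function name is invented, and clamping to the file size guards against a server reporting more bytes than the file has):

```javascript
// Hedged sketch: given the number of bytes the server reports as
// already written, compute the byte range still left to upload.
// The result feeds straight into file.mozSlice(range.start, range.end).
function remainingRange(uploadedBytes, fileSize) {
  var start = Math.min(uploadedBytes, fileSize);
  return { start: start, end: fileSize, length: fileSize - start };
}
```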

    Summarizing the above chain of events

    (assuming an upload is already in progress):

    1. user pauses upload
    2. state of UI is set to paused
    3. uploading is aborted
    4. server stops writing file to disk
    5. user resumes upload
    6. state of UI is set to resuming
    7. get size of partially written file from server
    8. slice file into remaining part (chunk)
    9. upload chunk
    10. state of UI is set to uploading
    11. server appends data

    JavaScript code

    // Assuming that the request to fetch the already written bytes has just
    // taken place and xhr.result contains the response from the server.
    var start = xhr.result.numWrittenBytes;
    var chunk = file.mozSlice(start, file.size);
    var req = new XMLHttpRequest();'post', 'fnc.php?fnc=resume', true);
    req.setRequestHeader("Cache-Control", "no-cache");
    req.setRequestHeader("X-Requested-With", "XMLHttpRequest");
    req.setRequestHeader("X-File-Name",;
    req.setRequestHeader("X-File-Size", file.size);
    req.send(chunk);

    PHP code

    The only difference on the server side between handling a normal upload and a resumed upload is that in the latter case you need to append to your file instead of creating it.

    $headers = getallheaders();
    $protocol = $_SERVER['SERVER_PROTOCOL'];
    $fnc = isset($_GET['fnc']) ? $_GET['fnc'] : null;
    $file = new stdClass();
    $file->name = basename($headers['X-File-Name']);
    $file->size = $headers['X-File-Size'];
    // php://input bypasses the php.ini settings, so we have to limit the file size ourselves:
    $maxUpload = getBytes(ini_get('upload_max_filesize'));
    $maxPost = getBytes(ini_get('post_max_size'));
    $memoryLimit = getBytes(ini_get('memory_limit'));
    $limit = min($maxUpload, $maxPost, $memoryLimit);
    if ($headers['Content-Length'] > $limit) {
      header($protocol.' 403 Forbidden');
      exit('File size too big. Limit is '.$limit.' bytes.');
    }
    $file->content = file_get_contents('php://input');
    $flag = ($fnc == 'resume' ? FILE_APPEND : 0);
    file_put_contents($file->name, $file->content, $flag);

    function getBytes($val) {
      $val = trim($val);
      $last = strtolower($val[strlen($val) - 1]);
      switch ($last) {
        case 'g': $val *= 1024;
        case 'm': $val *= 1024;
        case 'k': $val *= 1024;
      }
      return $val;
    }


    The PHP code example above does not do any security checks. A user can send and write any type of file to your disk or append to or even overwrite any of your files. So make sure you take the appropriate security measures when enabling uploading on your website.

    Resume upload after an error

    The sequence of events for pause-and-resume can also be used to continue uploading after a network error. Instead of trying to upload the whole file again, get the already written file size from the server and slice the file into a new chunk first.

    Note about resuming a paused or broken file upload

    Appending the chunk to the file might create a corrupted file, since you don’t have control over what the server writes after the request is aborted — if it writes anything at all.

    Resume upload after a browser crash

    You can take the pause-and-resume functionality even a step further. It is possible (at least in theory) to even recover uploading after an unexpected closing or crashing of the browser. The problem is that after the browser was closed, the file object, which was read into memory, is lost. The user would have to re-pick or drag over the file again first, before being able to slice the file to resume the upload.

    Instead, you could use the new IndexedDB API and store the file before any uploading is done. Then after a browser crash, load the file from the database, slice into the remaining chunk and resume the upload.

  7. The shortest image uploader – ever!

    A couple of lines of JavaScript. That’s all you need.

    This is a very short Image Uploader, based on API. If you want to do more complex stuff (like resize, crop, drawing, colors, …) see my previous post.

    Back-story. I’ve been talking to's owner (Hi Alan!). He recently added Drag’n Drop support to his image sharing website. But also, Alan allows Cross-Domain XMLHttpRequest (thank you!). So basically, you can use his API to upload pictures to his website, from your HTML page, with no server-side code involved – at all.

    And here is an example of what you can do:

    (see the full working code on github – live version there )

    (also, you’ll need to understand FormData, see here)

    function upload(file) {
      // file is from an <input> tag or from Drag'n Drop
      // Is the file an image?
      if (!file || !file.type.match(/image.*/)) return;
      // It is!
      // Let's build a FormData object
      var fd = new FormData();
      fd.append("image", file); // Append the file
      fd.append("key", "6528448c258cff474ca9701c5bab6927");
      // Get your own key:
      // Create the XHR (Cross-Domain XHR FTW!!!)
      var xhr = new XMLHttpRequest();"POST", ""); // Boooom!
      xhr.onload = function() {
        // Big win!
        // The URL of the image is:
      }
      // Ok, I don't handle the errors. An exercise for the reader.
      // And now, we send the formdata
      xhr.send(fd);
    }

    That’s all :)

    Works on Chrome and Firefox 4 (Edit:) and Safari.

  8. How to develop a HTML5 Image Uploader

    HTML5 comes with a set of really awesome APIs. If you combine these APIs with the <canvas> element, you could create a super/modern/awesome Image Uploader. This article shows you how.

    All these tips work well in Firefox 4. I also describe some alternative ways to make sure it works on Webkit-based browsers. Most of these APIs don’t work in IE, but it’s quite easy to use a normal form as a fallback.

    Please let us know if you use one of these technologies in your project!

    Retrieve the images

    Drag and drop

    To upload files, you’ll need an <input type="file"> element. But you should also allow the user to drag and drop images from the desktop directly to your web page.

    I’ve written a detailed article about implementing drag-and-drop support for your web pages.

    Also, take a look at the Mozilla tutorial on drag-and-drop.

    Multiple input

    Allow the user the select several files to upload at the same time from the File Picker:

    <input type="file" multiple>

    Again, here is an article I’ve written about multiple file selection.

    Pre-process the files

    Use the File API

    (See the File API documentation for details.)

    From drag-and-drop or from the <input> element, you have a list of files ready to be used:

    // from an input element
    var filesToUpload = input.files;

    // from drag-and-drop
    function onDrop(e) {
      e.preventDefault();
      filesToUpload = e.dataTransfer.files;
    }

    Make sure these files are actually images:

    if (!file.type.match(/image.*/)) {
      // this file is not an image.
    }

    Show a thumbnail/preview

    There are two options here. You can either use a FileReader (from the File API) or use the new createObjectURL() method.


    Use createObjectURL()

    var img = document.createElement("img");
    img.src = window.URL.createObjectURL(file);

    Use a FileReader

    var img = document.createElement("img");
    var reader = new FileReader();
    reader.onload = function(e) { img.src =; };
    reader.readAsDataURL(file);

    Use a canvas

    Once you have the image preview in an <img> element, you can draw this image in a <canvas> element to pre-process the file.

    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);

    Resize the image

    People are used to uploading images straight from their camera. This gives high resolution and extremely heavy (several megabyte) files. Depending on the usage, you may want to resize such images. A super easy trick is to simply have a small canvas (800×600 for example) and to draw the image tag into this canvas. Of course, you’ll have to update the canvas dimensions to keep the ratio of the image.

    var MAX_WIDTH = 800;
    var MAX_HEIGHT = 600;
    var width = img.width;
    var height = img.height;
    if (width > height) {
      if (width > MAX_WIDTH) {
        height *= MAX_WIDTH / width;
        width = MAX_WIDTH;
      }
    } else {
      if (height > MAX_HEIGHT) {
        width *= MAX_HEIGHT / height;
        height = MAX_HEIGHT;
      }
    }
    canvas.width = width;
    canvas.height = height;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0, width, height);

    Edit the image

    Now, you have your image in a canvas. Basically, the possibilities are infinite. Let’s say you want to apply a sepia filter:

    var imgData = ctx.createImageData(width, height);
    var data =;
    var pixels = ctx.getImageData(0, 0, width, height);
    for (var i = 0, ii =; i < ii; i += 4) {
      var r =[i + 0];
      var g =[i + 1];
      var b =[i + 2];
      data[i + 0] = (r * .393) + (g * .769) + (b * .189);
      data[i + 1] = (r * .349) + (g * .686) + (b * .168);
      data[i + 2] = (r * .272) + (g * .534) + (b * .131);
      data[i + 3] = 255;
    }
    ctx.putImageData(imgData, 0, 0);

    Upload with XMLHttpRequest

    Now that you have loaded the images on the client, eventually you want to send them to the server.

    How to send a canvas

    Again, you have two options. You can convert the canvas to a data URL or (in Firefox) create a file from the canvas.


    Convert the canvas to a data URL

    var dataurl = canvas.toDataURL("image/png");

    Create a file from the canvas

    var file = canvas.mozGetAsFile("foo.png");

    Atomic upload

    Allow the user to upload just one file or all the files at the same time.

    Show progress of the upload

    Use the upload events to create a progress bar:

    xhr.upload.addEventListener("progress", function(e) {
      if (e.lengthComputable) {
        var percentage = Math.round((e.loaded * 100) /;
        // do something
      }
    }, false);

    Use FormData

    You probably don’t want to just upload the file (which could be easily done via: xhr.send(file)) but add side information (like a key and a name).

    In that case, you’ll need to create a multipart/form-data request via a FormData object. (See Firefox 4: easier JS form handling with FormData.)

    var fd = new FormData();
    fd.append("name", "paul");
    fd.append("image", canvas.mozGetAsFile("foo.png"));
    fd.append("key", "××××××××××××");
    var xhr = new XMLHttpRequest();"POST", "");
    xhr.send(fd);

    Open your API

    Maybe you want to allow other websites to use your service.

    Allow cross-domain requests

    By default, your API is only reachable by requests coming from your own domain. If you want to allow other people to use your API, allow cross-origin XHR by sending the right header in your HTTP response:

    Access-Control-Allow-Origin: *

    You can also allow just a pre-defined list of domains.

    Read about Cross-Origin Resource Sharing.
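
    Picking the header value against a pre-defined list can be sketched like this (the origins are made up for illustration; the server echoes the matching origin back in Access-Control-Allow-Origin, or sends nothing):

    ```javascript
    // Sketch: choose the Access-Control-Allow-Origin value for a request.
    // The origins listed here are hypothetical examples.
    var allowedOrigins = ["", ""];

    function corsOriginFor(requestOrigin) {
      return allowedOrigins.indexOf(requestOrigin) !== -1 ? requestOrigin : null;
    }
    ```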


    (Thanks to Daniel Goodwin for this tip.)

    You could also let other pages use your API through postMessage, by listening for message events:

    window.addEventListener("message", function(e) {
        // retrieve parameters from
        var key =;
        var name =;
        var dataurl = e.data.dataurl;
        // Upload
        // Once the upload is done, you can send a postMessage back to the
        // original window with the URL of the uploaded image
    }, false);
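
    Because any page can post a message, it is worth validating the payload (and checking e.origin) before starting an upload. A minimal sketch, mirroring the key/name/dataurl fields above:

    ```javascript
    // Sketch: validate an incoming postMessage payload before uploading.
    function parseUploadMessage(data) {
      if (!data || typeof data.key !== "string" || typeof data.dataurl !== "string") {
        return null; // reject malformed messages
      }
      return {
        key: data.key,
        name: typeof === "string" ? : "untitled",
        dataurl: data.dataurl
      };
    }
    ```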

    That’s all. If you have any other tips to share, feel free to drop a comment.

    Enjoy ;)

  9. Firefox 4 – FormData and the new File.url object

    This is a guest post from Jonas Sicking, who does much of the work inside of Gecko on content facing features. He covers FormData, which we’ve talked about before, but shows how it can connect to an important part of the File API we’ve added for Firefox 4: File.url.

    In Firefox 4 we’re continuing to add support for easier and better file handling. Two features that are available in Firefox 4 Beta 1 are File.url and FormData. In this post I’ll give a short introduction to both of them.

    Starting with Firefox 3.6 we supported a standardized way of reading files using the FileReader object. This object allowed you to read the contents of a file into memory to analyze its content or display the contents to the user. For example, to display a preview of an image to a user, you could use the following script:

    var reader = new FileReader();
    reader.onload = function() {
      previewImage.src = reader.result;
    };
    reader.readAsDataURL(myFile);
    There are two unfortunate things to note here. First of all, reader.result is a data url which contains the whole contents of the file, i.e. the full file contents are kept in memory. Not only that: data urls are often base64 encoded, and each base64 encoded character is stored as a javascript character, which generally uses 2 bytes of memory. The result is that if the above code is used to read a 10MB image file, reader.result is a 26.7MB string.
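
    The 26.7MB figure follows directly from base64 expansion (every 3 bytes become 4 characters) multiplied by 2 bytes per javascript character; a back-of-the-envelope sketch:

    ```javascript
    // Rough arithmetic behind the figure above (ignores the data url
    // prefix and any engine-specific string overhead).
    function dataUrlMemoryBytes(fileBytes) {
      var base64Chars = Math.ceil(fileBytes / 3) * 4; // 3 bytes -> 4 chars
      return base64Chars * 2;                         // ~2 bytes per character
    }
    ```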

    The other unfortunate thing is that the above code is somewhat complicated since it needs to use asynchronous events to read from disk.

    In Firefox 4 Beta 1 you can instead use the following code:

    previewImage.src = myFile.url;

    This uses the File.url property defined by the File API specification. The property returns a short url, about 40 characters long. You generally won’t have to care about its contents, but for the interested, it contains a randomly generated identifier prefixed by a special scheme.

    This url can then be used anywhere generic urls are used, and reading from that url reads directly from the file. The example above makes the image element read directly from the file and display the resulting image to the user. The load works just like loading from an http url: normal ‘load’ and ‘error’ events are fired as appropriate.

    You can also display HTML files by using an <iframe> and setting its src to the value returned by File.url. However, watch out: relative urls in the HTML file won’t work, as they are resolved against the generated url returned from File.url. This is intentional, as the user might have granted access only to the HTML file, and not to the image files.

    Other places where this URL can be useful are CSS background images, to set the background of an element to use a local file, or even reading from the url using XMLHttpRequest if you have existing code that uses XMLHttpRequest and which you don’t want to convert to use FileReader.

    The other feature that we are supporting in Firefox 4 Beta 1 is the FormData object. This object is useful if you have existing server infrastructure for receiving files which uses multipart/form-data encoding.

    In Firefox 3.6, sending a file to a server using multipart/form-data encoding with XMLHttpRequest required a bit of manual work. You had to use a FileReader to read the contents of the file into memory, then manually multipart/form-data encode it, and finally send it to the server. This required more code, and it meant the whole file contents had to be read into memory.

    In Firefox 4, you’ll be able to use the FormData object from the XMLHttpRequest Level 2 specification. This allows the following clean code:

    var fd = new FormData();
    fd.append("fileField", myFile);
    var xhr = new XMLHttpRequest();"POST", "file_handler.php");
    xhr.send(fd);

    This will automatically multipart/form-data encode the file and send it to the server. The contents of the file are read in small chunks and thus don’t use any significant amount of memory. It will send the same contents as a form with the following markup:

    <form enctype="multipart/form-data" method="post">
      <input type="file" name="fileField">
    </form>

    If you want to send multiple files, simply call fd.append for each file you want to submit and the files will all be sent in a single request. You can of course still use the normal progress events, both for upload and download progress, that XMLHttpRequest always supplies.

    However, FormData has another nice feature: you can also send normal, non-file, multipart/form-data values. For example:

    var fd = new FormData();
    fd.append("author", "Jonas Sicking");
    fd.append("name", "New File APIs");
    fd.append("attachment1", file1);
    fd.append("attachment2", file2);
    var xhr = new XMLHttpRequest();"POST", "file_handler.php");
    xhr.send(fd);
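
    Under the hood, those string fields end up encoded roughly like this (a simplified sketch of the multipart/form-data wire format; real FormData also generates a random boundary and handles files and value escaping):

    ```javascript
    // Sketch: how simple string fields are laid out in a
    // multipart/form-data body (the boundary here is arbitrary).
    function encodeMultipart(fields, boundary) {
      var body = "";
      for (var name in fields) {
        body += "--" + boundary + "\r\n";
        body += 'Content-Disposition: form-data; name="' + name + '"\r\n\r\n';
        body += fields[name] + "\r\n";
      }
      return body + "--" + boundary + "--\r\n";
    }
    ```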

    You can even get a FormData object which contains all the information from a <form>. (However note that the syntax for this is likely to change before final release)

    var fd = myFormElement.getFormData();
    var xhr = new XMLHttpRequest();"POST", "file_handler.php");
    xhr.send(fd);

    Here fd will contain data from all the form fields, from radio buttons to file fields, that are contained in the form.

    As always, we’re all ears for feedback about these features. Please let us know what you think, especially if you have tested them out and they do not appear to do what you expect. You can give us feedback here, or use the feedback button in the upper right corner of the browser.

  10. HTML5 adoption stories: and HTML5 drag and drop

    This is a guest post from Tomas Barreto, a developer who works at They recently adopted HTML5 drag and drop as a way to share files with other people, using new features in Firefox. The included video is a pitch for the feature and service, but it shows how easy it is to do simple HTML5-based upload progress, even with multiple files. Tomas gives an overview of the relatively simple JavaScript required to do this, and how improvements in Firefox 4 will make things even easier. Also have a quick look at the bottom of the post for links to additional docs and resources.

    At, we’re always exploring new ways to help users get content quickly and securely onto our cloud content management platform. So when asked, “What feature would make you use Box more?” during the Box Hack Olympics in April, my colleague CJ and I decided to tackle the most intuitive way to upload files: simply dragging them from the desktop into Box.

    We considered technologies ranging from Gears to Firefox plugins, but only HTML5 had sufficient adoption. By using some of the JavaScript APIs defined in the HTML5 standard, CJ and I could create a seamless drag and drop experience for our users on supporting browsers. Furthermore, using an HTML5-based upload feature would allow us to enable users to select multiple files at once, and also display progress on the client without polling. And with HTML5 adoption across the latest versions of three of the top four browsers, we felt confident about building an upload method based on this new technology without the trade-offs of using a third-party plug-in.

    We rolled out the first rev of our drag and drop feature a few weeks ago, and we’re impressed with how quickly it has been adopted. It’s already one of the most popular ways to get files onto Box, and in its first week it surpassed our old AJAX upload method. You can check out our demo video to get a feel for the feature:

    To build this feature, we referenced a handful of online examples that explained how to use the Firefox 3.6 FileReader object and the drag and drop file event support. Our first implementation used this object to load the file into memory and then took advantage of the latest XMLHttpRequest events to track progress on the client.

    var files = event.originalEvent.dataTransfer.files; // drop event
    var reader = new FileReader();
    reader.onload = function(event) {
      var file_contents =;
      var request = new XMLHttpRequest();
      ... // attach event listeners to monitor progress and detect errors
      var post_body = '';
      .. // build post body
      post_body += file_contents;
      .. // finish post body
      var url = '';"POST", url, true); // open asynchronous post request
      request.setRequestHeader('content-type', 'multipart/form-data; boundary=""'); // make sure to set a boundary
      request.sendAsBinary(post_body); // Firefox-specific: send the raw multipart body
    };
    reader.readAsBinaryString(files[0]);

    This approach worked well because we could use the same server processing code that we previously used for uploads. The main disadvantage is that the FileReader object reads the entire file into memory, which is not optimal for the general upload use case. Our current HTML5 implementation uses this logic, which has forced us to restrict drag and drop uploads to just 25MB. However, thanks to recommendations from the Mozilla team, we’ll be taking an alternative approach for V2 of drag and drop, where the file is read in chunks as needed by the request. Here’s how we’re going to do it:

    var files = event.originalEvent.dataTransfer.files; // drop event
    var url = '';
    var request = new XMLHttpRequest();"POST", url, true); // open asynchronous post request
    request.send(files[0]); // send the File object directly as the request body

    Since this approach is not formatted as a multipart form-data, it will require some adjustments on our back-end to support receiving file uploads in this way. However, it’s definitely worth the trade-off since we’ll get all the benefits of the previous method and we don’t need special file size restrictions. In the future, we’ll consider using yet another way to efficiently upload files that is supported in Firefox 4 and uses the traditional multi-part form:

    var files = event.originalEvent.dataTransfer.files; // drop event
    var url = '';
    var request = new XMLHttpRequest();
    var fd = new FormData();
    fd.append("myFile", files[0]);"POST", url, true); // open asynchronous post request
    request.send(fd);

    We’re already exploring more ways to enrich the Box experience using HTML5. With HTML5, we can build faster, richer and more interactive features with native browser support, and bridge the traditional gap between desktop software and web applications. Here are just a few cool new upload-related features on our roadmap:

    • Pause/Resume uploads using the Blob slice API to split files into chunks (this will be a huge robustness boost, especially for large uploads)
    • Allowing uploads to resume even after the browser closes by caching the file using IndexedDB support (possibly in Firefox 4)
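
    The chunking needed for pause/resume boils down to computing the byte ranges that would be handed to Blob.slice; a sketch:

    ```javascript
    // Sketch: the (start, end) byte ranges a resumable uploader would
    // pass to Blob.slice when splitting a file into fixed-size chunks.
    function chunkRanges(fileSize, chunkSize) {
      var ranges = [];
      for (var start = 0; start < fileSize; start += chunkSize) {
        ranges.push([start, Math.min(start + chunkSize, fileSize)]);
      }
      return ranges;
    }
    ```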

    We’d also like to begin a discussion about supporting the reverse drag and drop use case: dragging files from the browser to the desktop. Based on our users’ enthusiasm around the drag and drop upload feature, we think the reverse functionality would be well received. If you are interested in contributing to a specification for this feature, please let us know (html5 [at]!