How to resume a paused or broken file upload

This is a guest post written by Simon Speich. Simon is a web developer, believer in web standards and a lover of Mozilla since Mozilla 0.8 (!).

Today, Simon is experimenting with the File API and the new slice() method (implemented as mozSlice() in Firefox 4). Here is how he implements a resume upload feature in a file uploader.

Uploading a file is done with the XMLHttpRequest Level 2 (XHR2) object. It provides methods and events to handle the request (e.g., sending data and monitoring its progress) and the response (e.g., checking whether the upload succeeded or an error occurred). For more information, read How to develop an HTML5 Image Uploader.
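To make the rest of the article concrete, here is a minimal sketch of such an upload with progress monitoring (the URL upload.php and the variable file, a File object picked by the user, are placeholders and not part of the demo):

var xhr = new XMLHttpRequest();

// report upload progress while the data is being sent
xhr.upload.onprogress = function(evt) {
  if (evt.lengthComputable) {
    console.log('uploaded ' + evt.loaded + ' of ' + evt.total + ' bytes');
  }
};

// check the response once the request has finished
xhr.onload = function() {
  console.log(xhr.status === 200 ? 'upload done' : 'upload failed');
};

xhr.open('post', 'upload.php', true);
xhr.send(file);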

Unfortunately, the XHR object does not provide a method to pause and resume an upload. But it is possible to implement that functionality by combining the new File API’s slice() method with the XHR’s abort() method. Let’s see how.

Live demo

You can check out the live fileUploader demo or download the JavaScript and PHP code from github.com.

Pause and resume an upload

The idea is to provide the user with a button to pause an upload in progress and to resume it again later. Pausing the request is simple. Just abort the request with the abort() method. Make sure your user interface doesn’t report this as an error.
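In code, the pause handler can be as small as this sketch (req is the XMLHttpRequest of the upload in progress and ui stands in for whatever object tracks your interface state; both are placeholders):

// pausing is simply aborting the running request while remembering
// that the user asked for it, so the UI does not treat it as an error
function pauseUpload(req, ui) {
  ui.state = 'paused';
  req.abort();   // fires the abort event, not the error event
}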

The harder part is resuming the upload, since the request was aborted and the connection closed. Instead of sending the whole file again, we use the blob’s mozSlice() method to first create a chunk containing the remaining part of the file. Then we create a new request, send the chunk, and have the server append it to the part that was already saved before the request was aborted.

Creating a chunk

The chunk can be created as:

var chunk = file.mozSlice(start, end);

All we need to know is where to start slicing, that is, the number of bytes that have already been uploaded. The easiest way would be to save the ProgressEvent’s loaded property before aborting the request. However, this number is not necessarily the same as the number of bytes actually written on the server. The most reliable approach is to send an additional request to fetch the size of the partially written file from the server before we upload again. Then this information can be used to slice the file and create the chunk.
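A sketch of that extra request could look like the following (the URL fnc.php?fnc=getNumWrittenBytes and the plain-text response are assumptions for illustration, not the demo’s actual API; adapt them to your backend):

// ask the server how many bytes of this file it has already written
function getWrittenBytes(fileName, callback) {
  var req = new XMLHttpRequest();
  req.open('get', 'fnc.php?fnc=getNumWrittenBytes&name=' + encodeURIComponent(fileName), true);
  req.onload = function() {
    callback(parseInt(req.responseText, 10) || 0);
  };
  req.send(null);
}

// slice from the reported position and upload only the remaining part,
// as shown in the "JavaScript code" section below
getWrittenBytes(file.name, function(start) {
  var chunk = file.mozSlice(start, file.size);
});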

Summarizing the above chain of events (assuming an upload is already in progress):

  1. user pauses upload
  2. state of UI is set to paused
  3. uploading is aborted
  4. server stops writing file to disk
  5. user resumes upload
  6. state of UI is set to resuming
  7. get size of partially written file from server
  8. slice file into remaining part (chunk)
  9. upload chunk
  10. state of UI is set to uploading
  11. server appends data

JavaScript code

// Assuming that the request to fetch the already written bytes has just
// taken place and the server answered with JSON like {"numWrittenBytes": 12345}
// in xhr.responseText (the exact response format is up to your backend).
var start = JSON.parse(xhr.responseText).numWrittenBytes;
var chunk = file.mozSlice(start, file.size);

var req = new XMLHttpRequest();
req.open('post', 'fnc.php?fnc=resume', true);

req.setRequestHeader("Cache-Control", "no-cache");
req.setRequestHeader("X-Requested-With", "XMLHttpRequest");
req.setRequestHeader("X-File-Name", file.name);
req.setRequestHeader("X-File-Size", file.size);

req.send(chunk);

PHP code

The only difference on the server side between handling a normal upload and a resumed upload is that in the latter case you need to append to your file instead of creating it.

$headers = getallheaders();
$protocol = $_SERVER['SERVER_PROTOCOL'];
$fnc = isset($_GET['fnc']) ? $_GET['fnc'] : null;
$file = new stdClass();
$file->name = basename($headers['X-File-Name']);
$file->size = $headers['X-File-Size'];

// php://input bypasses the php.ini settings, so we have to limit the file size ourselves:
$maxUpload = getBytes(ini_get('upload_max_filesize'));
$maxPost = getBytes(ini_get('post_max_size'));
$memoryLimit = getBytes(ini_get('memory_limit'));
$limit = min($maxUpload, $maxPost, $memoryLimit);
if ($headers['Content-Length'] > $limit) {
  header($protocol.' 403 Forbidden');
  exit('File size too big. Limit is '.$limit.' bytes.');
}

$file->content = file_get_contents('php://input');
$flag = ($fnc == 'resume' ? FILE_APPEND : 0);
file_put_contents($file->name, $file->content, $flag);

function getBytes($val) {
  $val = trim($val);
  $last = strtolower($val[strlen($val) - 1]);
  switch ($last) {
    // intentional fall-through: 'g' multiplies by 1024 three times, 'm' twice, 'k' once
    case 'g': $val *= 1024;
    case 'm': $val *= 1024;
    case 'k': $val *= 1024;
  }

  return $val;
}

Caution!

The PHP code example above does not do any security checks. A user can send and write any type of file to your disk or append to or even overwrite any of your files. So make sure you take the appropriate security measures when enabling uploading on your website.

Resume upload after an error

The sequence of events for pause-and-resume can also be used to continue uploading after a network error. Instead of trying to upload the whole file again, get the already written file size from the server and slice the file into a new chunk first.
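One way to hook this up is through the upload’s error event, sketched below (req and ui are the placeholders from the pause sketch above, and resumeUpload stands for whatever function performs steps 7–9 of the list above):

// treat a network error like a pause and trigger the resume flow:
// fetch the written size, slice the file and re-send the remaining chunk
req.upload.onerror = function() {
  ui.state = 'broken';
  resumeUpload(file);   // or wait for the user to click "resume"
};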

Note about resuming a paused or broken file upload

Appending the chunk to the file might create a corrupted file, since you don’t have control over what the server writes after the request is aborted — if it writes anything at all.

Resume upload after a browser crash

You can take the pause-and-resume functionality even a step further. It is possible (at least in theory) to recover an upload even after the browser closes unexpectedly or crashes. The problem is that once the browser is closed, the file object that was read into memory is lost. The user would first have to re-pick or drag over the file again before it could be sliced and the upload resumed.

Instead, you could use the new IndexedDB API and store the file before any uploading is done. Then, after a browser crash, load the file from the database, slice off the remaining chunk and resume the upload.
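Here is a rough sketch of that idea, using the unprefixed, current form of the IndexedDB API and assuming the browser can store File objects directly (see the comments below on when this became possible in Firefox); the database and store names 'uploader' and 'files' are made up for the example:

// keep the picked file in IndexedDB so it survives a browser crash
var openReq = indexedDB.open('uploader', 1);

openReq.onupgradeneeded = function() {
  openReq.result.createObjectStore('files');
};

openReq.onsuccess = function() {
  var db = openReq.result;
  db.transaction('files', 'readwrite')
    .objectStore('files')
    .put(file, file.name);   // store the File object under its name
};

// after a restart, read the file back with get(file.name), slice off the
// already uploaded part and resume as before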

About Simon Speich

Simon Speich is a web developer, believer in web standards and a lover of Mozilla since Mozilla 0.8. He is also passionate about photography. You can find out more about him on his website www.speich.net.


About Paul Rouget

Paul is a Firefox developer.



18 comments

  1. Tim Reynolds

    I am concerned about the security implications of the new APIs that are exposed. What about ‘finishing’ an upload as a way to tack on malicious code? I think that along with the API there should be a standard server-side component developed so people don’t make the mistake of naive implementations.

    April 8th, 2011 at 10:59

    1. voracity

      I don’t understand your concerns.

      For one, the new APIs don’t introduce any new capabilities for manipulating servers, only browsers.

      For two, this isn’t going to encourage a swag of novice developers to try their hand at fancy file uploaders. If anything, it makes it *less* likely that novice developers will try that kind of thing because these APIs make it difficult to create a “state of the art” file uploader, with all the whiz-bangery (like progress bars and thumbnail previews and resumes and whatnot). Thus it is likely that 3rd parties will step in and provide drop-in libraries to make that stuff easy, just like with HTML editors and even simple things like photo zooming (e.g. Lightbox, etc.) Those 3rd parties would then do *exactly* what you (and perhaps even Ted) are asking for: creating standard, secure server side components.

      Let’s not have the browser makers be responsible for *all* the innovations: web library devs can do (and are doing) their bit too.

      April 9th, 2011 at 03:59

      1. voracity

        PS: It’s nice to see actual code on hacks.mozilla.org again. Apart from the Wiki Wednesday articles, it would be good if every post here was required to present at least 1 line of code.

        April 9th, 2011 at 04:03

  2. Tony Mechelynck

    To address Tim Reynolds’s concerns, and some other ones of mine, there should be a way to prevent a resume if the file has changed in the meantime. Comparing the partly saved file with the same length at the begin of the local file could be costly, but in the case of ADSL (where downloads are much faster than uploads) it might still be sufficiently faster than re-uploading from scratch to merit consideration. Another possibility (which does not address the case of browser crashes including AC power failures) would be saving a hash of the partial upload immediately after the abort (once it can be reasonably assumed that the last partial buffer has been written out on the remote storage) and saving that hash somewhere (maybe in disk cache) where we can hope to get it back from even after a restart.

    April 8th, 2011 at 11:47

  3. Bazzargh

    IIRC the browser doesnt handle resuming downloads after a crash either-I posted a patch on that bug aaages ago, probably bitrotted by now, but couldnt get a unit test for it running
    https://bugzilla.mozilla.org/show_bug.cgi?id=435799
    …should anyone be motivated to pick it up…
    It became much less of an issue for me over time, as firefox became more stable in later releases.

    April 8th, 2011 at 13:39

  4. Jeremy Walton

    I just made an uploader almost the same. Except, instead of doing full uploads I upload files in chunks. Firstly, because then I can set PHP to have an INSANELY small post size. This allows php to have a much smaller memory footprint. Secondly, for the small chunk, in javascript, I calculate the SHA1 of the chunk, this guarantees, that the chunk is sent over correctly. The two drawbacks are, depending on the chunk size and the clients you are serving, you could get a slightly slower upload and you upload a bit more data since for each chunk you are sending new http headers. This also allows me to do pause and resume a file. Add in a full file sha1, with a filename and size, and you could resume a file, even if they were to close the browser.

    April 11th, 2011 at 22:32

  5. Yansky

    “Instead, you could use the new IndexedDB API and store the file before any uploading is done”

    Could you clarify this? Do you mean you can store the actual file in the IndexedDB as a blob or do you mean you can store a reference to the file?

    Cheers.

    April 12th, 2011 at 02:16

  6. Simon

    @Yansky: You can’t store the binary data directly as a blob in the indexedDB since it only allows you to store javascript objects. But you can use the FileReader API to convert your file to a string first and then store it in the DB as an object’s property together with the file size and file name. This might not work for large files though, but haven’t had time to test it yet.

    Storing only the file reference wouldn’t help, since the user would have to re-access the file himself.

    April 12th, 2011 at 06:50

    1. Simon Speich

      With the upcoming Firefox 11 it will be possible to store files directly in the indexedDB, see https://bugzilla.mozilla.org/show_bug.cgi?id=712621#c3

      February 9th, 2012 at 11:01

      1. Simon Speich

        There’s a new post on hacks.mozilla.org which explains how to store a file in the IndexedDB: Storing images and files in IndexedDB. This could be used to resume an upload after a browser crash.

        February 26th, 2012 at 03:22

  7. Davit

    Are you just making up these X-* headers (like X-File-Name), or are they outlined (recommended) somewhere?

    August 19th, 2011 at 03:54

  8. Simon Speich

    X-something indicates a custom header, so yes they are made up, but especially the X-Requested-With is used widely.

    August 19th, 2011 at 23:43

  9. Alejandro Invertir en bolsa

    Excelent information. My internet connection is not good so I will try to implement this idea.

    September 9th, 2011 at 16:21

  10. Almas

    Hello,
    I can’t drop image from my desktop. When I drop it then the image is just view in my browser. The upload system is gone then.

    May I get the demo by browsing images, not drag and drop?

    Any help?

    Thanks
    Almas

    January 5th, 2012 at 05:05

  11. Simon Speich

    The demo doesn’t work at the moment. I’ll fix it as soon as possible and let you know.

    January 6th, 2012 at 01:17

  12. Simon Speich

    The demo works again. Sorry for the inconvenience.
    I updated the demo to work with dojo 1.7.1 and I switched to using dojo’s new AMD loader.

    January 7th, 2012 at 10:20

  13. Almas

    Hi,

    Drag and drop is working. Thanks for your fixing. But can it be done for multiple selection files. And when upload button is clicked then all files uploading shall be started at a time. From there I can pause a single file. Or if any internet connection is disrupted then all the uploading shall be paused. And these shall resume when net connection available.

    Thanks

    January 7th, 2012 at 10:46

  14. filkor

    This not worked me :( so I’ve made my own demo…

    http://dnduploader.filkor.org/

    – Close your browser or disconnect from the internet, you can continue the upload process when come back and drop the same files
    – Multiple files
    – All files start when hit the upload btn.
    – You can pause a single file, too
    – Source on github

    June 30th, 2012 at 14:29

Comments are closed for this article.