Mozilla

IndexedDB Articles

  1. Breaking the Borders of IndexedDB

    In this article I want to share with you how to do some cool IndexedDB queries that aren’t ‘possible’ out of the box unless you add some ‘tricks’.

    The algorithms I’m going to show, except the ‘full-text-search’ one, were invented by me while writing the open source JavaScript library Dexie.js. Some of them are not unique inventions, since other people have discovered them too, but I believe that the case insensitive search algorithm, at least, is unique to my Dexie.js library.

    All code snippets in this article are conceptual. If you need bulletproof code, I encourage you to dive into the source of Dexie.js, where all of these algorithms are implemented and thoroughly unit tested.

    The Four IndexedDB Queries

    The IndexedDB API is only capable of doing four types of queries:

    1. IDBKeyRange.only (Exact match)
    2. IDBKeyRange.upperBound() – Find objects where property X is below a certain value
    3. IDBKeyRange.lowerBound() – Find objects where property X is above a certain value
    4. IDBKeyRange.bound() – Find objects where property X is between two certain values.
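To make the semantics concrete, each range type can be read as a plain predicate on keys. The sketch below is purely illustrative; the real API constructs IDBKeyRange objects rather than functions:

```javascript
// Illustrative predicates mirroring the matching semantics of the four
// IDBKeyRange types. The boolean 'open' flags exclude the bound itself,
// just like the optional arguments of the real API.
function only(value)           { return function (k) { return k === value; }; }
function upperBound(max, open) { return function (k) { return open ? k < max : k <= max; }; }
function lowerBound(min, open) { return function (k) { return open ? k > min : k >= min; }; }
function bound(min, max, openLo, openHi) {
    return function (k) {
        return (openLo ? k > min : k >= min) && (openHi ? k < max : k <= max);
    };
}

// Example: keys between 10 (inclusive) and 20 (exclusive)
var inRange = bound(10, 20, false, true);
```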

    What I Will Show

    I am going to show you how to extend IndexedDB to support the following:

    anyOf()
    Find objects where property X equals any of a given set of possible values
    equalsIgnoreCase()
    Case insensitive search
    startsWith()
    Find strings starting with a certain prefix
    startsWithIgnoreCase()
    Find strings starting with case insensitive prefix
    or()
    Combine two queries and get the union of these two.
    full-text-search
    Find objects by searching for words contained in a large text

    The list could be much longer than that, but let’s start with these for now.

    None of these extended query types, except full-text search, requires you to store any meta-data. They will work with any IndexedDB database already in use. For example, you do not have to store lowercase versions of the strings you need to match case insensitively.

    So let’s start with the ‘tricks’ that break the borders of IndexedDB.

    anyOf()

    In SQL, this is equivalent to the IN keyword:

    SELECT * FROM TABLE WHERE X IN (a,b,c)

    The reason I begin with this one is that it is pretty straightforward and simple to understand. Understanding this one will make it easier to understand the algorithms for doing case insensitive search.

    Some Background Knowledge

    Let’s start by diving into some IndexedDB basics:
    To iterate ALL records in a table with IndexedDB, you call indexOrStore.openCursor() on the IDBObjectStore or IDBIndex instance. For each record, you’ll get called back on your onsuccess callback:

    // Start a transaction to operate on your 'friends' table
    var trans = db.transaction(["friends"], "readonly");
     
    // Get the 'name' index from your 'friends' table.
    var index = trans.objectStore("friends").index("name");
     
    // Start iteration by opening a cursor
    var cursorReq = index.openCursor();
     
    // When any record is found, you get notified in onsuccess
    cursorReq.onsuccess = function (e) {
     
        var cursor = e.target.result;
        if (cursor) {
            // We have a record in cursor.value
            console.log(JSON.stringify(cursor.value));
            cursor.continue();
        } else {
            // Iteration complete
        }
    };

    Overloaded Version of IDBCursor.continue()

    In the sample above, we call cursor.continue() without any argument. This makes the cursor advance to the next key. But if we provide an argument to cursor.continue(nextKey), we tell the cursor to fast-forward to the given key. If we were iterating the “name” index and wrote:

    cursor.continue("David");

    …the next onsuccess would have a cursor positioned at the first record where name equals “David”, if that key exists. The specification requires the cursor to be positioned at the first record whose key is equal to or greater than the specified key in the index’s sort order.
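This landing behavior can be modeled as a lower-bound search over the index's sorted keys. The helper below is an illustrative stand-in for what the cursor does, not actual IndexedDB code:

```javascript
// Returns the index of the first element >= target in a sorted array,
// mirroring where cursor.continue(target) would land the cursor.
function landingIndex(sortedKeys, target) {
    var lo = 0, hi = sortedKeys.length;
    while (lo < hi) {
        var mid = (lo + hi) >> 1;
        if (sortedKeys[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo; // sortedKeys.length means "past the end" (the cursor becomes null)
}
```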

    The Algorithm

    So let’s say we have a huge database of friends and friends of friends with various names. Now we want to find all friends that have any of the following names: “David”, “Daniel” or “Zlatan”.

    We are going to do a cursor based iteration on the ‘name’ index as shown above, and use cursor.continue(nextName) to fast forward to the names we are looking for. When opening a cursor on an index, the order of found items will be in the sort order of that index. So if we do it on the ‘name’ index, we will get all friends in the sort order of their ‘name’.

    What we want to do first, is to sort() the array of names we are looking for, so we get the following JavaScript array:

    ["Daniel", "David", "Zlatan"]

    (since n comes before v). Then we do the following:

    1. call cursor.continue("Daniel") (first item in sorted set)
    2. onsuccess: Check if cursor.key == "Daniel". If so, include cursor.value in result and call cursor.continue() without arguments to check for more Daniels.
    3. When no more Daniels found, call cursor.continue("David") (next item…)
    4. onsuccess: Check if cursor.key == "David". If so, include cursor.value in result and call cursor.continue() without arguments to check for more Davids.
    5. When no more Davids found, call cursor.continue("Zlatan")
    6. onsuccess: Check if cursor.key == "Zlatan". If so, include cursor.value in result and call cursor.continue() without arguments to check for more Zlatans. Else, we can stop iterating by simply not calling cursor.continue() anymore, because we know we won’t find any more results (the set was sorted!)

    Sample Implementation of the Algorithm

    function comparer (a,b) {
        return a < b ? -1 : a > b ? 1 : 0;
    }
     
    function equalsAnyOf(index, keysToFind, onfound, onfinish) {
     
        var set = keysToFind.sort(comparer);
        var i = 0;
        var cursorReq = index.openCursor();
     
        cursorReq.onsuccess = function (event) {
            var cursor = event.target.result;
     
            if (!cursor) { onfinish(); return; }
     
            var key = cursor.key;
     
            while (key > set[i]) {
     
                // The cursor has passed beyond this key. Check next.
                ++i;
     
                if (i === set.length) {
                    // There is no next. Stop searching.
                    onfinish();
                    return;
                }
            }
     
            if (key === set[i]) {
                // The current cursor value should be included and we should continue
                // a single step in case next item has the same key or possibly our
                // next key in set.
                onfound(cursor.value);
                cursor.continue();
            } else {
                // cursor.key not yet at set[i]. Forward cursor to the next key to hunt for.
                cursor.continue(set[i]);
            }
        };
    }
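To watch the skipping behavior without a browser, the scan can be simulated over a plain sorted array standing in for the index. simulateAnyOf() is a hypothetical helper for illustration, not part of Dexie.js; jumps counts how many times the real algorithm would have called cursor.continue() with an argument:

```javascript
// Simulates the anyOf() scan over a sorted list of index keys.
// Returns the matching keys and how many fast-forward jumps were made.
function simulateAnyOf(sortedKeys, keysToFind) {
    var set = keysToFind.slice().sort();
    var i = 0, pos = 0, found = [], jumps = 0;

    while (pos < sortedKeys.length && i < set.length) {
        var key = sortedKeys[pos];
        while (key > set[i]) {
            // The cursor has passed beyond this key. Check next.
            if (++i === set.length) return { found: found, jumps: jumps };
        }
        if (key === set[i]) {
            found.push(key);   // same as onfound(cursor.value)
            ++pos;             // cursor.continue() - a single step forward
        } else {
            // cursor.continue(set[i]) - fast-forward to the next candidate key
            ++jumps;
            while (pos < sortedKeys.length && sortedKeys[pos] < set[i]) ++pos;
        }
    }
    return { found: found, jumps: jumps };
}
```

For an index holding ["Adam", "Daniel", "Daniel", "David", "Eve", "Zlatan"], looking for David, Daniel and Zlatan covers the whole index with only two fast-forward jumps.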

    Case Insensitive Search

    Dexie.js implements case insensitive searching using the same IDBCursor.continue(key) technique as the anyOf() algorithm above; however, a little more complexity in the algorithm is needed.

    Let’s say we need to find “david” in table “friends” no matter its casing. “David” or “david” should both be found. Obviously, we could create an array containing all possible case combinations of “david” and then use the anyOf() algorithm above. However, the number of combinations would increase exponentially with the length of the key we’re looking for. But there is a trick we can use: since cursor.continue() will land on the next record in sort order, landing on a key that is not a match reveals which combinations we can skip.
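To see why brute-forcing every casing is a dead end, count the variants: each letter that has distinct upper and lower case doubles the number of combinations, so an n-letter needle yields 2^n keys to probe. A quick illustration (not part of Dexie.js):

```javascript
// Generates every case variant of a string. Fine for tiny inputs, hopeless
// for real keys: "david" alone already yields 2^5 = 32 variants.
function caseCombinations(str) {
    var results = [''];
    for (var i = 0; i < str.length; ++i) {
        var lower = str[i].toLowerCase(), upper = str[i].toUpperCase();
        var next = [];
        results.forEach(function (prefix) {
            next.push(prefix + lower);
            if (upper !== lower) next.push(prefix + upper);
        });
        results = next;
    }
    return results;
}
```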

    What we do is to start searching for the lowest possible value of “David”, which would be “DAVID” since uppercase unicode characters have a lesser value than their corresponding lowercase versions. If there is no “DAVID” in the DB, we will land on the least possible key yet greater than “DAVID”. Now, instead of testing the next combination of davids (which would be “DAVId”), we first inspect the key we landed on and from that key derive what would be the next “david” variant to search for. Look at the function nextCasing() in the code snippet below to see how we derive the next possible version of a key based on the key that the IDBCursor landed on. I will not go through it line by line here, but instead refer to the code comments in function nextCasing(key, lowerKey) {…} below.

    function findIgnoreCaseAlgorithm(index, needle, onfound, onfinish) {
     
        // index: An instance of IDBIndex
        // needle: The string to search for
        // onfound: A function to call for each found item
    // onfinish: A function to call when we're finished searching.
     
        var upperNeedle = needle.toUpperCase();
        var lowerNeedle = needle.toLowerCase();
        var cursorReq = index.openCursor();
     
        cursorReq.onsuccess = function (event) {
            var cursor = event.target.result;
            if (!cursor) {
                // No more data to iterate over - call onfinish()
                onfinish();
                return;
            }
     
            var key = cursor.key;
        if (typeof key !== 'string') {
            // Just in case we stumble on data that isn't what we expect -
            // toLowerCase() won't work on this value. Check next.
            cursor.continue();
            return;
        }
     
            var lowerKey = key.toLowerCase();
            if (lowerKey === lowerNeedle) {
            onfound(cursor.value); // Notify caller we found something
                cursor.continue(); // Check next record, it might match too!
            } else {
                // Derive least possible casing to appear after key in sort order
            var nextNeedle = nextCasing(key, lowerKey);
                if (nextNeedle) {
                    cursor.continue(nextNeedle);
                } else {
                    // No more possible case combinations to look for.
                // Call onfinish() and don't call cursor.continue() anymore.
                    onfinish();
                }
            }
        };
     
        function nextCasing(key, lowerKey) {
            var length = Math.min(key.length, lowerNeedle.length); // lowerNeedle is from outer scope
            var llp = -1; // "llp = least lowerable position"
     
            // Iterate through the most common first chars for cursor.key and needle.
            for (var i = 0; i < length; ++i) {
                var lwrKeyChar = lowerKey[i];
     
                if (lwrKeyChar !== lowerNeedle[i]) {
                    // The char at position i differs between the found key and needle being
                    // looked for when just doing case insensitive match.
                    // Now check how they differ and how to trace next casing from this:
                    if (key[i] < upperNeedle[i]) {
                        // We could just append the UPPER version of the key we're looking for
                        // since found key is less than that.
                        return key.substr(0, i) + upperNeedle[i] + upperNeedle.substr(i + 1);
                    }
     
                    if (key[i] < lowerNeedle[i]) {
                        // Found key is between lower and upper version. Lets first append a
                        // lowercase char and the rest as uppercase.
                        return key.substr(0, i) + lowerNeedle[i] + upperNeedle.substr(i + 1);
                    }
     
                    if (llp >= 0) {
                        // Found key is beyond this key. Need to rewind to last lowerable
                        // position and return key + 1 lowercase char + uppercase rest.
                    return key.substr(0, llp) + lowerKey[llp] + upperNeedle.substr(llp + 1);
                    }
     
                    // There are no lowerable positions - all chars are already lowercase
                    // (or non-lowerable chars such as space, periods etc)
     
                    return null;
                }
     
            if (key[i] < lwrKeyChar) {
                // Making this char lowercase would make it appear after key.
                // Therefore set llp = i.
                llp = i;
            }
        }
     
            // All first common chars of found key and the key we're looking for are equal
            // when ignoring case.
            if (length < lowerNeedle.length) {
                // key was shorter than needle, meaning that we may look for key + UPPERCASE
                // version of the rest of needle.
                return key + upperNeedle.substr(key.length);
            }
     
            // Found key was longer than the key we're looking for
            if (llp < 0) {
                // ...and there is no way to make key we're looking for appear after found key.
                return null;
            } else {
                // There is a position of a char, that if we make that char lowercase,
                // needle will become greater than found key.
                return key.substr(0, llp) + lowerNeedle[llp] + upperNeedle.substr(llp + 1);
            }
        }
    }

    Performance

    In a performance test I created 10,000 objects with random strings and let the equalsIgnoreCase() algorithm find items among them. In Opera, it took between 4 and 5 milliseconds to find 6 matching items among the 10,000. In the latest Firefox, it took 7 ms. Iterating through all 10,000 items and comparing manually took 1514 ms in Firefox and 346 ms in Opera instead.

    startsWith(str)

    Matching the prefix of string keys is the most straightforward trick you can do with IndexedDB. It’s not unique to Dexie.js, as other libraries support it as well. However, for the sake of completeness, here is how it is done: just do an IDBKeyRange.bound() where lowerBound is the prefix and upperBound is a string that includes all possible continuations of the prefix. The simplest way to do this is to append the character with the highest possible value: ‘\uffff’.

    IDBKeyRange.bound(prefix, prefix + '\uffff', false, false)
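This works because of plain lexicographic key ordering: every string starting with the prefix sorts at or after the prefix itself and at or before prefix + '\uffff' (assuming your keys never contain '\uffff' themselves). The property can be checked with ordinary string comparisons:

```javascript
// A string s starts with prefix exactly when it falls inside the key range
// [prefix, prefix + '\uffff'] under ordinary lexicographic comparison
// (assuming keys don't themselves contain '\uffff').
function inPrefixRange(s, prefix) {
    return s >= prefix && s <= prefix + '\uffff';
}
```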

    startsWithIgnoreCase(str)

    When the matched string should be compared without case sensitivity, we need to do a cursor search, as for equalsIgnoreCase(). We can actually take the same algorithm but change how we compare each entry: instead of comparing with the ‘==’ operator, we use String.indexOf(), which returns 0 if the string starts with the given value. So, just copy the code samples from equalsIgnoreCase() above and change the onsuccess part to the following:

    cursorReq.onsuccess = function (event) {
        var cursor = event.target.result;
        if (!cursor) {
            // No more data to iterate over - call onfinish()
            onfinish();
            return;
        }
     
        var key = cursor.key;
        if (typeof key !== 'string') {
        // Just in case we stumble on data that isn't what we expect -
        // toLowerCase() won't work on this value. Check next.
            cursor.continue();
            return;
        }
     
        var lowerKey = key.toLowerCase();
        if (lowerKey.indexOf(lowerNeedle) === 0) {
        onfound(cursor.value); // Notify caller we found something
            cursor.continue(); // Check next record, it might match too!
        } else {
            // Derive least possible casing to appear after key in sort order
        var nextNeedle = nextCasing(key, lowerKey);
            if (nextNeedle) {
                cursor.continue(nextNeedle);
     
            } else {
                // No more possible case combinations to look for.
                // Call onfinish() and dont call cursor.continue() anymore.
                onfinish();
            }
        }
    };

    Logical OR

    IndexedDB has no support for logical OR. You can only specify one key range at a time. However, it does support running several queries in parallel – even within the same transaction (as long as the queries are read-only). Even if the queries didn’t run in parallel, it would still work, just a little less performantly. The OR operation in Dexie.js has been unit tested with Chrome, IE, Firefox and Opera.

    The only thing we need to do, besides executing both queries in parallel, is to remove any duplicates. To do that, we can use a set of found primary keys. Whenever a new record match is found by either of the parallel queries, we first check whether it is already in the set. If not, we call onfound for the entry and set set[primKey] = true, so that if the same entry is found by the other query, it silently skips calling onfound().

    Here’s how it’s done. The code snippet below is a simplified version supporting only the logical OR of two standard IDBKeyRange queries. With Dexie, you can do an arbitrary number of ORs with any standard or extended operation such as equalsIgnoreCase().

    function logical_or(index1, keyRange1, index2, keyRange2, onfound, onfinish) {
        assert(index1.objectStore === index2.objectStore); // OR can only be done within the same store
        var primKey = index1.objectStore.keyPath;
        var openCursorRequest1 = index1.openCursor(keyRange1);
        var openCursorRequest2 = index2.openCursor(keyRange2);
        var set = {};
        var resolved = 0;
     
        function complete() {
            if (++resolved === 2) onfinish();
        }
     
        function union(item) {
            var key = JSON.stringify(item[primKey]);
            if (!set.hasOwnProperty(key)) {
                set[key] = true;
                onfound(item);
            }
        }
     
    openCursorRequest1.onsuccess = function (event) {
        var cursor = event.target.result;
        if (cursor) {
            union(cursor.value);
            cursor.continue(); // Keep iterating, or we would stop after the first match
        } else {
            complete();
        }
    };

    openCursorRequest2.onsuccess = function (event) {
        var cursor = event.target.result;
        if (cursor) {
            union(cursor.value);
            cursor.continue();
        } else {
            complete();
        }
    };
    }

    To access this algorithm with Dexie.js, you type something like the following:

    db.friends.where('name').equalsIgnoreCase('david').or('shoeSize').above(40).toArray(fn)

    … which would make given callback function (fn) be called with an array of the results.

    When using parallel OR execution, the sort order will not be valid. Partly because the two different queries execute on different indexes with different sort order, but mainly because the two queries run in parallel by the browser background threads and we cannot know which onsuccess is called before the other. However, this can be resolved by implementing onfound to push the items to an array, and onfinish to sort it using any requested sort order:

    var index1 = transaction.objectStore("friends").index("name");
    var index2 = transaction.objectStore("friends").index("shoeSize");
    var keyRange1 = IDBKeyRange.bound("Da", "Da\uffff");
    var keyRange2 = IDBKeyRange.lowerBound(40, true);
    //
    // SELECT * FROM friends WHERE name LIKE 'Da%' OR shoeSize > 40 ORDER BY shoeSize;
    //
     
    var result = [];
    logical_or (index1, keyRange1, index2, keyRange2, function (friend) {
        result.push(friend);
    }, function () {
        result.sort (function (a,b) { return a.shoeSize - b.shoeSize; });
     
    });

    Full Text Search

    Searching for certain words in a large string (text field) is another common use case that is not supported out of the box by IndexedDB. However, it can easily be implemented by splitting the text into words and storing the resulting set in a ‘multiEntry’ index. Unlike the other algorithms in this article, full-text search does require meta-data: you must add a specific multiEntry index, and each time a text changes, the index must be updated with the resulting set of words in the text.

    1. When defining your schema, create an index “words” with multiEntry set to true.
      • In bare-bone IndexedDB: store.createIndex("words", "words", { multiEntry: true });
      • In Dexie.js: prefix the index name with an asterisk (*): db.version(1).stores({blogs: '++id,title,text,*words'});
    2. Whenever you update your text, split the text into words and store all unique words in the “words” property of the object.

      var blogEntry = new BlogEntry(
       
          "Blogging about IndexedDB",
       
          "Much to say about IndexedDB there is... bla bla bla - hope I'm not boring you...");
       
      db.put(blogEntry);
       
      blogEntry.setText("...changing my blog post to another IndexedDB text...");
       
      db.put(blogEntry);
       
      function BlogEntry(title, text) {
       
          ...
       
          this.setText = function (value) {
              this.text = value;
              this.words = getAllWords(value);
          }
       
          function getAllWords(text) {
              var allWordsIncludingDups = text.split(' ');
              var wordSet = allWordsIncludingDups.reduce(function (prev, current) {
                  prev[current] = true;
                  return prev;
              }, {});
              return Object.keys(wordSet);
          }
      }
    3. To find an item by searching for a word, use the “words” index to launch the query on.
      • bare-bone IndexedDB: index.openCursor(IDBKeyRange.only('IndexedDB'));
      • Dexie.js: db.blogs.where('words').equalsIgnoreCase('indexeddb')
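The getAllWords() helper in step 2 splits on single spaces, so a word followed by punctuation (such as “IndexedDB...”) would be indexed as a different word than its clean form. A slightly more robust tokenizer, shown here as an illustrative variation rather than Dexie's actual sample code, lowercases the text and splits on anything that is not a letter or digit (ASCII only, for brevity):

```javascript
// Splits text on anything that isn't an ASCII letter or digit, lowercases,
// and removes duplicates - a better fit for the 'words' multiEntry index.
function tokenize(text) {
    var words = text.toLowerCase().split(/[^a-z0-9]+/).filter(function (w) {
        return w.length > 0;
    });
    var seen = {};
    return words.filter(function (w) {
        if (seen.hasOwnProperty(w)) return false;
        seen[w] = true;
        return true;
    });
}
```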

    A full example of how to do this with Dexie can be found in the FullTextSearch.js sample file in the Dexie.js repository.

    Live Samples

    If you want to play with the samples in your browser or phone, go to Samples on the Dexie.js wiki where you have these samples and some more to experiment with.

    A Last Word

    Whether you are just curious about the IndexedDB API or you’re implementing your own IndexedDB library, I hope the techniques shown in this article prove useful when you need to maximize the possibilities of IndexedDB.

    I would also like to encourage you to have a look at Dexie.js to discover how it may help you achieve your goals. A lot of effort has been put into making this library well documented and tested. The library is still young, but the few users who have discovered it so far have expressed a great deal of enthusiasm in the Dexie.js forum.

  2. The Making of the Time Out Firefox OS app

    A rash start into adventure

    So we told our client that yes, of course, we would do their Firefox OS app. We didn’t know much about FFOS at the time. But, hey, we had just completed refactoring their native iOS and Android apps. Web applications were our core business all along. So what was to be feared?

    More than we thought, it turned out. Some of the dragons along the way we fought and defeated ourselves. At times we feared that we wouldn’t be able to rescue the princess in time (i.e. before MWC 2013). But whenever we got really lost in detail forest, the brave knights from Mozilla came to our rescue. In the end, it all turned out well and the team lived happily ever after.

    But here’s the full story:

    Mission & challenge

    Just like their iOS and Android apps, Time Out‘s new Firefox OS app was supposed to allow browsing their rich content on bars, restaurants, things to do and more by category, area, proximity or keyword search, patient zero being Barcelona. We would need to show results as illustrated lists as well as visually on a map and have a decent detail view, complete with ratings, access details, phone button and social tools.

    But most importantly, and in addition to what the native apps did, this app was supposed to do all of that even when offline.

    Oh, and there needed to be a presentable, working prototype in four weeks’ time.

    Cross-platform reusability of the code, as a mobile website or as the base of HTML5 apps on other mobile platforms, was clearly priority 2, but still to be kept in mind.

    The princess was clearly in danger. So we arrested everyone on the floor that could possibly be of help and locked them into a room to get the basics sorted out. It quickly emerged that the main architectural challenges were that

    • we had a lot of things to store on the phone, including the app itself, a full street-level map of Barcelona, and Time Out’s information on every venue in town (text, images, position & meta info),
    • at least some of this would need to be loaded from within the app; once initially and synchronizable later,
    • the app would need to remain interactively usable during these potentially lengthy downloads, so they’d need to be asynchronous,
    • whenever the browser location changed, these downloads would be interrupted.

    In effect, all the different functionalities would have to live within one single HTML document.

    One document plus hash tags

    For dynamically rendering, changing and moving content around as required in a one-page-does-all scenario, JavaScript alone didn’t seem like a wise choice. We’d been warned that Firefox OS was going to roll out on a mix of devices including the very low cost class, so it was clear that fancy transitions of entire full-screen contents couldn’t be orchestrated through JS loops if they were to happen smoothly.

    On the plus side, there was no need for JS-based presentation mechanics. With Firefox OS not bringing any graveyard of half-dead legacy versions to cater to, we could (finally!) rely on HTML5 and CSS3 alone and without fallbacks. Even beyond FFOS, the quick update cycles in the mobile environment didn’t seem to block the path for taking a pure CSS3 approach further to more platforms later.

    That much being clear, which better place to look for best practice examples than Mozilla Hacks? After some digging, Thomas found Hacking Firefox OS in which Luca Greco describes the use of fragment identifiers (aka hashtags) appended to the URL to switch and transition content via CSS alone, which we happily adopted.

    Another valuable source of ideas was a list of GAIA building blocks on Mozilla’s website, which has since been replaced by the even more useful Building Firefox OS site.

    In effect, we ended up thinking in terms of screens: each physically a <div> whose visibility and transitions are governed by :target CSS selectors that draw on the browser location’s hashtag. Luckily, there’s also the hashchange event that we could additionally listen to in order to handle the app-level aspects of such screen changes in JavaScript.

    Our main HTML and CSS structure hence looked like this:
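The actual markup isn’t reproduced here, but the idea can be sketched roughly like this, with hypothetical screen ids and class names (a minimal illustration of the :target technique, not Time Out’s real structure):

```html
<!-- Hypothetical sketch: one <div> per screen, switched purely via :target -->
<section id="screenContainer">
  <div id="home" class="screen">…</div>
  <div id="map" class="screen">…</div>
  <div id="detail" class="screen">…</div>
</section>

<style>
  /* Hide every screen; show the one addressed by the location hash,
     e.g. index.html#map. A real app would animate opacity or transform
     rather than toggling display. */
  .screen { display: none; }
  .screen:target { display: block; }
</style>
```

Navigating to #map then makes that screen the :target, so the visibility switch itself needs no JavaScript at all.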

    And a menu

    We modeled the drawer menu very similarly, except that it sits in a <nav> element on the same level as the <section> container holding all the screens. Its activation and deactivation works by catching the menu icon clicks, then actively changing the screen container’s data-state attribute from JS, which triggers the corresponding CSS3 slide-in / slide-out transition (of the screen container, revealing the menu beneath).

    This served as our “Hello, World!” test for CSS3-based UI performance on low-end devices, plus as a test case for combining presentation-level CSS3 automation with app-level explicit status handling. We took down a “yes” for both.

    UI

    By the time we had put together a dummy around these concepts, the first design mockups from Time Out came in so that we could start to implement the front end and think about connecting it to the data sources.

    For presentation, we tried hard to keep the HTML and CSS to the absolute minimum, Mozilla’s GAIA examples once more being a very valuable source of ideas.

    Again, targeting Firefox OS alone allowed us to break free of the backwards compatibility hell that we were still living in, desktop-wise. No one would ask us Will it display well in IE8? or worse things. We could finally use real <section>, <nav>, <header>, and <menu> tags instead of an army of different classes of <div>. What a relief!

    The clear, rectangular, flat and minimalistic design we got from Time Out also did its part to keep the UI HTML simple and clean. After we were done with creating and styling the UI for 15 screens, our HTML had only ~250 lines. We later improved that to 150 while extending the functionality, but that’s a different story.

    Speaking of styling, not everything that had looked good on desktop Firefox even in its responsive design view displayed equally well on actual mobile devices. Some things that we fought with and won:

    Scale: The app looked quite different when viewed on the reference device (a TurkCell branded ZTE device that Mozilla had sent us for testing) and on our brand new Nexus 4s:

    After a lot of experimenting, tearing out some hair and looking at how others had addressed graceful, proportional scaling for a consistent look & feel across resolutions, we stumbled upon this magic incantation:

    <meta name="viewport" content="user-scalable=no, initial-scale=1,
    maximum-scale=1, width=device-width" />

    What it does, to quote an article at Opera, is to tell the browser that there is “No scaling needed, thank you very much. Just make the viewport as many pixels wide as the device screen width”. It also prevents accidental scaling while the map is zoomed. There is more information on the topic at MDN.

    Then there are things that necessarily get pixelated when scaled up to high resolutions, such as the API based venue images. Not a lot we could do about that. But we could at least make the icons and logo in the app’s chrome look nice in any resolution by transforming them to SVG.

    Another issue on mobile devices was that users have to touch the content in order to scroll it, so we wanted to prevent the automatic highlighting that comes with that:

    li, a, span, button, div {
        outline: none;
        -moz-tap-highlight-color: transparent;
        -moz-user-select: none;
        -moz-user-focus: ignore;
    }

    We’ve since been warned that suppressing the default highlighting can be an issue in terms of accessibility, so you might want to consider this carefully.

    Connecting to the live data sources

    So now we had the app’s presentational base structure and the UI HTML / CSS in place. It all looked nice with dummy data, but it was still dead.

    Trouble with bringing it to life was that Time Out was in the middle of a big project to replace its legacy API with a modern Graffiti based service and thus had little bandwidth for catering to our project’s specific needs. The new scheme was still prototypical and quickly evolving, so we couldn’t build against it.

    The legacy construct already comprised a proxy that wrapped the raw API into something more suitable for consumption by their iOS and Android apps, but after close examination we found we’d better re-re-wrap that on the fly in PHP, for a couple of purposes:

    • Adding CORS support to allow the cross-origin requests, with the API and the app living in different subdomains of timeout.com,
    • stripping the API output down to what the FFOS app really needed, which we could see would reduce bandwidth and increase speed by an order of magnitude,
    • laying the foundation for harvesting API based data for offline use, which we already knew we’d need to do later

    As an alternative to server-side CORS support, one could also think of using the SystemXHR API. It is a mighty and potentially dangerous tool however. We also wanted to avoid any needless dependency on FFOS-only APIs.

    So while the approach wasn’t exactly future-proof, it helped us a lot to get to results quickly, because the endpoints that the app was calling were entirely of our own choice and making, so we could adapt them as needed without losing time in communication.

    Populating content elements

    For all things dynamic and API-driven, we used the same approach to making it visible in the app:

    • Have a simple, minimalistic, empty, hidden, singleton HTML template,
    • clone that template (N-fold for repeated elements),
    • ID and fill the clone(s) with API-based content,
    • for super simple elements, such as <li>s, skip the cloning and whip up the HTML on the fly while filling.

    As an example, let’s consider the filters for finding venues. Cuisine is a suitable filter for restaurants, but certainly not for museums. Same is true for filter values. There are vegetarian restaurants in Barcelona, but certainly no vegetarian bars. So the filter names and lists of possible values need to be asked of the API after the venue type is selected.

    In the UI, the collapsible category filter for bars & pubs looks like this:

    The template for one filter is a direct child of the one and only

    <div id="templateContainer">

    which serves as our central template repository for everything cloned and filled at runtime and whose only interesting property is being invisible. Inside it, the template for search filters is:

    <div id="filterBoxTemplate">
      <span></span>
      <ul></ul>
    </div>

    So for each filter that we get for any given category, all we had to do was clone, label, and then fill this template:

    $('#filterBoxTemplate')
        .clone()
        .attr('id', filterItem.id)
        .appendTo('#categoryResultScreen .filter-container');
    ...
    $('#' + filterItem.id).children('.filter-button').html(filterItem.name);

    As you certainly guessed, we then had to call the API once again for each filter in order to learn about its possible values, which were then rendered into <li> elements within the filter’s <ul> on the fly:

    $("#" + filterId).children('.filter_options').html(
    '<li><span>Loading ...</span></li>');
    
    apiClient.call(filterItem.api_method, function (filterOptions)
    {
      ...
      $.each(filterOptions, function(key, option)
      {
        var entry = $('<li filterId="' + option.id + '"><span>'
          + option.name + '</span></li>');
    
    if (selectedOptionId && selectedOptionId == option.id)
        {
          entry.addClass('filter-selected');
        }
    
        $("#" + filterId).children('.filter_options').append(entry);
      });
    ...
    });

    DOM based caching

    To save bandwidth and increase responsiveness in on-line use, we took this simple approach a little further and consciously stored more application level information in the DOM than needed for the current display if that information was likely needed in the next step. This way, we’d have easy and quick local access to it without calling – and waiting for – the API again.

    The technical way we did so was a funny hack. Let’s look at the transition from the search result list to the venue detail view to illustrate:

    As for the filters above, the screen class for the detailView has an init() method that populates the DOM structure based on API input as encapsulated on the application level. The trick now is, while rendering the search result list, to register an anonymous click handler for each of its rows, each of which – through JavaScript closure magic – captures its own venue object, the very one used to render that row:

    renderItems: function (itemArray)
    {
      ...
    
      $.each(itemArray, function(key, itemData)
      {        
        var item = screen.dom.resultRowTemplate.clone().attr('id', 
          itemData.uid).addClass('venueinfo').click(function()
        {
          $('#mapScreen').hide();
          screen.showDetails(itemData);
        });
    
        $('.result-name', item).text(itemData.name);
        $('.result-type-label', item).text(itemData.section);
        $('.result-type', item).text(itemData.subSection);
    
        ...
    
        listContainer.append(item);
      });
    },
    
    ...
    
    showDetails: function (venue)
    {
      require(['screen/detailView'], function (detailView)
      {
        detailView.init(venue);
      });
    },

    In effect, the data for rendering each venue’s detail view is stored in the DOM – not in hidden elements or custom attributes of the node objects, but conveniently inside the closure-based click event handlers of the result list rows. The added benefit is that the data never needs to be explicitly read back out: it actively feeds itself into the venue details screen as soon as a row receives a touch event.
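The capture mechanism can be sketched in plain JavaScript (the venue data here is made up): because the iteration callback receives each item as a function parameter, every click handler closes over its own distinct `itemData`, unlike a classic `var` loop where all handlers would share one variable.

```javascript
// Sketch (venue data hypothetical): each callback invocation gets its own
// itemData parameter, so every handler remembers "its" venue.
var venues = [{ name: 'Bar A' }, { name: 'Bar B' }];
var handlers = [];

venues.forEach(function (itemData) {
  // In the app, this is the row's click handler calling screen.showDetails(itemData).
  handlers.push(function () { return itemData.name; });
});

console.log(handlers[0]()); // "Bar A"
console.log(handlers[1]()); // "Bar B"
```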

    And dummy feeds

    Finishing the app before MWC 2013 was pretty much a race against time, both for us and for Time Out’s API folks, who had an entirely different and equally – if not more – sporting thing to do. They therefore had very limited time for adding to the (legacy) API that we were building against. For one data feed, this meant that we had to resort to including static JSON files in the app’s manifest and distribution, then use relative, self-referencing URLs as fake API endpoints. The illustrated list of top venues on the app’s main screen was driven this way.

    Not exactly nice, but much better than throwing static content into the HTML! Also, it kept the display code already fit for switching to the dynamic data source that eventually materialized later, and compatible with our offline data caching strategy.

    As the lack of live data on top venues then extended right to their teaser images, we made the latter physically part of the JSON dummy feed. In Base64 :) But even the low-end reference device did a graceful job of handling this huge load of ASCII garbage.

    State preservation

    We had a whopping 5M of local storage to spare, and different plans already (as well as much higher needs) for storing the map and application data for offline use. So what to do with this liberal and easily accessed storage location? We thought we could at least preserve the current application state here, so you’d find the app exactly as you left it when you returned to it.

    Map

    A city guide is the very showcase of an app that’s not only geo aware but geo centric. Maps fit for quick rendering and interaction in both online and offline use were naturally a paramount requirement.

    After looking around what was available, we decided to go with Leaflet, a free, easy to integrate, mobile friendly JavaScript library. It proved to be really flexible with respect to both behaviour and map sources.

    With its support for pinching, panning and graceful touch handling plus a clean and easy API, Leaflet made us arrive at a well-usable, decent-looking map with moderate effort and little pain:

    For a different project, we later rendered the OSM vector data for most of Europe into terabytes of PNG tiles in cloud storage using on-demand cloud power. Which we’d recommend as an approach if there’s a good reason not to rely on 3rd party hosted apps, as long as you don’t try this at home; Moving the tiles may well be slower and more costly than their generation.

    But as time was tight before the initial release of this app, we just – legally and cautiously(!) – scraped ready-to-use OSM tiles off MapQuest.com.

    The packaging of the tiles for offline use was rather easy for Barcelona because about 1000 map tiles are sufficient to cover the whole city area up to street level (zoom level 16). So we could add each tile as a single line in the manifest.appcache file. The resulting, fully automatic, browser-based download on first use was only 10M.

    This left us with a lot of lines like

    /mobile/maps/barcelona/15/16575/12234.png
    /mobile/maps/barcelona/15/16575/12235.png
    ...

    in the manifest and wishing for a $GENERATE clause as for DNS zone files.
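Lacking such a clause, a small script can generate the lines instead. A sketch (the tile ranges are hypothetical, not the real Barcelona values):

```javascript
// Generate appcache entries for a rectangular range of map tiles at one
// zoom level. Paths follow the scheme shown above; the ranges are made up.
function generateTileEntries(city, zoom, xRange, yRange) {
  var lines = [];
  for (var x = xRange[0]; x <= xRange[1]; x++) {
    for (var y = yRange[0]; y <= yRange[1]; y++) {
      lines.push('/mobile/maps/' + city + '/' + zoom + '/' + x + '/' + y + '.png');
    }
  }
  return lines;
}

console.log(generateTileEntries('barcelona', 15, [16575, 16576], [12234, 12235]).join('\n'));
```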

    As convenient as it may seem to throw all your offline dependencies’ locations into a single file and just expect them to be available as a consequence, there are significant drawbacks to this approach. The article Application Cache is a Douchebag by Jake Archibald summarizes them and some help is given at Html5Rocks by Eric Bidleman.

    We found at the time that the degree of control over the current download state was limited, and that the process of resuming the app cache load, in case the initial time users spent in our app didn’t suffice for it to complete, was rather tiresome.

    For Barcelona, we resorted to marking the cache state as dirty in Local Storage and clearing that flag only after we received the updateready event of the window.applicationCache object. But in the later generalization to more cities, we moved the map away from the app cache altogether.
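A minimal sketch of that dirty-flag bookkeeping (the storage key and the dependency injection are hypothetical; in the app the real localStorage and window.applicationCache objects would be passed in):

```javascript
// Flag the cache as dirty up front, and clear the flag only once the app
// cache fires `updateready`. Storage and cache are injected so the logic
// can also run outside a browser.
function trackCacheState(storage, appCache) {
  storage.mapCacheDirty = 'true';
  appCache.addEventListener('updateready', function () {
    delete storage.mapCacheDirty;
  });
}

// In the app: trackCacheState(localStorage, window.applicationCache);
```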

    Offline storage

    The first step towards offline-readiness was obviously to know if the device was online or offline, so we’d be able to switch the data source between live and local.

    This sounds easier than it was. Even with cross-platform considerations aside, neither the online state property (window.navigator.onLine), the “online” and “offline” events fired on the <body> element on state changes, nor the navigator.connection object that was supposed to report the on/offline state plus bandwidth and more, turned out to be reliable enough.

    Standardization is still ongoing around all of the above, and some implementations are labeled as experimental for a good reason :)

    We ultimately ended up writing a NetworkStateService class that uses all of the above as hints, but ultimately and very pragmatically convinces itself with regular HEAD requests to a known live URL that no event went missing and the state is correct.
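A stripped-down sketch of that final check (URL and timeout are hypothetical; the real NetworkStateService also folds in the event and property hints mentioned above):

```javascript
// Confirm the online state with a HEAD request to a known live URL.
function checkOnline(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('HEAD', url, true);
  xhr.timeout = 5000;
  xhr.onload = function () {
    callback(xhr.status >= 200 && xhr.status < 400);
  };
  xhr.onerror = xhr.ontimeout = function () {
    callback(false);
  };
  xhr.send();
}

// Usage: checkOnline('/ping', function (online) { /* switch data source */ });
```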

    That settled, we still needed to make the app work in offline mode. In terms of storage opportunities, we were looking at:

    • App / app cache – i.e. everything listed in the file that the value of appcache_path in the app’s webapp.manifest points to, and which is therefore downloaded onto the device when the app is installed.
      Capacity: <= 50M; on other platforms (e.g. iOS/Safari), user interaction is required from 10M+. The recommendation from Mozilla was to stay under 2M.
      Updates: hard – requires user interaction / consent, and only a wholesale update of the entire app is possible.
      Access: by (relative) path.
      Typical use: HTML, JS, CSS, static assets such as UI icons.

    • LocalStorage
      Capacity: 5M on UTF-8 platforms such as FFOS, 2.5M on UTF-16 platforms such as Chrome. Details here.
      Updates: anytime, from the app.
      Access: by name.
      Typical use: key-value storage of app status, user input, or the entire data of modest apps.

    • Device Storage (often an SD card)
      Capacity: limited only by hardware.
      Updates: anytime, from the app (unless mounted as a USB drive when connected to a desktop computer).
      Access: by path, through the Device Storage API.
      Typical use: big things.

    • FileSystem API
      Bad idea.

    • Database (IndexedDB / WebSQL)
      Capacity: unlimited on FFOS; mileage on other platforms varies.
      Updates: anytime, from the app.
      Access: quick, and by arbitrary properties.
      Typical use: databases :)

    Some aspects of where to store the data for offline operation were decided upon easily, others not so much:

    • the app, i.e. the HTML, JS, CSS, and UI images would go into the app cache
    • state would be maintained in Local Storage
    • map tiles again in the app cache. Which was a rather dumb decision, as we learned later. Barcelona up to zoom level 16 was 10M, but later cities were different. London was >200M and even reduced to max. zoom 15 still worth 61M. So we moved that to Device Storage and added an actively managed download process for later releases.
    • The venue information, i.e. all the names, locations, images, reviews, details, showtimes etc. of the places that Time Out shows in Barcelona. Seeing that we needed lots of space, efficient and arbitrary access plus dynamic updates, this had to go into the Database. But how?

    The state of affairs across the different mobile HTML5 platforms was confusing at best, with Firefox OS already supporting IndexedDB, but Safari and Chrome (considering earlier versions up to Android 2.x) still relying on a swamp of similar but different sqlite / WebSQL variations.

    So we cried for help and received it, as always when we had reached out to the Mozilla team. This time in the form of a pointer to pouchDB, a JS-based DB layer that at the same time wraps away the different native DB storage engines behind a CouchDB-like interface and adds super easy on-demand synchronization to a remote CouchDB-hosted master DB out there.

    Back then it was still in pre-alpha state, but already very usable. There were some drawbacks, such as the need for adding a shim for WebSQL-based platforms. That in turn meant we couldn’t rely on the storage being 8-bit clean, so we had to base64-encode our binaries, most of all the venue images. Not exactly pouchDB’s fault, but it still blew up the size.
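The size cost of that base64 detour is easy to see. A sketch (in the browser this would be btoa over a binary string; a Buffer-based stand-in is used here for illustration):

```javascript
// Base64 inflates binary payloads by roughly a third: every 3 input
// bytes become 4 output characters.
function toBase64(bytes) {
  return Buffer.from(bytes).toString('base64');
}

console.log(toBase64([72, 105]));        // "SGk=" ("Hi")
console.log(toBase64([1, 2, 3]).length); // 4 characters for 3 bytes
```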

    Harvesting

    The DB platform being chosen, we next had to think how we’d harvest all the venue data from Time Out’s API into the DB. There were a couple of endpoints at our disposal. The most promising for this task was proximity search with no category or other restrictions applied, as we thought it would let us harvest a given city square by square.

    The trouble with distance metrics, however, is that they produce circles rather than squares, so step 1 of our thinking would miss venues in the corners of our theoretical grid,

    while extending the radius to (half of) the grid’s diagonal would produce redundant hits and necessitate deduplication.

    In the end, we simply searched by proximity to a city center location, paginating through the result indefinitely, so that we could be sure to encounter every venue, and only once:
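The loop itself is simple; a sketch with a pluggable page fetcher (function and field names are hypothetical, the real harvester was PHP-based as described below):

```javascript
// Paginate one proximity search from the city centre until the API
// returns an empty page, visiting every venue exactly once.
function harvest(fetchPage, onVenue, done) {
  (function next(page) {
    fetchPage(page, function (venues) {
      if (venues.length === 0) { return done(); }
      venues.forEach(onVenue);
      next(page + 1);
    });
  })(0);
}
```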

    Technically, we built the harvester in PHP as an extension to the CORS-enabled, result-reducing API proxy for live operation that was already in place. It fed the venue information into the master CouchDB co-hosted there.

    Time left before MWC 2013 getting tight, we didn’t spend much time on a sophisticated data organization and just pushed the venue information into the DB as one table per category, one row per venue, indexed by location.

    This allowed us to support category based and area / proximity based (map and list) browsing. We developed an idea how offline keyword search might be made possible, but it never came to that. So the app simply removes the search icon when it goes offline, and puts it back when it has live connectivity again.

    Overall, the app now

    • supported live operation out of box,
    • checked its synchronization state to the remote master DB on startup,
    • asked, if needed, permission to make the big (initial or update) download,
    • supported all use cases but keyword search when offline.

    The involved components and their interactions are summarized in this diagram:

    Organizing vs. Optimizing the code

    For the development of the app, we maintained the code in a well-structured and extensive source tree, with e.g. each JavaScript class residing in a file of its own. Part of the source tree is shown below:

    This was, however, not ideal for deployment of the app, especially as a hosted Firefox OS app or mobile web site, where download would be the faster, the fewer and smaller files we had.

    Here, Require.js came to our rescue.

    It provides a very elegant way of smart and asynchronous requirement handling (AMD), but more importantly for our purpose, comes with an optimizer that minifies and combines the JS and CSS source into one file each:

    To enable asynchronous dependency management, modules and their requirements must be made known to the AMD API through declarations, essentially of a function that returns the constructor for the class you’re defining.

    Applied to the search result screen of our application, this looks like this:

    define
    (
      // new class being defined
      'screensSearchResultScreen',
    
      // its dependencies
      ['screens/abstractResultScreen', 'app/applicationController'],
    
      // its anonymous constructor
      function (AbstractResultScreen, ApplicationController)
      {
        var SearchResultScreen = $.extend(true, {}, AbstractResultScreen,
        {
          // properties and methods
          dom:
          {
            resultRowTemplate: $('#searchResultRowTemplate'),
            list: $('#search-result-screen-inner-list'),
            ...
          }
          ...
        }
        ...
    
        return SearchResultScreen;
      }
    );

    For executing the optimization step in the build & deployment process, we used Rhino, Mozilla’s Java-based JavaScript engine:

    java -classpath ./lib/js.jar:./lib/compiler.jar   
      org.mozilla.javascript.tools.shell.Main ./lib/r.js -o /tmp/timeout-webapp/
      $1_config.js

    CSS bundling and minification is supported, too, and requires just another call with a different config.
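For reference, the build config passed to r.js looks roughly like this (paths hypothetical; see the r.js documentation for the full option set):

```javascript
// Minimal r.js build profile: bundle all AMD modules reachable from
// app/main into one minified file.
({
  baseUrl: './js',
  name: 'app/main',
  out: './build/app.min.js',
  optimize: 'uglify'
})
```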

    Outcome

    Four weeks had been a very tight timeline to start with, and we had completely underestimated the intricacies of taking HTML5 to a mobile and offline-enabled context, and wrapping up the result as a Marketplace-ready Firefox OS app.

    Debugging capabilities in Firefox OS, especially on the devices themselves, were still at an early stage (compared to clicking about:app-manager today). So the lights in our Cologne office remained lit until pretty late then.

    Having built the app with a clear separation between functionality and presentation also turned out a wise choice when a week before T0 new mock-ups for most of the front end came in :)

    But it was great and exciting fun, we learned a lot in the process, and ended up with some very useful shiny new tools in our box. Often based on pointers from the super helpful team at Mozilla.

    Truth be told, we had started into the project with mixed expectations as to how close to the native app experience we could get. We came back fully convinced and eager for more.

    In the end, we made the deadline, and as a fellow hacker you can probably imagine our relief. The app even received its 70 seconds of fame when Jay Sullivan briefly demoed it at Mozilla’s MWC 2013 press conference as a showcase for HTML5’s and Firefox OS’s offline readiness (Time Out piece at 7:50). We were so proud!

    If you want to play with it, you can find the app in the marketplace or go ahead and try it online (no offline mode then).

    Since then, the Time Out Firefox OS app has continued to evolve, and we as a team have used the chance to continue to play with and build apps for FFOS. To some degree, the reusable part of this has become a framework in the meantime, but that’s a story for another day.

    We’d like to thank everyone who helped us along the way, especially Taylor Wescoatt, Sophie Lewis and Dave Cook from Time Out, Desigan Chinniah and Harald Kirschner from Mozilla, who were always there when we needed help, and of course Robert Nyman, who patiently coached us through writing this up.

  3. localForage: Offline Storage, Improved

    Web apps have had offline capabilities like saving large data sets and binary files for some time. You can even do things like cache MP3 files. Browser technology can store data offline and plenty of it. The problem, though, is that the technology choices for how you do this are fragmented.

    localStorage gets you really basic data storage, but it’s slow and can’t handle binary blobs. IndexedDB and WebSQL are asynchronous, fast, and support large data sets, but their APIs aren’t very straightforward. Still, neither IndexedDB nor WebSQL has support from all of the major browser vendors, and that doesn’t seem likely to change in the near future.

    If you need to write a web app with offline support and don’t know where to start, then this is the article for you. If you’ve ever tried to start working with offline support but it made your head spin, this article is for you too. Mozilla has made a library called localForage that makes storing data offline in any browser a much easier task.

    around is an HTML5 Foursquare client that I wrote that helped me work through some of the pain points of offline storage. We’re still going to walk through how to use localForage, but there’s some source code for those of you who like to learn by perusing it.

    localForage is a JavaScript library that offers the very simple localStorage API for offline storage. It gives you, essentially, the localStorage features of get, set, remove, clear, and length, but adds:

    • an asynchronous API with callbacks
    • IndexedDB, WebSQL, and localStorage drivers (managed automatically; the best driver is loaded for you)
    • Blob and arbitrary type support, so you can store images, files, etc.
    • support for ES6 Promises

    The inclusion of IndexedDB and WebSQL support allows you to store more data for your web app than localStorage alone would allow. The non-blocking nature of their APIs makes your app faster by not hanging the main thread on get/set calls. Support for promises makes it a pleasure to write JavaScript without callback soup. Of course, if you’re a fan of callbacks, localForage supports those too.

    Enough talk; show me how it works!

    The traditional localStorage API, in many regards, is actually very nice; it’s simple to use, doesn’t enforce complex data structures, and requires zero boilerplate. If you had configuration information in an app that you wanted to save, all you need to write is:

    // Our config values we want to store offline.
    var config = {
        fullName: document.getElementById('name').getAttribute('value'),
        userId: document.getElementById('id').getAttribute('value')
    };
     
    // Let's save it for the next time we load the app.
    localStorage.setItem('config', JSON.stringify(config));
     
    // The next time we load the app, we can do:
    var config = JSON.parse(localStorage.getItem('config'));

    Note that we need to save values in localStorage as strings, so we convert to/from JSON when interacting with it.

    This appears delightfully straightforward, but you’ll immediately notice a few issues with localStorage:

    1. It’s synchronous. We wait until the data has been read from the disk and parsed, regardless of how large it might be. This slows down our app’s responsiveness. This is especially bad on mobile devices; the main thread is halted until the data is fetched, making your app seem slow and even unresponsive.

    2. It only supports strings. Notice how we had to use JSON.parse and JSON.stringify? That’s because localStorage only supports values that are JavaScript strings. No numbers, booleans, Blobs, etc. This makes storing numbers or arrays annoying, but effectively makes storing Blobs impossible (or at least VERY annoying and slow).
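The strings-only behaviour is easy to demonstrate (a sketch using an in-memory stand-in for localStorage, so it runs anywhere):

```javascript
// localStorage coerces every value to a string on write; numbers only
// survive a round trip if you JSON-encode them yourself.
var store = {
  setItem: function (key, value) { this['_' + key] = String(value); },
  getItem: function (key) { return this['_' + key]; }
};

store.setItem('count', 42);
console.log(typeof store.getItem('count')); // "string", not "number"

store.setItem('count', JSON.stringify(42));
console.log(typeof JSON.parse(store.getItem('count'))); // "number"
```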

    A better way with localForage

    localForage gets past both of these problems by using asynchronous storage backends behind localStorage’s simple API. Compare using IndexedDB to localForage for the same bit of data:

    IndexedDB Code

    // IndexedDB.
    var db;
    var dbName = "dataspace";
     
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
     
    var request = indexedDB.open(dbName, 2);
     
    request.onerror = function(event) {
        // Handle errors.
    };
    request.onupgradeneeded = function(event) {
        db = event.target.result;
     
        var objectStore = db.createObjectStore("users", { keyPath: "id" });
     
        objectStore.createIndex("fullName", "fullName", { unique: false });
     
        objectStore.transaction.oncomplete = function(event) {
            var userObjectStore = db.transaction("users", "readwrite").objectStore("users");
        }
    };
     
    // Once the database is created, let's add our user to it...
     
    var transaction = db.transaction(["users"], "readwrite");
     
    // Do something when all the data is added to the database.
    transaction.oncomplete = function(event) {
        console.log("All done!");
    };
     
    transaction.onerror = function(event) {
        // Don't forget to handle errors!
    };
     
    var objectStore = transaction.objectStore("users");
     
    for (var i in users) {
        var request = objectStore.add(users[i]);
        request.onsuccess = function(event) {
            // Contains our user info.
            console.log(event.target.result);
        };
    }

    WebSQL wouldn’t be quite as verbose, but it would still require a fair bit of boilerplate. With localForage, you get to write this:

    localForage Code

    // Save our users.
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
    localForage.setItem('users', users, function(result) {
        console.log(result);
    });

    That was a bit less work.

    Data other than strings

    Let’s say you want to download a user’s profile picture for your app and cache it for offline use. It’s easy to save binary data with localForage:

    // We'll download the user's photo with AJAX.
    var request = new XMLHttpRequest();
     
    // Let's get the first user's photo.
    request.open('GET', "/users/1/profile_picture.jpg", true);
    request.responseType = 'arraybuffer';
     
    // When the AJAX state changes, save the photo locally.
    request.addEventListener('readystatechange', function() {
        if (request.readyState === 4) { // readyState DONE
            // We store the binary data as-is; this wouldn't work with localStorage.
            localForage.setItem('user_1_photo', request.response, function() {
                // Photo has been saved, do whatever happens next!
            });
        }
    });
     
    request.send();

    Next time we can get the photo out of localForage with just three lines of code:

    localForage.getItem('user_1_photo', function(photo) {
        // Create a data URI or something to put the photo in an img tag or similar.
        console.log(photo);
    });

    Callbacks and promises

    If you don’t like using callbacks in your code, you can use ES6 Promises instead of the callback argument in localForage. Let’s get that photo from the last example, but use promises instead of a callback:

    localForage.getItem('user_1_photo').then(function(photo) {
        // Create a data URI or something to put the photo in an <img> tag or similar.
        console.log(photo);
    });

    Admittedly, that’s a bit of a contrived example, but around has some real-world code if you’re interested in seeing the library in everyday usage.

    Cross-browser support

    localForage supports all modern browsers. IndexedDB is available in all modern browsers aside from Safari (IE 10+, IE Mobile 10+, Firefox 10+, Firefox for Android 25+, Chrome 23+, Chrome for Android 32+, and Opera 15+). Meanwhile, the stock Android Browser (2.1+) and Safari use WebSQL.

    In the worst case, localForage will fall back to localStorage, so you can at least store basic data offline (though not Blobs, and much more slowly). It at least takes care of automatically converting your data to/from JSON strings, which is how localStorage needs data to be stored.

    Learn more about localForage on GitHub, and please file issues if you’d like to see the library do more!

  4. Monster Madness – creating games on the web with Emscripten

    When our engineering teams at Trendy Entertainment & Nom Nom Games decided on the strategy of developing one of our new Unreal Engine 3 games — Monster Madness Online — as a cross-platform title, we knew that a frictionless multiplayer web browser version would be central to this experience. The big question, however, was determining what essential technologies to utilize in order to bring our game onto the web. As a C++ oriented developer, we determined quickly that rewriting the game engine from the ground-up was out of the question. We’d need a solution that would allow us to port our existing code in an efficient manner into a format usable in the browser…

    TL;DR? Watch the video!

    Playing the Field

    We looked hard at the various options in front of us: FlasCC (a GCC Flash compiler), Google’s NaCl, a custom native C++ extension, or Mozilla’s Emscripten & asm.js.

    In our tests, Flash ran slowly and had inconsistent behaviors between Pepper (Chrome) and Adobe’s plugin version. Combined with the increasingly onerous plugin requirement, we opted to look elsewhere for a more seamless, forward-thinking approach.

    NaCl had the issue of requiring a walled-garden distribution site that would separate us from direct contact with our users, and of being processor-specific. PNaCl eliminated the walled-garden requirement and added dynamic code compilation support, but still had the issues of processor-specific code (necessitating, in our view, device-specific testing) and a potentially long startup time, as the code would be linked on first run. Finally, working only in Chrome would be a dealbreaker for our desire to have our game run in all major browsers.

    A custom plugin/extension with C++ would require lots of testing & maintenance efforts on our part to run across different browsers, processor architectures, and operating systems, and such an installation requirement would likely scare away many potential players.

    As it turned out, for our team’s purposes the Emscripten compiler & asm.js proved to be the best solution to these challenges, and when combined with a set of other new-ish Web API’s, enabled the browser to become a fully featured platform for instant high-end 3D gaming. This just took a little trial & error to figure out exactly how we’d piece it together…and that’s what we’ll be reviewing here!

    First Steps into a Brave New World

    We Trendy game engineers are primarily old-school C++ programmers, so it was something of a revelation that Emscripten could compile our existing application (built on Epic Games’ Unreal Engine 3) into asm.js-optimized JavaScript with little to no changes.

    The primary Unreal Engine 3-specific code tweaks that were necessary to get the project to compile & run with Emscripten, were essentially… 1, 2, 3:

    1.
    // Emscripten needs 4 byte alignment
    FNameEntry* Allocate( INT Size )
    {
        #if EMSCRIPTEN
           Size = Align( Size, 4 );
        #endif
     
        ......
    }
     
    2.
    // Script execution: llvm needs aligned data
    #if EMSCRIPTEN 
        #define XFER(T)
        {
            T Temp;
     
            if (!Ar.IsLoading())
            {
                appMemcpy(&Temp, &Script(iCode), sizeof(T));
            }
     
            Ar << Temp;
     
            if (!Ar.IsSaving())
            {
                appMemcpy(&Script(iCode), &Temp, sizeof(T));
            }
     
            iCode += sizeof(T);
        }
    #else
        #define XFER(T) { Ar << *(T*)&Script(iCode); iCode += sizeof(T); }
    #endif
     
    3.
    // This function needs to synchronously complete IO requests for single-threaded Emscripten IO to work, so added a ServiceRequestsSynchronously() to the end of it which flushes & blocks till the IO request finishes.
    FAsyncIOSystemBase::QueueIORequest()

    No really, that was about it! Within a day of fiddling with it, we had our game’s JavaScript ‘executable’ compiled and running in the browser. Crashing, due to not having a graphics API implemented – but running, with log output! Thankfully, we already had Unreal Engine 3’s OpenGL ES 2 version of the rendering subsystem ready to utilize, so porting the renderer to WebGL only took another day.

    WebGL appeared to offer essentially a superset of the OpenGL ES 2 features, so the shaders and methods matched up by simply changing some API calls. In fact, we were able to make improvements by using WebGL’s floating point render targets for certain postprocessing effects, such as edge outlining and dynamic shadows.

    EmscriptenGamePost
    Postprocessing makes everything prettier!

    But how’s it run?

    Now we had something rendering in the browser, and with a quick change to capture input, we could start playing the game and analyzing its performance. What we found was very encouraging: straight ‘out of the box’, in Firefox the asm.js version of the game was getting nearly 33% of the performance of the native executable. And this was comparing a single-threaded web application to the multi-threaded native executable (so really, not a fair comparison! ;). This was about 2x the performance we saw with our quick Flash port (which we still utilize as a fallback for older browsers that don’t yet support asm.js, though we eventually hope to deprecate it entirely).

    Its performance in Chrome was less astonishing, more towards 20% of native performance, but still within our target margins: namely, can it run on a 2011-model Macbook Air at 45-60 FPS (with Vsync disabled)? The answer, thankfully, was yes. We hope Google will continue to improve the performance of asm.js on their browser over time. But as it currently stands, we believe unless you’re making the browser version of ‘Crysis’ with this tech (which may not be far off), it seems you have enough performance even in Chrome to do most kinds of web games.

    BrowserFramerate
    60 FPS on an old Macbook Air

    Putting the Pieces into Place

    So within a week from starting, we had turned our Unreal Engine 3 PC game into a well-running, graphically-rich web game. But where to take it from here? Well, it’d still need: Audio, Networking, Streaming, and Storage. Let’s discuss the various techniques used for each of these systems.

    Audio

    This was a no-brainer, as there is only really one robust standardized web audio system apart from Flash: WebAudio. Again, this API matched up pretty well to its mobile cousin, OpenSL, for which we already had an integration. So once we switched out the various calls, we had sound playing in the browser.

    There was an apparent issue in Mac Chrome where sounds flagged “looping” would sometimes never become destroyed, so we implemented a Chrome-specific hack to manually loop the sound, and filed a bug report with Google. Ah well, one thing we’ve seen with browser APIs is that there’s no 100% guarantee every browser will implement the functionality to perfect specification, but it gets the job done!
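    The Chrome-specific hack boils down to restarting the sound by hand from the source node’s onended callback instead of relying on the looping flag. A minimal sketch (the function and variable names here are ours, not our actual engine code):

```javascript
// Manually loop a Web Audio buffer: when one pass ends, start a fresh
// AudioBufferSourceNode rather than setting source.loop = true.
function playLooped(audioCtx, buffer) {
  var source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  // Schedule the next pass by hand once this one finishes.
  source.onended = function () {
    playLooped(audioCtx, buffer);
  };
  source.start(0);
  return source; // keep a handle so the game can call source.stop()
}
```

    Stopping the loop just means calling stop() on the returned source and not restarting it.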

    Networking

    This proved a little trickier. First, we investigated WebRTC as used in the Bananabread demo, but WebRTC of course is for browser-to-browser communication which is actually not what we were looking to do. Our online game service uses a server-and-client architecture with centralized infrastructure, and so WebSockets is the API to utilize in that case. The tricky part is that we have to handle all the WebSockets incoming and outgoing data in JavaScript buffers, and then pass that along to the “C++” Emscripten-compiled game.

    With some callbacks, this worked out, but we also had to take our UDP game server code and place the WebSockets TCP-style layer onto it — some iteration was necessary to get the packets to be formatted in exactly the way that WebSockets expects, but once we did that, our browser game was communicating with our backend-hosted Linux dedicated game servers with no problems!
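    The JavaScript glue for this looks roughly like the sketch below; Module, _malloc, _free and _OnPacketReceived stand in for the Emscripten-generated exports, whose real names in our build differ:

```javascript
// Receive WebSockets packets in a JavaScript buffer and hand the bytes to
// the Emscripten-compiled "C++" game via its heap.
function connectToGameServer(url, Module) {
  var socket = new WebSocket(url);
  socket.binaryType = 'arraybuffer'; // we want raw bytes, not strings

  socket.onmessage = function (event) {
    var bytes = new Uint8Array(event.data);
    // Copy the packet into the compiled game's heap...
    var ptr = Module._malloc(bytes.length);
    Module.HEAPU8.set(bytes, ptr);
    // ...and let the compiled networking code process it.
    Module._OnPacketReceived(ptr, bytes.length);
    Module._free(ptr);
  };
  return socket;
}
```

    Outgoing data flows the same way in reverse: the compiled code fills a heap buffer, and a small JS shim reads it out and calls socket.send().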

    Streaming & Storage

    One advantage to being on the web is easy access to the browser’s asynchronous downloading functionalities to stream-in content. We certainly made use of this with our game, with the initial download clocking in at under 10 MB. Everything else streams in on-demand as you play using standard browser http download requests: Textures, Sound Effects, Music, even Skeletal Meshes and Level Packages. But the bigger question is how to reliably store this content. We don’t want to just rely on the browser cache, since it’s not good for guaranteed immediate gameplay loading as we can’t pre-query whether something exists on disk in the regular browser cache or not.

    For this, we used the IndexedDB API, which lets us asynchronously save and retrieve data objects from a secure abstracted storage location. It works in both Chrome and Firefox, though it’s still finicky as the database can occasionally become corrupted (perhaps if terminated during async writes) and has to be regenerated. In the worst case, this simply results in a re-download of content the user already had received.

    We’re currently looking into this issue, but that aside, IndexedDB certainly works well and has the advantage of providing our application standard file IO functionality, useful to store content that we download. (UPDATE: Firefox Nightly build as of 12/10 seems to automatically reset the IndexedDB storage if this happens and it may not recur.)
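    Conceptually, the content cache described above works like this sketch; the 'packages' store name and the callback shape are illustrative, not our actual code:

```javascript
// Serve a content package from IndexedDB if we already have it on disk;
// otherwise stream it over HTTP and store the bytes for next time.
function cachePackage(db, url, callback) {
  var tx = db.transaction(['packages'], 'readonly');
  var get = tx.objectStore('packages').get(url);
  get.onsuccess = function () {
    if (get.result) {
      callback(get.result); // already stored: guaranteed immediate load
      return;
    }
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      var wtx = db.transaction(['packages'], 'readwrite');
      wtx.objectStore('packages').put(xhr.response, url);
      callback(xhr.response);
    };
    xhr.send();
  };
}
```

    Unlike the regular browser cache, this lets us pre-query whether a package exists before the game asks for it.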

    Play it Now, and Embrace the Future!

    We still have more profiling and tweaking to do, as we’re just now starting to use Firefox’s VTune support to symbolically profile the asm.js performance within the browser. Even still, we’re pretty pleased with where things currently stand. But don’t take our word for it, please try it yourselves right here, no installation or sign-up required:

    Try our demo test anonymously In-browser Here!
    (Please bear with us if our game servers limit access under load, we’re still testing our backend scalability!)

    We at Trendy envision a day when anybody can play any game no matter where they are or what device they happen to have, without friction or gateways or middlemen. With the right combination of these cutting-edge web technologies, that day can be today. We hope other enterprising game developers will join us in reaching players directly through the web, which thanks to Emscripten & asm.js, may well become the most powerful and far-reaching “game console” of all!

  5. Building a Notes App with IndexedDB, Redis and Node.js

    In this post, I’ll be talking about how to create a basic note-taking app that syncs local and remote content if you are online and defaults to saving locally if offline.

    notes app sample

    Using Redis on the server-side

    When adding records in Redis, we aren’t working with a relational database like in MySQL or PostgreSQL. We are working with a structure like IndexedDB where there are keys and values. So what do we need when we only have keys and values to work with for a notes app? We need unique ids to reference each note and a hash of the note metadata. The metadata in this example consists of the new unique id, a creation timestamp and the text.

    Below is a way of creating an id with Redis in Node and then saving the note’s metadata.

    // Let's create a unique id for the new note.
    client.incr('notes:counter', function (err, id) {
     
    ...
     
        // All note ids are referenced by the user's email and id.
        var keyName = 'notes:' + req.session.email + ':' + id;
        var timestamp = req.body.timestamp || Math.round(Date.now() / 1000);
     
        // Add the new id to the user's list of note ids.
        client.lpush('notes:' + req.session.email, keyName);
     
        // Add the new note to a hash.
        client.hmset(keyName, {
          id: id,
          timestamp: timestamp,
          text: finalText
        });
     
    ...
     
    });

    This gives us the following key pattern for all notes on the server-side:

    1. notes:counter contains all unique ids starting at 1.
    2. notes:<email> contains all the note ids that are owned by the user. This is a list that we reference when we want to loop through all the user’s notes to retrieve the metadata.
    3. notes:<email>:<note id> contains the note metadata. The user’s email address is used as a way to reference this note to the correct owner. When a user deletes a note, we want to verify that it matches the same email that they are logged in with, so you don’t have someone deleting a note that they don’t own.
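    Reading the notes back follows the same key pattern. A hypothetical sketch using the same node_redis client as above (getAllNotes is our name, not part of the app):

```javascript
// Walk the user's list of note keys, then fetch each note's metadata hash.
function getAllNotes(client, email, callback) {
  // notes:<email> holds the note keys, newest first (lpush).
  client.lrange('notes:' + email, 0, -1, function (err, keyNames) {
    var notes = [];
    var pending = keyNames.length;
    if (pending === 0) { return callback(null, notes); }
    keyNames.forEach(function (keyName) {
      // Each notes:<email>:<id> key is a hash of the note metadata.
      client.hgetall(keyName, function (err, note) {
        notes.push(note);
        if (--pending === 0) { callback(null, notes); }
      });
    });
  });
}
```

    Because the list is scoped to the user’s email, a user can only ever enumerate their own notes.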

    Adding IndexedDB on the client-side

    Working with IndexedDB requires more code than localStorage, but because it is asynchronous, it is a better option for this app. The reason is two-fold:

    1. You don’t want to wait around for all your notes to process before the page renders all elements. Imagine having thousands of notes and having to wait for all of them to loop through before anything on the page appears.
    2. You can’t save note objects as objects – you have to convert them to strings first, which means you will have to convert them back to objects before they are rendered. So something like { id: 1, text: 'my note text', timestamp: 1367847727 } would have to be stringified in localStorage and then parsed back after the fact. Now imagine doing this for a lot of notes.

    Neither makes for an ideal experience for the user – but what if we want to have the ease of localStorage’s API with the asynchronous features of IndexedDB? We can use Gaia’s async_storage.js file to help merge the two worlds.
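    To make point 2 concrete, here is the manual JSON round-trip that localStorage forces on you (a small sketch; the helper names are ours, and storage stands in for window.localStorage):

```javascript
// localStorage only stores strings, so every note object must be
// stringified on write and parsed back on read.
function saveNoteSync(storage, key, note) {
  storage.setItem(key, JSON.stringify(note)); // object -> string
}
function loadNoteSync(storage, key) {
  return JSON.parse(storage.getItem(key));    // string -> object
}
```

    asyncStorage removes both costs: values are stored structured, and the call never blocks rendering.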

    If we’re offline, we need to do two things similar to the server-side:

    1. Save a unique id for the note and apply it in an array of ids. Since we can’t reference a server-side id created by Redis, we’ll use a timestamp.
    2. Save a local version of the note metadata.
    var data = {
      content: rendered,
      timestamp: id,
      text: content
    };
     
    asyncStorage.setItem(LOCAL_IDS, this.localIds, function () {
      asyncStorage.setItem(LOCAL_NOTE + id, data, function () {
        ...
      });
    });

    The structure of the IndexedDB keys is very similar to the Redis ones. The pattern is as follows:

    1. All local ids are saved in a localNoteIds array
    2. All local note objects are saved in note:local:<id>
    3. All remote/synced ids are saved in a noteIds array
    4. All remote/synced note objects are saved in note:<id>
    5. Local notes use a timestamp for their unique id and this is converted to a valid server id once Redis saves the data

    Once we’re online, we can upload the local notes, save the remote ones on the client-side and then delete the local ones.

    Triggering note.js on the client-side

    Whenever we refresh the page, we need to attempt a sync with the server. If we are offline, let’s flag that and only grab what we have locally.

    /**
     * Get all local and remote notes.
     * If online, sync local and server notes; otherwise load whatever
     * IndexedDB has.
     */
    asyncStorage.getItem('noteIds', function (rNoteIds) {
      note.remoteIds = rNoteIds || [];
     
      asyncStorage.getItem('localNoteIds', function (noteIds) {
        note.localIds = noteIds || [];
     
        $.get('/notes', function (data) {
          note.syncLocal();
          note.syncServer(data);
     
        }).fail(function (data) {
          note.offline = true;
          note.load('localNoteIds', 'note:local:');
          note.load('noteIds', 'note:');
        });
      });
    });

    Almost done!

    The code above provides the basics for a CRD notes app with support for local and remote syncing. But we’re not done yet.

    On Safari, IndexedDB is not supported as they still use WebSQL. This means none of our IndexedDB code will work. To make this cross-browser compatible, we need to include a polyfill for browsers that only support WebSQL. Include this before the rest of the code and IndexedDB support should work.

    The Final Product

    You can try out the app at http://notes.generalgoods.net

    The Source Code

    To view the code for this app feel free to browse the repository on Github.

  6. Using IndexedDB API today – the IndexedDB polyfills

    This is a guest post from Parashuram Narasimhan on how to use IndexedDB today.

    Using the polyfills mentioned in this article, web developers can start using IndexedDB APIs in their applications and support a wider range of browsers.

    The IndexedDB API has matured into a stable specification with support by major browser vendors. However, the specification is still not supported in all browsers, making it harder to use in production. There are also some interesting differences in the implementations among browsers that support the specification. This article explores a couple of polyfills that could be leveraged to enable developers to use IndexedDB across different browsers.

    Polyfill using WebSql

    WebSql was one of the first specifications for data storage in the browsers and is supported in some browsers that do not have an implementation of IndexedDB yet. However, the specification is no longer in active maintenance. As mentioned in the document: “it was on the W3C Recommendation track but specification work has stopped. The specification reached an impasse: all interested implementors have used the same SQL backend (Sqlite), but we need multiple independent implementations to proceed along a standardisation path”.

    This polyfill using WebSql utilizes the WebSQL implementations to expose the IndexedDB APIs. To use the polyfill, simply link or include the indexeddb.shim.js file.

    The polyfill assigns window.indexedDB to be window.mozIndexedDB, window.webkitIndexedDB or window.msIndexedDB; if any of those implementations are available. If no implementation is available, the polyfill assigns a ‘window.shimIndexedDB’ to window.indexedDB. Web applications can thus start using window.indexedDB as the starting point for all database operations. Internally, the polyfill uses the WebSql tables to store object store data and borrows heavily from the IndexedDB implementation in Firefox. For example, it has separate tables and databases to maintain a record of the database versions, or the values for object store definitions (IDBObjectStoreParameters). More implementation details about the internals can be found in the blog post on it.
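    The detection logic boils down to something like the following simplified sketch (the real shim also wires up related globals such as IDBKeyRange):

```javascript
// Pick the first available IndexedDB implementation, falling back to the
// WebSql-backed shim when no native one exists.
function pickIndexedDB(win) {
  return win.indexedDB ||
         win.mozIndexedDB ||
         win.webkitIndexedDB ||
         win.msIndexedDB ||
         win.shimIndexedDB; // the WebSql-backed fallback
}
// Usage in a page: window.indexedDB = pickIndexedDB(window);
```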

    The polyfill has been tested to work with various IndexedDB libraries like PouchDB, LINQ2IndexedDB, JQuery-IndexedDB and DB.JS. Since it leverages WebSql, applications can write code according to the IndexedDB API on web browsers like Opera and Safari, and additionally on mobile browsers like Safari on iPad/iPhone, or mobile development platforms like Cordova that have WebSql enabled browsers embedded in them.

    It is implemented in JavaScript and is a work in progress. Note that it may not fully conform to the specification; if you discover issues, you can file a bug or send a pull request to the source repository.

    Polyfill for setVersion

    An earlier version of the IndexedDB Specification used the “setVersion” call to change the version of the IndexedDB Database. This was later revised to an easier to use “onupgradeneeded” callback. Though Internet Explorer and Firefox support the newer “onupgradeneeded” to initiate database version changes for creating or deleting objectStores and Indexes, Google Chrome (22.0.1194.0 canary) still uses the older setVersion.

    All browsers will soon use the newer “onupgradeneeded” method; till then, this simple polyfill for setVersion lets you use the IndexedDB API with the onupgradeneeded method.

    The polyfill substitutes the indexedDB.open() call with an openReqShim() call. If only “setVersion()” is supported, openReqShim() invokes setVersion() and then converts it to the “onupgradeneeded” callback. If the implementation supports “onupgradeneeded”, openReqShim() is simply a transparent call to indexedDB.open(). Hence, indexedDB.open() calls should be replaced with openReqShim() calls to use this polyfill.
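    Conceptually, on a setVersion()-era browser the shim behaves something like the sketch below; the real polyfill also handles version comparison semantics, error cases and transaction completion:

```javascript
// Synthesize the new-style onupgradeneeded callback from the old
// setVersion() API. Conceptual sketch only.
function openReqShim(name, version, onupgradeneeded, onsuccess) {
  var req = indexedDB.open(name); // setVersion-era open() takes no version
  req.onsuccess = function () {
    var db = req.result;
    if (String(db.version) === String(version)) {
      onsuccess(db); // already up to date, nothing to upgrade
      return;
    }
    // Request the version change, then fake the new-style callback from
    // inside setVersion()'s success handler.
    var verReq = db.setVersion(String(version));
    verReq.onsuccess = function () {
      onupgradeneeded({ target: { result: db } });
      onsuccess(db);
    };
  };
}
```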

  7. Why no FileSystem API in Firefox?

    A question that I get asked a lot is why Firefox doesn’t support the FileSystem API. Usually, but not always, they are referring specifically to the FileSystem and FileWriter specifications which Google is implementing in Chrome, and which they have proposed for standardization in W3C.

    The answer is somewhat complex, and depends greatly on what exact capabilities of the above two specifications the person is actually wanting to use. The specifications are quite big and feature full, so it’s no surprise that people are wanting to do very different things with it. This blog post is an attempt at giving my answer to this question and explain why we haven’t implemented the above two specifications. But note that this post represents my personal opinion, intended to spur more conversation on this topic.

    As stated above, people asking for “FileSystem API support” in Firefox are actually often interested in solving many different problems. In my opinion most, but so far not all, of these problems have better solutions than the FileSystem API. So let me walk through them below.

    Storing resources locally

    Probably the most common thing that people want to do is to simply store a set of resources so that they are available without having to use the network. This is useful if you need quick access to the resources, or if you want to be able to access them even if the user is offline. Games are a very common type of application where this is needed. For example an enemy space ship might have a few associated images, as well as a couple of associated sounds, used when the enemy is moving around the screen and shooting. Today, people generally solve this by storing the images and sound files in a file system, and then store the file names of those files along with things like speed and firepower of the enemy.

    However it seems a bit non-optimal to me to have to store some data separated from the rest. Especially when there is a solution which can store both structured data as well as file data. IndexedDB treats file data just like any other type of data. You can write a File or a Blob into IndexedDB just like you can store strings, numbers and JavaScript objects. This is specified by the IndexedDB spec and so far implemented in both the Firefox and IE implementations of IndexedDB. Using this, you can store all information that you need in one place, and a single query to IndexedDB can return all the data you need. So for example, if you were building a web based email client, you could store an object like:

    {
      subject: "Hi there",
      body: "Hi Sven,\\nHow are you doing...",
      attachments: [blob1, blob2, blob3]
    }

    Another advantage here is that there’s no need to make up file names for resources. Just store the File or Blob object. No name needed.
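    A minimal sketch of what this looks like in practice; “mails” is a hypothetical object store name and db is an already-open IDBDatabase:

```javascript
// Write an object containing Blobs to IndexedDB; the Blobs need no
// special casing and no file names.
function storeMail(db, mail) {
  var tx = db.transaction(['mails'], 'readwrite');
  tx.objectStore('mails').put(mail); // mail.attachments can hold Blobs
  return tx;
}
```

    A single get on the same store later returns the whole object, attachments included.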

    In Firefox’s IndexedDB implementation (and I believe IE’s too) the files are transparently stored outside of the actual database. This means that performance of storing a file in IndexedDB is just as good as storing the file in a filesystem. It does not bloat the database itself slowing down other operations, and reading from the file means that the implementation just reads from an OS file, so it’s just as fast as a filesystem.

    Firefox’s IndexedDB implementation is even smart enough that if you store the same Blob multiple times in an IndexedDB database, it just creates one copy of the file. Writing further references to the same Blob just adds to an internal reference counter. This is completely transparent to the web page; the only thing it will notice is faster writes and less resource use. However I’m not sure if IE does the same, so check there first before relying on it.

    Access pictures and music folders

    The second most common thing that people ask for related to file system APIs is to be able to access things like the user’s picture or music libraries. This is something that the FileSystem API submitted to W3C doesn’t actually provide, though many people seem to think it does. To satisfy that use-case we have the DeviceStorage API. This API allows full file system capabilities for “user files”. I.e. files that aren’t specific to a website, but rather resources that are managed and owned by the user and that the user might want to access through several apps. Such as photos and music. The DeviceStorage API is basically a simple file system API mostly optimized for these types of files.

    We’re still in the process of specifying and implementing this API. It’s available to test with in recent nightly builds, but so far isn’t enabled by default. The main problem with exposing this functionality to the web is security. You wouldn’t want just any website to read or modify your images. We could put up a prompt like we do with the Geolocation API, but given that this API can potentially delete all your pictures from the last 10 years, we probably want something more. This is something we are actively working on. But it’s definitely the case that security is the hard part here, not implementing the low-level file operations.

    Low-level file manipulation

    A less common request is the ability to do low-level create, read, update and delete (CRUD) file operations. For example being able to write 10 bytes in the middle of a 10MB file. This is not something IndexedDB supports right now, it only allows adding and removing whole files. This is supported by the FileWriter specification draft. However I think this part of this API has some pretty fundamental problems. Specifically there are no locking capabilities, so there is no way to do multiple file operations and be sure that another tab didn’t modify or read the file in between those operations. There is also no way to do fsync which means that you can’t implement ACID type applications on top of FileWriter, such as a database.

    We have instead created an API with the same goal, but which has capabilities for locking a file and doing multiple operations. This is done in a way to ensure that there is no risk that pages can forget to unlock a file, or that deadlocks can occur. The API also allows fsync operations which should enable doing things like databases on top of FileHandle. However most importantly, the API is done in such a way that you shouldn’t need to nest asynchronous callbacks as much as with FileWriter. In other words it should be easier to use for authors. You can read more about FileHandle at

    https://wiki.mozilla.org/WebAPI/FileHandleAPI
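    Going by the wiki page above, the flow looks roughly like this sketch; the API was experimental and Firefox-prefixed at the time, so treat the names as illustrative rather than authoritative:

```javascript
// Write a few bytes into the middle of an existing file, under a lock.
function patchFile(db, offset, bytes) {
  var req = db.mozCreateFileHandle('data.bin', 'binary');
  req.onsuccess = function () {
    var lockedFile = req.result.open('readwrite'); // takes the lock
    lockedFile.location = offset; // seek, e.g. into the middle of a 10MB file
    lockedFile.write(bytes);      // update just those bytes in place
    lockedFile.flush();           // fsync-like durability point
  };
}
```

    The lock is released automatically when the locked file becomes inactive, so pages cannot forget to unlock it.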

    The filesystem URL scheme

    There is one more capability that exists in the FileSystem API not covered above. The specification introduces a new filesystem: URL scheme. When loading URLs from filesystem:, it returns the contents of files stored using the FileSystem API. This is a very cool feature for a couple of reasons. First of all these URLs are predictable. Once you’ve stored a file in the file system, you always know which URL can be used to load from it. And the URL will continue to work as long as the file is stored in the file system, even if the web page is reloaded. Second, relative URLs work with the filesystem: scheme. So you can create links from one resource stored in the filesystem to another resource stored in the filesystem.

    Firefox does support the blob: URL scheme, which does allow loading data from a Blob anywhere where URLs can be used. However it doesn’t have the above mentioned capabilities. This is something that I’d really like to find a solution for. If we can’t find a better solution, implementing the Google specifications is definitely an option.
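    For comparison, loading a stored Blob through a blob: URL looks like this small sketch; the URL is random per call and dies with the page, which is exactly the predictability that filesystem: URLs add:

```javascript
// Display a Blob (e.g. one read back from IndexedDB) in an <img> element.
function showStoredImage(imgElement, blob) {
  var url = URL.createObjectURL(blob); // mints a one-off blob: URL
  imgElement.src = url;
  imgElement.onload = function () {
    URL.revokeObjectURL(url); // free the mapping once the image loaded
  };
}
```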

    Conclusions

    As always when talking about features to be added to the web platform it’s important to talk about use cases and capabilities, and not jump directly to a particular solution. Most of the use cases that the FileSystem API aims to solve can be solved in other ways. In my opinion many times in better ways.

    This is why we haven’t prioritized implementing the FileSystem API, but instead focused on things like making our IndexedDB implementation awesome, and coming up with a good API for low-level file manipulation.

    Focusing on IndexedDB has also meant that we very soon have a good API for basic file storage available in 3 browsers: IE10, Firefox and Chrome.

    On a related note, we just fixed the last known spec compliance issues in our IndexedDB implementation, so Firefox 16 will ship with IndexedDB unprefixed!

    As always, we’re very interested in getting feedback from other people, especially from web developers. Do you think that FileSystem API is something we should prioritize? If so, why?

  8. There is no simple solution for local storage

    TL;DR: we have to stop advocating localStorage as a great opportunity for storing data as it performs badly. Sadly enough the alternatives are not nearly as supported or simple to implement.

    When it comes to web development you will always encounter things that sound too good to be true. Sometimes they are good, and all that stops us from using them is our habit of being suspicious about *everything* as developers. In a lot of cases, however, they really are not as good as they seem, but we only find out after using them for a while that we are actually “doing it wrong”.

    One such case is local storage. There is a storage specification (falsely attributed to HTML5 in a lot of examples) with an incredibly simple API that was heralded as the cookie killer when it came out. All you have to do to store content on the user’s machine is to access window.localStorage (or sessionStorage if you don’t need the data to be stored longer than the current browser session):

    localStorage.setItem( 'outofsight', 'my data' );
    console.log( localStorage.getItem( 'outofsight' ) ); // -> 'my data'

    This local storage solution has a few very tempting features for web developers:

    • It is dead simple
    • It uses strings for storage instead of complex databases (and you can store more complex data using JSON encoding)
    • It is well supported by browsers
    • It is endorsed by a lot of companies (and was heralded as amazing when iPhones came out)

    A few known issues with it are that there is no clean way to detect when you reach the limit of local storage and there is no cross-browser way to ask for more space. There are also more obscure issues around sessions and HTTPS, but that is just the tip of the iceberg.
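    The closest thing to detecting the limit is catching the exception a failed write throws; even the error itself differs across browsers. A small sketch (the helper name is ours):

```javascript
// Attempt a write and report whether it succeeded; a throw usually means
// the quota was reached (the error name varies by browser).
function trySetItem(storage, key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (e) {
    // e.g. NS_ERROR_DOM_QUOTA_REACHED in Firefox, QUOTA_EXCEEDED_ERR in WebKit
    return false; // out of space (or storage disabled)
  }
}
```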

    The main issue: terrible performance

    LocalStorage also has a lot of drawbacks that aren’t well documented and certainly not covered as much in “HTML5 tutorials”. Performance-oriented developers, especially, are very much against its use.

    When we covered using localStorage to store images and files a few weeks ago, it kicked off a massive thread of comments and an even longer internal mailing list thread about the evils of localStorage. The main issues are:

    • localStorage is synchronous in nature, meaning when it loads it can block the main document from rendering
    • localStorage does file I/O meaning it writes to your hard drive, which can take long depending on what your system does (indexing, virus scanning…)
    • On a developer machine these issues can look deceptively minor as the operating system caches these requests – for an end user on the web they could mean a few seconds of waiting during which the web site stalls
    • In order to appear snappy, web browsers load the data into memory on the first request – which could mean a lot of memory use if lots of tabs do it
    • localStorage is persistent. If you don’t use a service or never visit a web site again, the data is still loaded when you start the browser

    This is covered in detail in a follow-up blog post by Taras Glek of the Mozilla performance team and also by Andrea Giammarchi of Nokia.

    In essence this means that a lot of articles saying you can use localStorage for better performance are just wrong.

    Alternatives

    Of course, browsers have always offered ways to store local data, some of which you’ve probably never heard of, as shown by evercookie (I think my fave when it comes to the “evil genius with no real-world use” factor is the force-cached PNG image to be read out in canvas). In the internal discussions there was a massive thrust towards advocating IndexedDB for your solutions instead of localStorage. We then published an article on how to store images and files in IndexedDB and found a few issues – most actually related to ease-of-use and user interaction:

    • IndexedDB is a full-fledged DB that requires all the steps a SQL DB needs to read and write data – there is no simple key/value layer like localStorage available
    • IndexedDB asks the user for permission to store data which can spook them
    • The browser support is not at all the same as localStorage, right now IndexedDB is supported in IE10, Firefox and Chrome and there are differences in their implementations
    • Safari, Opera, iOS, Opera Mobile, Android Browser favour WebSQL instead (which is yet another standard that has been officially deprecated by the W3C)

    As always when there are differences in implementation someone will come up with an abstraction layer to work around that. Parashuram Narasimhan does a great job with that – even providing a jQuery plugin. It feels wrong though that we as implementers have to use these. It is the HTML5 video debate of WebM vs. H264 all over again.

    Now what?

    There is no doubt that the real database solutions and their asynchronous nature are the better option in terms of performance. They are also more mature and don’t have the “shortcut hack” feeling of localStorage. On the other hand they are harder to use, we already have a lot of solutions out there using localStorage, and asking the user to give us access to storing local files is unacceptable for some implementations in terms of UX.

    The answer is that there is no simple solution for storing data on the end users’ machines and we should stop advocating localStorage as a performance boost. What we have to find is a solution that makes everybody happy and doesn’t break the current implementations. This might prove hard to work around. Here are some ideas:

    • Build a polyfill library that overrides the localStorage API and stores the content in IndexedDB/WebSQL instead? This is dirty and doesn’t work around the issue of the user being asked for permission
    • Implement localStorage in an asynchronous fashion in browsers – actively disregarding the spec? (this could set a dangerous precedent though)
    • Change the localStorage spec to store asynchronously instead of synchronously? We could also extend it to have a proper getStorageSpace interface and allow for native JSON support
    • Define a new standard that allows browser vendors to map the new API to the existing supported API that matches the best for the use case?
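    The first idea could be sketched as a localStorage-shaped wrapper whose calls are asynchronous and delegate to a pluggable backend (IndexedDB or WebSQL in practice; here, any object with get/put). Purely illustrative:

```javascript
// A localStorage-like key/value API whose reads and writes are handed to
// an asynchronous backend instead of blocking the main document.
function AsyncStorageShim(backend) {
  this.backend = backend;
}
AsyncStorageShim.prototype.setItem = function (key, value, callback) {
  this.backend.put(key, String(value), callback); // strings, like the spec
};
AsyncStorageShim.prototype.getItem = function (key, callback) {
  this.backend.get(key, callback); // value arrives in the callback
};
```

    The dirty part is exactly what the bullet above says: existing code expects synchronous returns, and the permission prompt problem remains.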

    We need to fix this as it doesn’t make sense to store things locally and sacrifice performance at the same time. This is a great example of how new web standards give us much more power but also make us face issues we didn’t have to deal with before. With more access to the OS, we also have to tread more carefully.

  9. Storing images and files in IndexedDB

    The other day we wrote about how to Save images and files in localStorage, and it was about being pragmatic with what we have available today. There are, however, a number of performance implications with localStorage – something that we will cover on this blog later – and the desired future approach is utilizing IndexedDB. Here I’ll walk you through how to store images and files in IndexedDB and then present them through an ObjectURL.

    Continued…

  10. Webinar: IndexedDB with Jonas Sicking

    Update 2011-12-20: The video recording of this webinar is now available:

    IndexedDB is the emerging standard for structured client-side data storage. The IndexedDB standard is supported by current versions of Firefox and Chrome, and support for it is expected in Internet Explorer 10.

    With this growing maturity and support, it’s time to start experimenting with what IndexedDB can do for Web applications. Working with IndexedDB requires a shift in mindset for many Web developers, as it is more similar to “NoSQL” systems like CouchDB or MongoDB than to traditional relational databases.

    IndexedDB will be the theme for December’s MDN DevDerby. And to kick off the theme for the month, on December 1st, we’re offering an MDN Webinar on IndexedDB with Jonas Sicking, one of the editors of the IndexedDB draft standard and one of the implementors of IndexedDB in Gecko.

    When
    December 1st, at 9:00 a.m. US Pacific Time (17:00 UTC). Add this event to your Google calendar:
    Where
    Air Mozilla, with text chat on #airmozilla on irc.mozilla.org. (There’s an IRC widget on the Air Mozilla page if you need it.)

    We’ll record this session for those who can’t attend (or can’t use Flash, which is currently used by Air Mozilla for live streaming video).

    We’d like to get a rough estimate of how many people will be attending. If you happen to use Plancast, and you plan to attend the seminar, please join the event on Plancast.