Mozilla

JavaScript Articles


  1. JAL – Just Another Loader for JavaScript

    A long time ago I saw the film "Interview with the Vampire", starring Tom Cruise, Brad Pitt and Kirsten Dunst. The scene that struck me the most is when Pitt's character realizes that Lestat is using him in order to adapt to the current age. For a developer this is not a bad rule. In fact it is actually quite good. If you want to keep up and stay on top, follow the bleeding edge, experiment and copy what others are doing. Reverse engineering and reinventing the wheel is bliss. Apply this to open source and we – developers, hackers, designers – have a wide array of tools at our hands. Just think of "View Source" in web browsers. Without it we wouldn't be where we are today. Copying is learning. Inventing is impossible without standing on the shoulders of our predecessors.

    The company where I work, Tail-f Systems, has just recently open sourced a small JavaScript library called JAL, which is an acronym for Just Another Loader. This is an infant project and it lacks certain features, but it does the job and does it well. It is, as the name implies, a tool for parallel, conditional dependency loading of resource files. We use it in our Web UI for loading scripts and CSS files. It is there for one reason only: to speed things up!

    We tested YepNope, which is a great loader, but felt it could be faster. It also had features we didn't really need. So we wrote our own. We reinvented the wheel. How hard could it be? Well, it was pretty hard.

    What we needed was a resource loader that could load not only JavaScript but also stylesheets. It also needed to be able to load resources in parallel and in groups to handle dependencies, such as loading jQuery before loading a jQuery plugin. The final requirement was conditional loading, e.g. loading JSON.js if the browser lacks native JSON support.

    Parallel dependency loading

    A typical setup looks something like this:

    $loader
        .load('js/shape.js')
        .load([
              'js/circle.js'
            , 'js/rectangle.js'
        ])
        .load('js/square.js')
        .ready(function() {
            // Start app
        })

    There are three dependency groups set up. The first one loads a shape. The second loads a circle and a rectangle, which are dependent on shape. The last group contains a square, which is derived from a rectangle. In this trivial example, the speedup happens in the second group, since the circle and the rectangle are loaded in parallel. Now, imagine you have a large number of scripts with different dependencies in your application. The traditional way is to concatenate all the scripts into one large bundle and then minify that bundle. What you are actually doing then is loading your scripts the old-fashioned way, one after another. Modern browsers are capable of loading scripts and resources in parallel: they open up multiple connections to a web server and load multiple resources at the same time. So if you have a script that takes, say, 5 seconds to load, and you break it into 5 pieces and load the pieces in parallel, the loading time theoretically becomes 1 second. That is five times faster than before!
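    To make the grouping concrete, here is a minimal sketch – my own illustration, not JAL's actual source – of how a chainable API like the one above can record dependency groups before anything is fetched:

```javascript
// A minimal sketch of a chainable loader API recording dependency groups.
// NOTE: this is an illustration only, not JAL's actual source.
function Loader() {
  this.queue = []; // each entry is a group of URLs that may load in parallel
}

Loader.prototype.load = function (resources) {
  // Normalize a single URL into a one-element group.
  this.queue.push(Array.isArray(resources) ? resources : [resources]);
  return this; // return this to allow chaining
};

var $loader = new Loader();
$loader
  .load('js/shape.js')
  .load(['js/circle.js', 'js/rectangle.js'])
  .load('js/square.js');
// $loader.queue now holds three groups, to be loaded one after another.
```

    A real loader would then walk this queue group by group, fetching the files inside each group in parallel and only moving on once every file in the group has arrived.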

    Conditional loading

    Now to conditional loading. Conditional loading is where you load a resource if a certain condition is met. Does the browser have native JSON support? No? Well, we'll fix that! Here's an example of loading a JSON polyfill:

    $loader
        .when(typeof window.JSON === 'undefined', function(loader) {
            loader.load('js/json.js')
        })

    Done is done

    Once a resource group has loaded, JAL allows you to execute code. Here is an example where jQuery's "ready" event is held back until all scripts have loaded.

    $loader
        .load('js/jquery.min.js')
        .done(function(){
            // Stop jQuery from triggering the "ready" event
            $.holdReady(true)
        })
        .load([
              'js/script-one.min.js'
            , 'js/script-two.min.js'
        ])
        .ready(function() {
            // Allow jQuery to trigger the "ready" event
            $.holdReady(false)
            // Start app
        })

    How it was done

    Writing JAL was both challenging and fun. The most difficult part was to make sure that the load order was respected between the groups. This was tricky since things were happening fast and there was a big performance difference between the browsers.

    JAL was implemented using a resource queue and a polling function. The queue is locked until a resource group has been loaded. Once loaded the "done" event is fired. This allows you to inject one or more resource groups to the front of the queue, if you ever need that. After the "done" event has been triggered the queue is unlocked and the poller is free to load the next resource group.
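    The lock-and-poll mechanism can be sketched in a few lines (again my own illustration, not JAL's real code): the poller refuses to dequeue while the lock is held, and firing "done" releases it:

```javascript
// Sketch of the queue/poller idea: the poller refuses to dequeue while the
// lock is held, and the "done" event releases it.
// NOTE: an illustration only, not JAL's actual implementation.
function Poller(queue) {
  this.queue = queue;   // groups of resources waiting to load
  this.locked = false;  // locked while a group is in flight
  this.loaded = [];     // groups handed off for loading
}

Poller.prototype.poll = function () {
  if (this.locked || this.queue.length === 0) return;
  this.locked = true; // lock until the group's "done" fires
  this.loaded.push(this.queue.shift());
};

Poller.prototype.done = function () {
  this.locked = false; // "done" fired: unlock...
  this.poll();         // ...and load the next group
};

var p = new Poller([['shape.js'], ['circle.js', 'rectangle.js']]);
p.poll(); // first group starts loading, queue locks
p.poll(); // ignored: still locked
p.done(); // unlock; second group starts loading
```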

    The poller itself is started once the loader sequence has been executed. This is done by deferring the poller using setTimeout with a timeout of 0 milliseconds, which queues it to run as soon as the current script has finished executing. It's a classic example of how to work with the single-threaded model of a web browser's JavaScript engine.

    Closing words

    Do you have a large concatenated JavaScript file? Is it minified and gzipped? Is it loading fast? Do you want faster? Then minify and gzip your resource files individually and use a conditional parallel dependency loader instead.

  2. The making of a hack – Media Query Mario

    Like any developer, I love any shiny new tech demo that finds its way into my browser; some of the things people are putting together absolutely blow my mind with the level of creativity and technical skill on show.

    After attending WebDevConf 2012 in mid October, I felt the usual heightened sense of inspiration that a good conference gives us all. On my way back to London, I happened to see a tweet about the current Mozilla Dev Derby in my Twitter stream and, still inspired, thought about creating something to enter myself. That something turned into a tech demo called Media Query Mario; a mash up of Media Queries, CSS3 Animations and HTML5 audio.

    Where to start?

    The idea came as a result of thinking about which new technologies I wanted to experiment with the most at the time. I had been meaning to delve into CSS animation for some time, and combining this with media queries – the focus of that month's Dev Derby – seemed pretty logical. Letting the CSS fire off the animations instead of needing JavaScript to do so seemed like a very natural fit.

    Choosing Mario 3 for the animation was simply the first thing that popped into my head. I wanted the animation to be a side-scrolling 2D affair and, being a retro game nerd, Mario instantly came to mind. Anyone with more than a fleeting interest in 2D Mario games would then see that Mario 3 was the only real choice for my animation (although I'm happy to argue against any opposing opinions on the 'best' 2D Mario game anytime!)

    One question I’ve been asked since putting the demo out is: why choose CSS animations when other technologies may have been more suitable? The main reason is that I simply wanted to see what they could do. There are plenty of demos showcasing just how awesome canvas and SVG are; my demo is by no means meant to advocate the use of CSS animations over those technologies. I just wanted to give a decent benchmark of where CSS animation is at right now, and at least add them to the conversation when people are choosing which technology is right for their project.

    There was only one rule I set myself when starting to put together the demo – I wanted to stick rigidly to animating using CSS wherever possible. If it was possible to do something in CSS, I wanted to use it, irrespective of performance or how fiddly it was to implement. I’ll come back to how I think it performed in retrospect later.

    Push any button to start

    One of the earliest issues I came up against was knowing what width the user would be viewing the animation at. This was not only important in terms of what size to design the animation to, but especially in terms of how much of the level was on show at any one time. The more of the level on show, the more I would need to animate at any one time.

    After a little thought around how Mario 3 itself was presented, it made sense to make use of the original menu screen to help control this. As well as acting as a holding screen while the animation assets loaded, it would ensure the user resized their browser window down to a dimension I could specify, before then allowing the animation to be started. This was controlled by adding a conditional media query hiding the animation start button:

    @media screen and (max-width: 320px), (min-width: 440px) {
        .startBtn {
            display:none;
        }
    }
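    The same breakpoint logic, expressed as a plain JS predicate (an illustration only, not code from the demo), shows the window in which the start button stays visible:

```javascript
// The breakpoint logic of the media query above as a plain JS predicate
// (an illustration, not code from the demo): the start button is hidden
// at 320px and below, and at 440px and above.
function startBtnHidden(viewportWidth) {
  return viewportWidth <= 320 || viewportWidth >= 440;
}

startBtnHidden(320); // true  - hidden, window too narrow
startBtnHidden(380); // false - visible, animation can start
startBtnHidden(440); // true  - hidden, window too wide
```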

    Planning the actual animation itself, I wanted to mirror the way the original game would have been played as much as possible. To help with this I found a video clip that traversed the level at a pace that I could replicate. This helped me plan the image and sound assets I would need and the speed of the animation, and got me thinking about how to animate different enemies and power-ups throughout the level.

    With the structure of the demo planned out, I now just needed the assets. As you might expect, you don’t have to search for too long online to find original game images, sprites and sound files. For my demo, I used NESmaps and Mario Mayhem for the level map and character/object sprites and The Mushroom Kingdom for the sound files. I had to do a small amount of image editing myself, but these gave me a really great start.

    You can view the final spritesheet I used for the animation below.

    Letsa Go!

    So I had an idea planned out and had found my assets; I was ready to start putting it all together in code.

    First, I set about learning the specifics of CSS3 animations. A couple of resources really helped me out; MDN is always a great place to start and is no exception for CSS animations. I would also recommend any of these great articles by Peter, Chris or David – all provide an excellent introduction to getting started with CSS3 animations.

    I won’t look to replicate the depth those articles cover, but will highlight the key properties I made use of in the demo. For brevity, I’ll be covering the CSS3 syntax unprefixed, but if trying any of this out for yourself, prefixes should be included in your code to ensure the animations work across different browsers.

    A quick development tip worth mentioning when using newer CSS3 features such as CSS animations, is that using a preprocessor, such as LESS or SASS, is a massive lifesaver and something I’d highly recommend. Creating mixins that abstract the vendor prefixes out of the code you are directly working with helps keep visual clutter down when writing the code, as well as saving a whole load of time when changing CSS property values down the line.

    Before we get into specific techniques used in the demo, we need to understand that an animation consists of two main parts: the animation’s properties and its related keyframes.

    Animation Properties

    An animation can be built up with a number of related properties. The key properties I made use of were:

    /* set the name of the animation, which directly relates to a set of keyframes */
    animation-name: mario-jump;

    /* the amount of time the animation will run for, in milliseconds or seconds */
    animation-duration: 500ms;

    /* how the animation progresses over the specified duration (e.g. ease or linear) */
    animation-timing-function: ease-in-out;

    /* how long the animation should wait before starting, in milliseconds or seconds */
    animation-delay: 0s;

    /* how many times the animation should execute */
    animation-iteration-count: 1;

    /* if and when the animation should apply the rendered styles to the element being animated */
    animation-fill-mode: forwards;

    The use of the animation-fill-mode property was especially important in the demo, as it was used to tell the animation to apply the final rendered styles to the element once the animation had finished executing. Without this, the element would revert to its pre-animated state.

    So for example, when animating an element’s left position 30 pixels from an initial position of 0px, if no animation-fill-mode is set, the element will return to 0px after animating. If the fill-mode is set to forwards the element will stay positioned at its final position of left: 30px.

    Keyframes

    The Keyframes at-rule lets you specify the steps in a CSS animation. At its most basic level this could be defined as:

    @keyframes mario-move {
        from { left:0px;   }
        to   { left:200px; }
    }

    Where from and to are keywords for 0% and 100% of the animation duration respectively. To show a more complex example we can also code something like this, which, relating back to the demo, animates Mario jumping between several platforms using multiple keyframes:

    @keyframes mario-jump-sequence {
        0% { bottom:30px; left: 445px; }
        20% { bottom:171px; left: 520px; }
        30% { bottom:138px; left: 544px; }
        32% { bottom:138px; left: 544px; }
        47% { bottom:228px; left: 550px; }
        62% { bottom:138px; left: 550px; }
        64% { bottom:138px; left: 550px; }
        76% { bottom:233px; left: 580px; }
        80% { bottom:253px; left: 590px; }
        84% { bottom:273px; left: 585px; }
        90% { bottom:293px; left: 570px; }
        100% { bottom:293px; left: 570px; }
    }

    So if the above animation was 1 second long, Mario would move from position bottom: 30px; left: 445px; at 0 seconds (0% through the animation) to bottom: 171px; left: 520px; over the first 200ms (or 20%) of the animation, and so on throughout the keyframes defined.
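    With a linear timing function, each property moves in a straight line between neighbouring keyframes. A small sketch of that math (my own illustration, using the first segment of the jump sequence above):

```javascript
// With a linear timing function, a property is interpolated in a straight
// line between neighbouring keyframes. For time t (0..1 of the duration)
// between keyframes (t0, v0) and (t1, v1):
function interpolate(t, t0, v0, t1, v1) {
  var progress = (t - t0) / (t1 - t0);
  return v0 + (v1 - v0) * progress;
}

// Halfway through the first segment of the jump sequence (t = 0.1 of a 1s
// animation), Mario's `left` is halfway between 445px and 520px:
interpolate(0.1, 0, 445, 0.2, 520); // 482.5
```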

    Animating the action

    Considering the above, the type of animations I created in the demo can be broken down into 3 broad categories:

    • Movement – such as Mario jumping or a coin appearing out of a question box.
    • Spriting – controlling the background-image position of characters and objects in the animation.
    • Looping – any animation that is repeated for x number of milliseconds or seconds.

    Movement

    Movement covers roughly 75% of all of the animations in the demo. For example, this includes character movement (i.e. Mario running and jumping), power-ups appearing and question boxes being hit. What makes each movement animation differ is the animation-timing-function, the animation-duration and the animation-delay properties.

    The animation-timing-function property helps control the speed of the animation over its duration. Wherever possible I used easing, such as ease-in or ease-in-out to save having to be too precise when defining animation keyframes. Where this didn’t create the effect I needed, I resorted to setting the animation-timing-function to linear and using the keyframes to specify the exact movement I required.

    An example of a movement animation can be seen in this jump sequence.

    Spriting

    To control the image background-position of the characters and objects in the animation, I used the step-end timing-function:

    .mario {
        animation-timing-function: step-end;
        ...
    }

    Initially, I thought I might need to use JavaScript to control the image spriting by adding and removing classes on my elements. However, after experimenting with how the step-end timing keyword was implemented, I found it perfectly stepped through the keyframes I had defined, one keyframe at a time.
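    Conceptually, step-end holds each keyframe's value until the next keyframe offset is reached. A sketch of that behaviour (the offsets and values here are made up, not the demo's actual frames):

```javascript
// step-end holds each keyframe's value until the next keyframe offset is
// reached - no tweening in between. Sketch with made-up frames:
function stepEndValue(t, offsets, values) {
  var active = values[0];
  for (var i = 0; i < offsets.length; i++) {
    if (t >= offsets[i]) active = values[i]; // hold the last passed keyframe
  }
  return active;
}

// A three-frame walk cycle: background positions swap with no tweening.
stepEndValue(0.5, [0, 0.34, 0.67], ['0px', '-40px', '-80px']); // '-40px'
```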

    To show this in action, take a look at the following examples, which show a simple Mario walking animation and Mario transforming after grabbing a power-up.

    Using step-end in this way wasn’t completely pain free however. To my frustration, when these sprite animations were stacked up over multiple media queries, I found that there was a glitch in WebKit that caused the animation to render differently to the keyframes I had defined. Admittedly, the use of CSS animations in this way is an edge case for browser rendering, but I have filed it as a bug in Chromium, and am hopeful this will be looked at in the future and ironed out.

    Looping

    Whenever an animation needed to be repeated over a period of time, looping was defined by adjusting the animation-iteration-count:

    /* the animation repeats 5 times */
    animation-iteration-count: 5;

    /* the animation repeats infinitely */
    animation-iteration-count: infinite;

    An example of this from the demo would be the rotation of the fireball.

    Through these 3 types of animation, the whole demo was constructed. The final layer was to add in the audio.

    Adding Audio

    Although I had previously downloaded all of the sound files I needed in .wav format, I had to convert them into formats usable with HTML5 audio: .ogg and .mp3. I used Switch Audio Converter (on Mac) to do this, but any good audio conversion software should do the job.

    Once I had the converted files, I needed to detect which file type to serve to the browser. This required a couple of lines of JavaScript to detect support:

    var audio = new Audio(); //define generic audio object for testing
    var canPlayOgg = !!audio.canPlayType && audio.canPlayType('audio/ogg; codecs="vorbis"') !== "";
    var canPlayMP3 = !!audio.canPlayType && audio.canPlayType('audio/mpeg') !== ""; //'audio/mpeg' is the registered MIME type for .mp3

    I then created a function to set some default audio parameters for each sound, as well as setting the source file based on the format previously detected to be supported by the browser:

    //generic function to create all new audio elements, with preload
    function createAudio (audioFile, loopSet) {
        var tempAudio = new Audio();
        var audioExt;
     
        //based on the previous detection set our supported format extension
        if (canPlayMP3) {
            audioExt = '.mp3';
        } else if (canPlayOgg) {
            audioExt = '.ogg';
        }
     
        tempAudio.setAttribute('src', audioFile + audioExt); //set the source file
        tempAudio.preload = 'auto'; //preload the sound file so it is ready to play
     
        //set whether the sound file should loop or not
        //looping was used for the animation's background music
        tempAudio.loop = (loopSet === true);
     
        return tempAudio;
    }
    var audioMarioJump = createAudio("soundboard/smb3_jump"); //an example call to the above function

    It was then just a case of playing the sound at the correct time in sync with the animation. To do this, I needed to use JavaScript to listen for the animation events animationstart and animationend – or in WebKit, webkitAnimationStart and webkitAnimationEnd. This allowed me to listen to when my defined animations were starting or ending and trigger the relevant sound to play.

    When an event listener is fired, the event returns the animationName property, which we can use as an identifier to play the relevant sound:

    mario.addEventListener('animationstart', marioEventListener);
     
    function marioEventListener(e) {
        if (e.animationName === 'mario-jump') {
            audioMarioJump.play();
        }
    }

    If you have multiple animationstart events for one element, such as Mario in my demo, you can use a switch statement to handle the animationName that has triggered the event listener.
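    A sketch of that switch-based dispatch (the animation and sound names here are hypothetical, not necessarily the demo's actual ones):

```javascript
// Sketch of a switch mapping animation names to sound files.
// NOTE: the names below are hypothetical, not the demo's actual assets.
function soundFor(animationName) {
  switch (animationName) {
    case 'mario-jump':
      return 'soundboard/smb3_jump';
    case 'mario-powerup':
      return 'soundboard/smb3_powerup';
    default:
      return null; // animation has no sound effect
  }
}

// In the listener, play whatever the map returns, e.g. (hypothetical
// `sounds` lookup of createAudio results):
// function marioEventListener(e) {
//   var src = soundFor(e.animationName);
//   if (src) sounds[src].play();
// }
```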

    Since writing the demo, I have found that you can also target individual keyframes in an animation by using the Keyframe Event JS shim by Joe Lambert, which gives you even more control over when you can hook into your animation.

    Game Complete

    The response to the demo has been more positive than I’d ever hoped for since it was released. Like any hack, there are things I’d like to go back and improve with more time, but I think it’s more valuable to put what I learned into my next project. I think that the demo has shown that CSS animations can be used to create some amazing effects from fairly simple code, but also brought one bigger issue to my mind while putting it together.

    While complex CSS animations actually perform very well, the creation of such an animation is fairly longwinded. Sure, there are tools out there designed to help with this, such as Adobe Edge Animate and Sencha Animator, but both of these output CSS animations wrapped up in JavaScript. This seems a massive shame to me, as the power of CSS animations is surely in the fact that they shouldn’t have to rely on another technology to execute. I’m not sure if there is a potential way around this, other than coding it by hand yourself, but if anyone knows of any I’d be interested to hear of them in the comments.

    Going back to my earlier comment about comparing CSS animations to using canvas and SVG, I think all have a place at the table when discussing what technology to use for animation. However, the sooner the barrier of time spent to craft complex animations like this one can be lowered, the more relevance, and potential use cases, CSS animations will have in the projects that we do.

  3. JavaScript Style Badge – Your JS Signature

    I recently launched a new hobby website of mine: http://jsstyle.github.com/. The purpose of this page is simple: after filling out a JS-related questionnaire, users are awarded with a small fingerprint of their answers (somewhat similar to the Geek Code). It is possible to use the generated badge as an e-mail signature or to impress your friends. There is a second purpose for this site as well: measuring and gathering of selected answers, which allows for some neat comparison and usage statistics.

    This article explains some design decisions and implementation techniques used during the development of the JS Style Badge.

    Page navigation

    My goal was to design a website which does not reload, while keeping the amount of necessary JS code to an absolute minimum. Fortunately, there is a pretty neat way to do this in pure HTML+CSS. We use semantic HTML5, naturally, and give the page a proper <nav> section with local anchor links:

    <nav>
      <ul>
        <li><a href="#page1">To page 1</a></li>
        <li><a href="#page2">To page 2</a></li>
        <li><a href="#page3">To page 3</a></li>
      </ul>
    </nav>
    <section id="page1">...</section>
    <section id="page2">...</section>
    <section id="page3">...</section>

    Then, a tiny CSS one-liner (with the crucial :target pseudoclass) kicks in:

    section[id]:not(:target) { display: none; }

    And voilà – we have a working cross-page navigation with full browser history support.

    Questions and their Answers

    All the questions and their potential answers are defined in a separate file, def.js. This allows for easy maintenance of the questionnaire.
    It is necessary to assign some IDs to questions (these need to be immutable and unique) and answers (immutable and unique within one question). These IDs are used to:

    • Guarantee fixed question ordering in the generated data (even if the visual ordering of questions changes)
    • Preserve the chosen answer, even if its wording or ordering changes
    • Represent the color and/or character in the generated image/ASCII

    As an example, the question “Semicolons” has an ID of “;” – this makes it the fifth question in the resulting fingerprint (IDs are sorted lexicographically). Its answer “sometimes” has an ID of “=”, to be used in the ASCII signature. This answer is third (sorted by IDs), which corresponds to a blue color in the answer palette (to be used in the <canvas> image).

    Results: ASCII and <canvas>

    When the questionnaire is complete, we need to generate the resulting badge. Actually, three different things need to be generated: image version, ASCII version and the URL, which is used as a permalink.

    Image

    This is the most straightforward task: take an HTML5 <canvas>, fill it with a proper background color and render a “JS” in the bottom right corner. (Remark: the official JS logo is not drawn with a font; it is a purely geometric shape. I decided to go with Arial, as it is relatively similar.)
    Individual answers are represented by small colored squares. Their order is given by the sort order of question IDs; in the image, the ordering goes like this:

    0 2 5 9
    1 4 8
    3 7
    6
    

    …and so on. Converting the answer index to a pair of [x, y] coordinates is a simple mathematical exercise. We pick the square color from a fixed palette, based on the sort order of the picked answer. When the user skipped a question, we leave the corresponding square transparent.
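    One way to do that exercise (my own sketch; the site's actual code may differ): answer n lies on the anti-diagonal d whose first index is the triangular number d(d+1)/2, with x growing along the diagonal as y shrinks:

```javascript
// Converting an answer index to [x, y] on the diagonal grid above.
// NOTE: my own sketch of the exercise, not the site's actual code.
function indexToCoords(n) {
  var d = 0;
  while ((d + 1) * (d + 2) / 2 <= n) d++; // find the diagonal containing n
  var offset = n - d * (d + 1) / 2;       // position along that diagonal
  return [offset, d - offset];            // [x, y]
}

indexToCoords(0); // [0, 0] - the first square
indexToCoords(4); // [1, 1] - middle of the third diagonal
indexToCoords(9); // [3, 0] - end of the fourth diagonal
```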

    ASCII

    The textual version corresponds to the image version, but instead of colored squares, answer IDs are used to visualize the output. The whole signature is rendered into a <textarea> element; there is a tiny bit of JS which “selects all” when the area is clicked.
    I spent some time looking for an optimal styling of the <textarea>: a proper width/height ratio, an aesthetic font and a reasonable line height. The optimal solution for me was the Droid Sans Mono typeface, loaded using the CSS @font-face rule.

    URL

    We want the generated permalinks to be truly permanent: invariant to question/answer wording or ordering. To achieve this, a simple algorithm encodes the picked answers:

    1. Sort questions by their IDs
    2. For every question, take the user’s answer. If the question was not answered, output “-”
    3. If the question was answered, take that answer’s ID and get its Unicode code point
    4. Answer IDs use the Unicode range 32..127. Subtract 32 and left-pad with zero to generate a two-digit value from “00” to “99”
    5. Concatenate these values and/or hyphens (for empty questions)

    The resulting “hash” does not need to be URL-encoded, as it consists solely of digits and hyphens.
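    The steps above can be sketched as follows (my own illustration of the described algorithm, not the site's actual source):

```javascript
// Sketch of the permalink encoding described above. Answer IDs are single
// characters from the Unicode range 32..127.
// NOTE: an illustration only, not the site's actual source.
function encodeAnswers(questionIds, answers) {
  return questionIds
    .slice()
    .sort() // 1. sort question IDs lexicographically
    .map(function (qid) {
      var aid = answers[qid];
      if (aid === undefined) return '-';    // 2. unanswered -> hyphen
      var code = aid.charCodeAt(0) - 32;    // 3.-4. code point, shifted to 0..95
      return (code < 10 ? '0' : '') + code; // 4. left-pad to two digits
    })
    .join(''); // 5. concatenate
}

// Hypothetical IDs: question ';' answered '=', question '!' answered '+',
// question '@' skipped:
encodeAnswers([';', '!', '@'], { ';': '=', '!': '+' }); // "1129-"
```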

    Sharing is caring

    I decided to include links to three popular sharing services. They all expose a sharing API, but not all of them expect you to build their sharing UIs via JavaScript calls. Let’s have a look:

    • Google Plus button is the most straightforward: after including the JS API file, it is sufficient to call the gapi.plusone.render function. Two minor caveats:
      1. Make sure the button’s container is appended to the page when you render into it.
      2. The resulting button is hard to align perfectly; some !important CSS tweaks were necessary.
    • Twitter does not expect you to build stuff on the fly. It is necessary to create a hyperlink, fill it with data-* attributes and append the Twitter JS API to the page.
    • Finally, the LinkedIn share button is very peculiar: once their sharing API is loaded, it is necessary to create a <script> node with a type of IN/Share, enrich it with the proper attributes, append it to the page and call IN.parse().

    Conclusion

    I had a fun time writing this tiny service; so far, over 1400 signatures have been generated by users. As this number grows bigger, more and more interesting JS usage patterns emerge in the usage statistics. If you have not done so yet, go ahead and generate your own JS Style Badge!

  4. The Web Developer Toolbox: Backbone

    This is the fourth in a series of articles dedicated to useful libraries that all web developers should have in their toolbox. The intent is to show you what those libraries can do and help you to use them at their best. This fourth article is dedicated to the Backbone library.

    Introduction

    Backbone is a library originally written by Jeremy Ashkenas (also famous for having created CoffeeScript).

    Backbone is an implementation of the MVC design pattern in JavaScript. It allows you to build applications that are easier to maintain by strongly dividing the responsibilities of each application component. Actually, due to its high flexibility, Backbone is more like a super controller in the MVC design pattern than a true MVC implementation. It gives you the freedom to choose your own model or view system, as long as you make sure they are compatible with its API.

    Basic usage

    Backbone is made of 4 core objects that will be used to drive your application: Collection, Model, View, Router. To make things a little clearer here is a quick schema of their interaction:

    The Model objects

    These objects are the heart of your application. They contain the whole logic of your application and they dispatch events each time they are updated. That way, you can easily bind a View object to a model in order to react to any change. These objects are actually wrappers around your own application's business logic (functions, objects, libraries, whatever).

    The Collection objects

    As stated by its name, this type of object is a collection of Model objects, with its own logic to sort them, filter them, etc. This object is a convenient way to glue together the model and the view, because it is a sort of super Model object. Any change sent by a Model object in a collection is also sent by the collection, which makes it easy to bind a view to several Model objects.

    The View objects

    Backbone views are almost more convention than code — they don’t determine anything about your HTML or CSS for you, you’re free to use any JavaScript templating library such as Mustache, haml-js, etc. The idea is to organize your interface into logical views, backed by models, each of which can be updated independently when the model changes, without having to redraw the page. Instead of digging into a JSON object, looking up an element in the DOM, and updating the HTML by hand, you can bind your view’s render function to the model’s “change” event — and thanks to that everywhere that model data is displayed in the UI, it’s immediately updated.
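    The pattern can be illustrated in a few lines of plain JS (this shows only the idea, not Backbone's API – in Backbone you would bind with model.on('change', ...) or view.listenTo(model, 'change', ...)):

```javascript
// Plain-JS sketch of binding a view's render to a model's "change" event.
// NOTE: the idea only, not Backbone's actual API.
function Model(attrs) {
  this.attrs = attrs;
  this.listeners = [];
}

Model.prototype.on = function (fn) {
  this.listeners.push(fn); // register a "change" handler
};

Model.prototype.set = function (key, value) {
  this.attrs[key] = value;
  this.listeners.forEach(function (fn) { fn(); }); // fire "change"
};

var rendered = [];
var model = new Model({ title: 'draft' });

// The "view": its render function is bound to the model's change event.
model.on(function render() {
  rendered.push(model.attrs.title);
});

model.set('title', 'final'); // the view re-renders automatically
```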

    The Router objects

    These objects provide methods for routing URLs and connecting them to actions and events on Model objects. They rely on the History API to handle URLs nicely. For browsers which don’t yet support the History API, they handle graceful fallback and transparent translation to hash-fragment URLs.

    So as you can see it’s not necessarily a canonical implementation of the MVC design pattern but it allows you to work that way with high flexibility.

    Getting started or Digging into it

    Digging into Backbone is not that simple. As you can see, I haven’t tried to provide code samples in this article. Even if the documentation is well written, it’s sometimes a bit difficult to understand how to use the full API. Fortunately, there are some very good tutorials and projects out there, and I recommend the following:

    If you know some other good resources, feel free to add it through the comments ;)

    Limits and precautions

    One of the biggest limitations of Backbone is its dependency on two other libraries: Underscore and jQuery (or a jQuery-like library such as Zepto). The former provides some very useful (and missing) features to JavaScript; the latter is conveniently used to access and manipulate the DOM easily, as well as to deal with DOM events.

    Another point you should be aware of is that Backbone remains a very low-level library that can be hard to deploy and to use easily. This is mainly due to the fact that it’s just a library rather than a full framework with coding conventions. Some side projects try to make it more user-friendly; one of the best known is the Chaplin project.

    Conclusion

    Backbone is one of the best libraries to help you build powerful applications. Even if its MVC implementation is somewhat unconventional, it’s a very good way to structure your code and to let your code base grow without too much trouble. Of course there are other libraries that do similar things, such as Ember or Knockout. If you plan to work on a large application, you really should consider using one of them.

  5. Creating the future of mobile with Firefox OS – resources, docs and more!

    Just under a month ago I wrote a personal post about my thoughts on Firefox OS and why I think there is something ‘magical’ about what it stands for and the possibilities it brings to the table. This post is a follow-up that aims to cover much of the same ground but with extra detail and more of a technical focus.

    What is Firefox OS?

    In short, Firefox OS is about taking the technologies behind the Web, like JavaScript, and using them to produce an entire mobile operating system. It’s effectively a mobile OS powered by JavaScript!

    Firefox OS screenshots

    This is achieved with a custom version of Gecko, the rendering engine in Firefox, that introduces a variety of new JavaScript APIs needed to create a phone-like experience; things like WebSMS to send text messages, and WebTelephony to make phone calls.
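    As a sketch of how an app might guard for these capabilities before using them (the navigator.mozTelephony and navigator.mozSms entry points are the ones the B2G documentation describes; the helper function itself is hypothetical):

    ```javascript
    // Hypothetical capability check: Firefox OS exposes WebTelephony as
    // navigator.mozTelephony and WebSMS as navigator.mozSms. On other
    // platforms these properties are simply absent, so an app should
    // test for them before calling into the APIs.
    function phoneCapabilities(nav) {
      return {
        telephony: !!(nav && nav.mozTelephony),
        sms: !!(nav && nav.mozSms)
      };
    }
    ```

    In a Firefox OS app you would pass the real navigator object and degrade gracefully when a capability is missing.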

    You might be wondering what’s running Gecko, as a phone can’t naturally boot directly into Gecko. To do that, the phone boots into a very lightweight Linux kernel that, in turn, boots the Gecko process. The process is a little more involved than that and much more detail can be found in the B2G Architecture documentation, including how Gecko accesses the radio hardware and other phone-specific functionality.

    The Firefox OS project also aims to combine many of the other projects at Mozilla into a single vision, what we refer to as the Web as the platform. These projects include the Open Web Apps initiative and Persona, our solution to identity and logins on the Web (formerly known as BrowserID). It’s the combination of these various technologies that completes Firefox OS.

    If you want to find out more technical information about the OS then definitely check out the Firefox OS pages on MDN.

    Why Firefox OS?

    A couple of common questions that come up are, “Why Firefox OS?” and more specifically, “Why build a mobile OS using JavaScript?” These are incredibly important questions so let’s take a moment to delve into them in a little detail.

    Why build a mobile OS using JavaScript?

    Answering this question can quite simply be boiled down to one sentence: because it’s possible. It’s not the one and only answer, but it succinctly handles most of the arguments against JavaScript being used in this way.

    A longer answer is that a JavaScript-powered OS unlocks a whole range of possibilities that aren’t normally or easily available to developers and users with existing operating systems.

    The most obvious of the possibilities is the ability to build applications using the technologies that we already use to build websites; namely JavaScript, CSS, and HTML. While not a truly unique feature of Firefox OS — projects like PhoneGap have done this for years on ‘native’ platforms — it allows developers everywhere to create mobile applications without having to learn native languages and APIs.

    Another draw of JavaScript is that it’s both extremely well documented and free to develop with. Anyone could sit down for a weekend and put together an application without having to pay for a single thing. Obviously that’s not true in the majority of cases, as people tend to buy their own hosting or tooling, but theoretically there is nothing to stop you building with these technologies for free.

    What’s arguably most interesting about JavaScript being used in this way is that it inherently enables physical devices to communicate using the same APIs that we already use on websites. In effect, instead of accessing the Web through a mobile browser the entire phone is now capable of accessing and communicating with any Web API. For example, there is nothing to stop you building an application for Firefox OS that uses WebRTC (once added) to create Skype-like P2P video communication between phones, desktop computers, or anything else that supports WebRTC.

    This really only scrapes the surface of “Why JavaScript?” but it certainly gives you a feel of how this is both interesting and important, beyond the tired debate of ‘native’ vs. Web. If you’re still not convinced, just think for a moment about how you can now customise an entire mobile OS using nothing but JavaScript. You’d be hard pushed to deny that it’s pretty darn interesting!

    OK, but why Firefox OS?

    Effectively, Firefox OS has been built to put our money where our mouth is (so to speak) and prove that JavaScript is capable of doing what we say it can do. However, there is much more to the project than just proving that the technology is fast enough.

    The first reason ‘Why Firefox OS’ is that the mobile ecosystem is overrun with proprietary platforms, most of which prevent you from easily moving between various platforms. What Firefox OS aims to achieve is a truly ‘open’ platform that doesn’t lock you in and inherently makes it as easy as possible to move between devices as and when you choose.

    Mozilla is effectively replicating its success with Firefox, in which it stormed the browser market and showed users that there is an alternative, one that lets them be in control of how they use the Web. In this case, it’s less about browsers and more about mobile platforms and application portability.

    Another reason is that Firefox OS is an attempt to push the Web forward into the world of physical devices. One direct benefit of this is the addition of brand new Web standards and APIs that allow for things like hardware access using JavaScript.

    Plenty of challenges

    It’s fair to say that the Firefox OS journey will contain a number of technical challenges along the way; however, that’s part of the fun and part of the reason why we’re working on it.

    One of those challenges is how to manage an apps ecosystem that is open and distributed. This is something that we are tackling with the Open Web Apps initiative and the Mozilla Marketplace. It’s a challenge that we are dealing with as things progress and as we learn more about how things work best, as is the nature with new ways of thinking.
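    For context, an Open Web App describes itself to the Marketplace through a small JSON manifest file. A minimal sketch (the field names follow the Open Web Apps documentation; the concrete values here are invented):

    ```json
    {
      "name": "Example App",
      "description": "A hypothetical app packaged for the Open Web Apps ecosystem",
      "launch_path": "/index.html",
      "icons": {
        "128": "/img/icon-128.png"
      },
      "developer": {
        "name": "Jane Developer",
        "url": "https://example.com"
      },
      "default_locale": "en"
    }
    ```

    Because the manifest is just JSON served over HTTP, any site can publish one, which is what makes a distributed, open app ecosystem possible in the first place.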

    Another of the challenges is making sure that the phone runs as fast as possible, creating the best experience possible. This also relates to questions raised within the developer community around the performance capabilities of JavaScript, particularly when it is used to do things that are perceived to be complex, or when it is compared against ‘native’ technologies. This is a challenge that we are taking very seriously and one which we feel we can overcome. In fact, it’s a challenge that I believe we have already overcome.

    One prime example of how capable JavaScript has become is seeing beautiful JavaScript games running in Firefox OS at near-enough 60 frames per second, on a low-end, cheap phone.

    Beyond the mobile phone

    While the phone aspect of Firefox OS is immediately interesting, you should consider the wider implications of a JavaScript OS and what possibilities it unlocks. For example, what other devices could benefit from being powered by JavaScript? And, what would a network of JavaScript-powered devices allow us to do — things like Ubiquitous Computing, perhaps?

    These aren’t things that we are exploring directly at Mozilla, but they are things that are now inherently possible as a result of the work that we’re doing. There is nothing to stop you taking the Firefox OS source code from GitHub and porting it to a device that we’ve never even considered.

    We’re already starting to see this happen with examples like a Firefox OS port for the Raspberry Pi, as well as another for the Pandaboard.

    What about a game console powered by Firefox OS? A TV, or set-top box? What about a fridge? Individually, these are all interesting projects, but together they offer something we don’t really have at the moment, a network of different devices powered by the same, open technologies and able to access and communicate across the Web with the same APIs.

    We are a long way away from that kind of world but it is projects like Firefox OS that may pave the way for it to happen. You could even be a part of it!

    Getting started with Firefox OS

    The hope is that by now you’re sufficiently interested in Firefox OS to begin exploring, experimenting and playing with it. The good news is that there are a whole host of ways that you can do that.

    Documentation

    One of the first places to start is the MDN documentation surrounding Firefox OS and its related technologies. This is where you’ll find everything you need to know about the developer-facing aspects of the platform.

    If you’re more interested with the inner-workings of the platform then you’ll want to cast an eye over the B2G wiki, which outlines much of the internals in plenty of detail.

    Source code

    If you’re keen to get to grips with the source code of Firefox OS then you’ll want to head over to GitHub and check it out. The two main repositories that you want are ‘b2g’ (the underlying Gecko engine) and ‘gaia’ (everything you can see, the OS).

    Getting involved

    There are a few ways to get involved with the project. You could check out some of the issues and get involved in fixing them, or perhaps just hang out in the mailing list for B2G, or the one for Gaia, and take part in the discussions there.

    If you just want to ask a few immediate questions then try out the #b2g and #gaia rooms on irc.mozilla.org. We’re all pretty friendly!

    Development options

    If you just want to dig in and make some applications, or perhaps customise the OS, then you’ll need to know about the various development options available to you. They are covered in some detail on MDN but here is a brief overview.

    The simplest method to get started is running Gaia (the visual side of Firefox OS) within Firefox Nightly. This doesn’t give you a true representation of a phone environment but it will allow you to install applications and use all of the developer tools within the browser that you’re already used to.

    Slightly more involved than Nightly is using the desktop B2G client. This is effectively a chromeless build of Firefox that looks phone-like and has some added APIs that aren’t normally available in standard Firefox. This doesn’t replicate phone hardware but it’s the next best thing before starting to develop on an actual device.

    Setting up the desktop B2G client isn’t too hard, but it could be made easier. In the meantime, projects like r2d2b2g aim to make the process super simple. Definitely worth checking out.

    The last method, and arguably the most important one, is developing on an actual Firefox OS device. This is the only method that will give you a true representation of how your application will perform. It is also the only method that will give you access to all the new APIs that come with Firefox OS.

    Right now, you’ll need to build and install Firefox OS on one of the supported devices. In the future you will be able to skip this step and get access to devices that already run Firefox OS. We don’t have any dates for that just yet.

    Go forth and be part of something big

    My hope is that by now you have enough inspiration and information to go forth and begin building for this new platform, powered by the technologies you already use. We hope you do, and we’d love to see what you come up with.

    It’s not every day that you get the opportunity to be a part of something that could quite literally change the way we do things.

  6. HTML5 in Sao Paulo, Brazil – the bootleg recordings

    It is always nice to have the opportunity to get to travel and meet developers in various communities in the world: to understand their context, challenges and interests!

    In April I was in South America, and part of that included giving two talks at an MDN Hack Day (well, evening) in Sao Paulo in Brazil. They were filmed with a handcam from the front row by Laura Loenert, but I do believe that the videos with sound, combined with the slides, can prove to be useful for sharing – see them as the bootleg version. :-)

    Besides, I prefer that we share what we have – even though it might be rough – rather than have a lot of material that never gets out there.

    So, I hope you will enjoy these:

    HTML5, The Open Web, and what it means for you

    Video


    (If you’ve opted in to HTML5 video on YouTube you will get that; otherwise it will fall back to Flash)

    Slides


    HTML5, The Open Web, and what it means for you – MDN Hack Day, Sao Paulo from Robert Nyman

    JavaScript APIs – The Web is the Platform

    This talk is similar to the The Web is the Platform presentation at the .toster conference, but with a couple of other bits added, including a look at Mozilla Collusion.

    Video

    Slides

    JavaScript APIs – The Web is the Platform – MDN Hack Day, Sao Paulo from Robert Nyman

  7. No Single Benchmark for the Web

    Google released a new JavaScript benchmark a few days ago called Octane. New benchmarks are always welcome, as they push browsers to new levels of performance in new areas. I was particularly pleased to see the inclusion of pdf.js, which unlike most benchmarks is real-world code, as well as the GB Emulator which is a very interesting type of performance-intensive code. However, every benchmark suite has limitations, and it is worth keeping that in mind, especially given the new benchmark’s title in the announcement and in the project page as “The JavaScript Benchmark Suite for the Modern Web” – which is a high goal to set for a single benchmark.

    Now, every benchmark must pick some code to run out of all the possible code out there, and picking representative code is very hard. So it is always understandable that benchmarks are never 100% representative of the code that exists and is important. However, even taking that into account, I have concerns with some of the code selected to appear in Octane: There are better versions of two of the five new benchmarks, and performance on those better versions is very different from performance on the versions that do appear in Octane.

    Benchmarking black boxes

    One of the new benchmarks in Octane is “Mandreel”, which is the Bullet physics engine compiled by Mandreel, a C++ to JS compiler. Bullet is definitely interesting code to include in a benchmark. However the choice of Mandreel’s port is problematic. One issue is that Mandreel is a closed-source compiler, a black box, making it hard to learn from it what kind of code is efficient and what should be optimized. We just have a generated code dump and, since Mandreel is a commercial product, it would cost money for anyone to reproduce those results with modifications to the original C++ being run or with a different codebase. We also do not have the source code compiled for this particular benchmark: Bullet itself is open source, but we don’t know the specific version compiled here, nor do we have the benchmark driver code that uses Bullet, both of which would be necessary to reproduce these results using another compiler.

    An alternative could have been to use Bullet compiled by Emscripten, an open source compiler that similarly compiles C++ to JS (disclaimer: I am an Emscripten dev). Aside from being open, Emscripten also has a port of Bullet (a demo can be seen here) that can interact in a natural way with regular JS, making it usable in normal web games and not just compiled ones, unlike Mandreel’s port. This is another reason for preferring the Emscripten port of Bullet instead.

    Is Mandreel representative of the web?

    The motivation Google gives for including Mandreel in Octane is that Mandreel is “used in countless web-based games.” It seems that Mandreel is primarily used in the Chrome Web Store (CWS) and less outside in the normal web. The quoted description above is technically accurate: Mandreel games in the CWS are indeed “web-based” (written in JS+HTML+WebGL) even if they are not actually “on the web”, where by “on the web” I mean outside of the walled garden of the CWS and in the normal web that all browsers can access. And it makes perfect sense that Google cares about the performance of code that runs in the CWS, since Google runs and profits from that store. But it does call into question the title of the Octane benchmark as “The JavaScript Benchmark Suite for the Modern Web.”

    Performance of generated code is highly variable

    With that said, it is still fair to say that compiler-generated code is increasing in importance on the web, so some benchmark must be chosen to represent it. The question is how much the specific benchmark chosen represents compiled code in general. On the one hand the compiled output of Mandreel and Emscripten is quite similar: both use large typed arrays, the same Relooper algorithm, etc., so we could expect performance to be similar. That doesn’t seem to always be the case, though. When we compare Bullet compiled by Mandreel with Bullet compiled by Emscripten – I made a benchmark of that a while back, it’s available here – then on my MacBook pro, Chrome is 1.5x slower than Firefox on the Emscripten version (that is, Chrome takes 1.5 times as long to execute in this case), but 1.5x faster on the Mandreel version that Google chose to include in Octane (that is, Chrome receives a score 1.5 times larger in this case). (I tested with Chrome Dev, which is the latest version available on Linux, and Firefox Aurora which is the best parallel to it. If you run the tests yourself, note that in the Emscripten version smaller numbers are better while the opposite is true in the Octane version.)

    (An aside, not only does Chrome have trouble running the Emscripten version quickly, but that benchmark also exposes a bug in Chrome where the tab consistently crashes when the benchmark is reloaded – possibly a dupe of this open issue. A serious problem of that nature, that does not happen on the Mandreel-compiled version, could indicate that the two were optimized differently as a result of having received different amounts of focus by developers.)
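    Since the two suites report in opposite directions (time in ms, where lower is better, versus a score, where higher is better), comparisons like the 1.5x figures above boil down to a simple reciprocal relationship. A tiny illustrative helper (the function name and numbers are mine, not from either benchmark):

    ```javascript
    // Convert a time-based measurement (ms, lower is better) into a
    // relative speed factor against a baseline measurement. A result
    // greater than 1 means "faster than the baseline", which mirrors
    // how score-based suites like Octane report their numbers.
    function relativeSpeed(baselineMs, measuredMs) {
      return baselineMs / measuredMs;
    }

    // If browser A takes 1500 ms and browser B takes 1000 ms on the
    // same test, B is 1.5x the speed of A, and A is 1/1.5x the speed of B.
    ```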

    Another issue with the Mandreel benchmark is the name. Calling it Mandreel implies it represents all Mandreel-generated code, but there can be huge differences in performance depending on what C/C++ code is compiled, even with a single compiler. For example, Chrome can be 10-15x slower than Firefox on some Emscripten-compiled benchmarks (example 1, example 2) while on others it is quite speedy (example). So calling the benchmark “Mandreel-Bullet” would have been better, to indicate it is just one Mandreel-compiled codebase, which cannot represent all compiled code.

    Box2DWeb is not the best port of Box2D

    “Box2DWeb” is another new benchmark in Octane, in which a specific port of Box2D to JavaScript is run, namely Box2DWeb. However, as seen here (see also this), Box2DWeb is significantly slower than other ports of Box2D to the web, specifically Mandreel and Emscripten’s ports from the original C++ that Box2D is written in. Now, you can justify excluding the Mandreel version because it cannot be used as a library from normal JS (just as with Bullet before), but the Emscripten-compiled one does not have that limitation and can be found here. (Demos can be seen here and here.)

    Another reason for preferring the Emscripten version is that it uses Box2D 2.2, whereas Box2DWeb uses the older Box2D 2.1. Compiling the C++ code directly lets the Emscripten port stay up to date with the latest upstream features and improvements far more easily.

    It is possible that Google surveyed websites and found that the slower Box2DWeb was more popular, although I have no idea whether that was the case, but if so that would partially justify preferring the slower version. However, even if that were true, I would argue that it would be better to use the Emscripten version because as mentioned earlier it is faster and more up to date. Another factor to consider is that the version included in Octane will get attention and likely an increase in adoption, which makes it all the more important to select the one that is best for the web.

    I put up a benchmark of Emscripten-compiled Box2D here, and on my machine Chrome is 3x slower than Firefox on that benchmark, but 1.6x faster on the version Google chose to include in Octane. This is a similar situation to what we saw earlier with the Mandreel/Bullet benchmark and it raises the same questions about how representative a single benchmark can be.

    Summary

    As mentioned at the beginning, all benchmarks are imperfect. And the fact that the specific code samples in Octane are ones that Chrome runs well does not mean the code was chosen for that reason: The opposite causation is far more likely, that Google chose to focus on optimizing those and in time made Chrome fast on them. And that is how things properly work – you pick something to optimize for, and then optimize for it.

    However, in 2 of the 5 new benchmarks in Octane there are good reasons for preferring alternative, better versions of those two benchmarks as we saw before. Now, it is possible that when Google started to optimize for Octane, the better options were not yet available – I don’t know when Google started that effort – but the fact that better alternatives exist in the present makes substantial parts of Octane appear less relevant today. Of course, if performance on the better versions was not much different than the Octane versions then this would not matter, but as we saw there were in fact significant differences when comparing browsers on those versions: One browser could be significantly better on one version of the same benchmark but significantly slower on another.

    What all of this shows is that there cannot be a single benchmark for the modern web. There are simply too many kinds of code, and even when we focus on one of them, different benchmarks of that particular task can behave very differently.

    With that said, we shouldn’t be overly skeptical: Benchmarks are useful. We need benchmarks to drive us forward, and Octane is an interesting new benchmark that, even with the problems mentioned above, does contain good ideas and is worth focusing on. But we should always be aware of the limitations of any single benchmark, especially when a single benchmark claims to represent the entire modern web.


  8. The Web Developer Toolbox: Modernizr

    This is the third in a series of articles dedicated to useful libraries that all web developers should have in their toolbox. The intent is to show you what those libraries can do and help you to use them at their best. This third article is dedicated to the Modernizr library.

    Introduction

    Modernizr is a library originally written by Faruk Ateş.

    It is one of the key libraries for building cross-browser websites or applications in a modern fashion. The heart of the library is the web design pattern known as Progressive enhancement & Graceful degradation. This design pattern does not require Modernizr, but Modernizr can make things a lot easier. It detects the availability of native implementations for next-generation web technologies such as HTML5 or CSS3 and allows you to adapt your application accordingly, which is way better than trying some ugly voodoo user-agent sniffing.

    Basic usage

    Using this library is amazingly simple: download it, link it in your pages, and you’re done!

    Modernizr will automatically add some CSS classes to the root html element. For example, if you want to test Web Sockets support, it will add a websockets class to the html element if the browser supports that feature; otherwise it will add the no-websockets class. It will do the same with JavaScript by adding a global variable Modernizr.websockets with a boolean value.
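    Conceptually, what Modernizr does here is just systematic feature detection. A stripped-down sketch of the idea (this is not Modernizr’s actual source; the test table and class naming are simplified):

    ```javascript
    // Minimal Modernizr-style detection: run each test function,
    // record the boolean result, and build the class names that
    // would be added to the root <html> element ("feature" when
    // supported, "no-feature" otherwise).
    function detect(tests) {
      var results = {}, classes = [];
      for (var name in tests) {
        var supported = !!tests[name]();
        results[name] = supported;
        classes.push(supported ? name : 'no-' + name);
      }
      return { results: results, classes: classes };
    }
    ```

    Calling detect({ websockets: function () { return 'WebSocket' in window; } }) in a browser with native Web Sockets would yield a true result and a websockets class, mirroring what Modernizr does for you automatically.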

    Let’s see a simple example: Doing some stuff with RGBa colors.

    First: Download a customized version of Modernizr

    Modernizr, download page.

    Second: Link it to your document

    <!DOCTYPE html>
    <!--
    The "no-js" class is here as a fallback. 
    If Modernizr is not running, you'll know 
    something is wrong and you will be able to act 
    accordingly. In contrast, if everything goes well, 
    Modernizr will remove that special class.
    -->
    <html class="no-js">
    <head>
        <meta charset="utf-8">
        <title>I want to do stuff with RGBa</title>
        <script src="modernizr.js"></script>
    </head>
    <body>
    ...
    </body>
    </html>

    Third: Use it

    With CSS

    .rgba div {
        /* Do things with CSS for browsers that support RGBa colors */
    }
     
    .no-rgba div {
        /* Do things with CSS for browsers that DO NOT support RGBa colors */
    }

    With JavaScript

    if(Modernizr.rgba) {
        // Do things with JS for browsers that support RGBa colors
    } else {
        // Do things with JS for browsers that DO NOT support RGBa colors
    }

    Let’s see this silly example in action:

    Advanced usage

    The basic usage is already awesome when you have to deal with a heterogeneous environment (such as mobile browsers), but there’s more.

    Conditional loading

    Modernizr offers a convenient way to do conditional loading. In fact, the YepNope library is a standalone spin-off of the Modernizr project. So, if you wish, you can bundle YepNope directly inside Modernizr. It’s perfect if you want to load polyfills based on specific browser capabilities.

    Modernizr.load({
        test: Modernizr.indexeddb,
        nope: "indexeddb-polyfill.js"
    });

    This is a very powerful tool: do not hesitate to read the documentation. Note that the Modernizr team maintains a list of very accurate polyfills. Feel free to use whatever you need (with caution, of course).

    Custom tests

    Modernizr comes with a set of 44 tests for mainstream technologies. If you need to test some other technologies, Modernizr provides an API to build and plug in your own tests.

    // Let's test the native JSON support ourselves
    Modernizr.addTest('json', function(){
        return window.JSON
            && window.JSON.parse
            && typeof window.JSON.parse === 'function'
            && window.JSON.stringify
            && typeof window.JSON.stringify === 'function';
    });
    

    Assuming the above test passes, there will now be a json class on the html element and Modernizr.json will be true. Otherwise, there will be a no-json class on the html element and Modernizr.json will be false.

    Dealing with CSS prefixes

    CSS prefixes are a very sensitive subject. Modernizr provides a very useful cross-browser tool to deal with this issue: Modernizr.prefixed(). This method works with CSS properties (in the CSS OM camelCase style) as well as with DOM properties.

    For example, Modernizr.prefixed("transition") will return “MozTransition” with Firefox but “WebkitTransition” with Safari and Chrome.
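    Under the hood, this kind of lookup can be implemented by probing a style object for each vendor-prefixed variant of the property. A hand-rolled sketch (Modernizr’s real implementation is more thorough; the helper name here is invented):

    ```javascript
    // Find the supported (possibly vendor-prefixed) form of a camelCase
    // CSS OM property on a given style object, or null if none exists.
    function findPrefixed(prop, style) {
      var prefixes = ['', 'Moz', 'Webkit', 'ms', 'O'];
      var capitalized = prop.charAt(0).toUpperCase() + prop.slice(1);
      for (var i = 0; i < prefixes.length; i++) {
        var candidate = prefixes[i] ? prefixes[i] + capitalized : prop;
        if (candidate in style) return candidate;
      }
      return null;
    }
    ```

    Run against document.documentElement.style, findPrefixed("transition", style) would return “MozTransition” in the Firefox of that era, matching what Modernizr.prefixed("transition") reports.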

    Testing media-queries

    There is currently no simple way to test a media query from JS in any browser. To help with that, Modernizr has a special tool: Modernizr.mq(). This method will test the media query of your choice and will return true or false accordingly.

    if(Modernizr.mq("screen and (max-width: 400px)")) {
        // Do some stuff for small screens
    }

    Limits and precautions

    This library is a fantastic tool but it’s not magic. Use it with caution and do not forget about other techniques for dealing with unpredictable behaviors. For example, do not forget to rely on the CSS cascade when it’s sufficient.

    The following example is a huge misuse of Modernizr:

    div {
        color : white;
    }
     
    .rgba div {
        background : rgba(0,0,0,.8);
    }
     
    .no-rgba div {
        background : #333;
    }

    If for some reason Modernizr is not executed, your text will not be readable (white text over a white background). In this specific case, you are better off doing the following (which, by the way, is also easier to read and maintain):

    div {
        color : white;
        background : #333;
        background : rgba(0,0,0,.8);
    }

    So, don’t be blind when you use this library; take the time to think about what will happen if Modernizr is not available. In many cases you have existing fallbacks, so don’t forget to use them.

    Conclusion

    Modernizr is one of the most useful tools when you have to build large cross-browser projects, from the oldest Internet Explorer 6 to the latest Firefox Nightly. Once you master it, you will be able to add some magic to your sites and applications. However, as with all powerful tools, it takes some time to become comfortable with it and to use it wisely to its full potential. But Modernizr is definitely worth the effort.

  9. The Web Developer Toolbox: ThreeJS

    This is the second of a series of articles dedicated to the useful libraries that all web developers should have in their toolbox. The intent is to show you what those libraries can do and help you to use them at their best. This second article is dedicated to the ThreeJS library.

    Introduction

    ThreeJS is a library originally written by Ricardo Cabello Miguel, aka “Mr. Doob”.

    This library makes WebGL accessible to common human beings. WebGL is a powerful API to manipulate 3D environments. This web technology is standardized by the Khronos Group, and Firefox, Chrome and Opera now implement it as a 3D context for the HTML canvas tag. WebGL is basically a web version of another standard: OpenGL ES 2.0. As a consequence, this API is a “low level” API that requires skills and knowledge beyond what web designers are used to. That’s where ThreeJS comes into play. ThreeJS gives web developers access to the power of WebGL without all the knowledge required by the underlying API.

    Basic usage

    The library has good documentation with many examples. You’ll notice that some parts of the documentation are not complete yet (feel free to help). However, the library and examples source code are very well structured, so do not hesitate to read the source.

    Even though ThreeJS simplifies many things, you still have to be comfortable with some basic 3D concepts. Basically, ThreeJS uses the following concepts:

    1. The scene: the place where all 3D objects will be placed and manipulated in a 3D space.
    2. The camera: a special 3D object that will define the rendering point of view as well as the type of spatial rendering (perspective or isometric)
    3. The renderer: the object in charge of using the scene and the camera to render your 3D image.

    Within the scene, you will have several 3D objects which can be of the following types:

    • A mesh: A mesh is an object made of a geometry (the shape of your object) and a material (its colors and texture)
    • A point light: A special object that defines a light source to highlight all your meshes.
    • A camera, as described above.

    The following example will draw a simple wireframe sphere inside an HTML element with the id “myPlanet”.

    /**
     * First, let's prepare some context
     */
     
    // The WIDTH of the scene to render
    var __WIDTH__  = 400,
     
    // The HEIGHT of the scene to render
        __HEIGHT__ = 400,
     
    // The angle of the camera that will show the scene
    // It is expressed in degrees
        __ANGLE__ = 45,
     
    // The shortest distance the camera can see
        __NEAR__  = 1,
     
    // The farthest distance the camera can see
        __FAR__   = 1000,
     
    // The basic hue used to color our object
        __HUE__   = 0;
     
    /**
     * To render a 3D scene, ThreeJS needs 3 elements:
     * A scene where to put all the objects
     * A camera to manage the point of view
     * A renderer place to show the result
     */
    var scene  = new THREE.Scene(), 
        camera = new THREE.PerspectiveCamera(__ANGLE__, 
                                             __WIDTH__ / __HEIGHT__, 
                                             __NEAR__, 
                                             __FAR__),
        renderer = new THREE.WebGLRenderer();
     
    /**
     * Let's prepare the scene
     */
     
    // Add the camera to the scene
    scene.add(camera);
     
    // As all objects, the camera is put at the 
    // 0,0,0 coordinate, let's pull it back a little
    camera.position.z = 300;
     
    // We need to define the size of the renderer
    renderer.setSize(__WIDTH__, __HEIGHT__);
     
    // Let's attach our rendering zone to our page
    document.getElementById("myPlanet").appendChild(renderer.domElement);
     
    /**
     * Now we are ready, we can start building our sphere
     * To do this, we need a mesh defined with:
     *  1. A geometry (a sphere) 
     *  2. A material (a color that reacts to light)
     */
    var geometry, material, mesh;
     
    // First let's build our geometry
    //
    // There are other parameters, but you basically just 
    // need to define the radius of the sphere and the 
    // number of its vertical and horizontal divisions.
    //
    // The 2 last parameters determine the number of 
    // vertices that will be produced: The more vertices you use, 
    // the smoother the form; but it will be slower to render. 
    // Make a wise choice to balance the two.
    geometry = new THREE.SphereGeometry( 100, 20, 20 );
     
    // Then, prepare our material
    var myMaterial = {
        wireframe : true,
        wireframeLinewidth : 2
    };
     
    // We just have to build the material now
    material = new THREE.MeshPhongMaterial( myMaterial );
     
    // Add some color to the material
    material.color.setHSV(__HUE__, 1, 1);
     
    // And we can build our mesh
    mesh = new THREE.Mesh( geometry, material );
     
    // Let's add the mesh to the scene
    scene.add( mesh );
     
    /**
     * To be sure that we will see something, 
     * we need to add some light to the scene
     */
     
    // Let's create a point light
    var pointLight = new THREE.PointLight(0xFFFFFF);
     
    // and set its position
    pointLight.position.x = -100;
    pointLight.position.y = 100;
    pointLight.position.z = 400;
     
    // Now, we can add it to the scene
    scene.add( pointLight );
     
     
    // And finally, it's time to see the result
    renderer.render( scene, camera );
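
    The comments in the geometry section above mention the trade-off between vertex count and rendering speed. As a rough sketch of how fast the work grows (assuming a standard latitude/longitude tessellation; the exact count depends on the ThreeJS version):

```javascript
// Rough vertex count for a latitude/longitude sphere tessellation.
// This is an approximation; the exact count varies by ThreeJS version.
function sphereVertexCount(widthSegments, heightSegments) {
    return (widthSegments + 1) * (heightSegments + 1);
}
```

    With the SphereGeometry(100, 20, 20) call above, that is about 441 vertices; doubling both segment counts roughly quadruples the rendering work.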

    And if you want to animate it (for example, make the sphere spin), it’s this easy:

    function animate() {
        // beware, you may need a shim to use
        // requestAnimationFrame across browsers
        requestAnimationFrame( animate );
     
        // First, rotate the sphere
        mesh.rotation.y -= 0.003;
     
        // Then render the scene
        renderer.render( scene, camera );
    }
     
    animate();
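
    As the comment in animate() warns, requestAnimationFrame was still vendor-prefixed in some browsers. A minimal shim sketch (the prefixed names below are the common vendor variants; adjust for your target browsers):

```javascript
var raf;
if (typeof window !== "undefined") {
    // pick the native or a vendor-prefixed version, bound to window
    var native = window.requestAnimationFrame ||
                 window.mozRequestAnimationFrame ||
                 window.webkitRequestAnimationFrame ||
                 window.msRequestAnimationFrame;
    raf = native ? native.bind(window) : null;
}
if (!raf) {
    // fall back to ~60 fps with setTimeout
    raf = function (callback) {
        return setTimeout(callback, 1000 / 60);
    };
}
```

    With this in place, animate() can call raf(animate) regardless of the browser.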

    JSFiddle demo.

    Advanced usage

    Once you master the basics, ThreeJS provides you with some advanced tools.

    Rendering system

    As an abstraction layer, ThreeJS offers options to render a scene with something other than WebGL: you can use the Canvas 2D API as well as SVG to perform your rendering. There are some differences between these rendering contexts. The most obvious one is performance. Because WebGL is hardware accelerated, rendering complex scenes is amazingly faster with it. On the other hand, because WebGL does not always deal well with anti-aliasing, SVG or Canvas 2D rendering can be better if you want to perform some cel-shading (cartoon-like) effects. As a special advantage, SVG rendering gives you a full DOM tree of objects, which can be useful if you want access to those objects. It can have a high cost in terms of performance (especially if you animate your scene), but it saves you from having to build a full retained-mode graphics API yourself.
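
    The choice between renderers can be expressed as a simple fallback chain. A minimal sketch (the capability flags here are assumptions — in a real page you would feed it actual feature-detection results):

```javascript
// Pick a ThreeJS renderer name from a set of detected capabilities.
// The order reflects the performance ranking discussed above.
function chooseRendererName(caps) {
    if (caps.webgl)  return "WebGLRenderer";
    if (caps.canvas) return "CanvasRenderer";
    return "SVGRenderer";
}

// e.g. chooseRendererName({ webgl: false, canvas: true })
```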

    Mesh and particles

    ThreeJS is perfect for rendering on top of WebGL, but it is not an authoring tool. To model 3D objects, you have a choice of 3D software. Conveniently, ThreeJS comes with many scripts that make it easy to import meshes from several sources (examples include Blender, 3DS Max, and the widely supported OBJ format).
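
    To get a feel for why the OBJ format is so widely supported, here is a toy sketch that extracts only the vertex positions from OBJ text (real loaders, like the ones bundled with ThreeJS, also handle faces, normals, UVs and materials):

```javascript
// Toy sketch: pull the vertex positions ("v x y z" lines) out of OBJ text.
function parseObjVertices(objText) {
    return objText.split("\n")
        .filter(function (line) { return line.indexOf("v ") === 0; })
        .map(function (line) {
            var parts = line.trim().split(/\s+/);
            return [parseFloat(parts[1]),
                    parseFloat(parts[2]),
                    parseFloat(parts[3])];
        });
}
```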

    It’s also possible to deploy particle systems easily, as well as to use fog, matrices and custom shaders. ThreeJS also comes with a few pre-built materials: Basic, Face, Lambert, Normal, and Phong. A WebGL developer will be able to build custom materials on top of the library, which provides some really good helpers. Obviously, building such custom things requires specific skills.

    Animating mesh

    While using requestAnimationFrame is the easiest way to animate a scene, ThreeJS provides a couple of useful tools for animating meshes individually: a full API to define how to animate a mesh, and the ability to use “bones” to morph and change a mesh.
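
    Many per-frame mesh animations boil down to interpolating a property toward a target on each frame. A minimal sketch of that building block (the mesh and targetAngle names are illustrative, not part of the ThreeJS API):

```javascript
// Linear interpolation: the building block of most tween systems.
function lerp(from, to, t) {
    return from + (to - from) * t;
}

// Per frame, ease a property a fixed fraction toward its target, e.g.:
// mesh.rotation.y = lerp(mesh.rotation.y, targetAngle, 0.05);
```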

    Limits and precaution

    One of the biggest limitations of ThreeJS is related to WebGL. If you want to use it to render your scene, you are constrained by the limitations of that technology: you become hardware dependent. All browsers that claim to support WebGL have strong requirements in terms of hardware support, and some browsers will not render anything if they are not running on appropriate hardware. The best way to avoid trouble is to use a library such as Modernizr to switch between rendering systems based on each browser’s capabilities. However, take care when using non-WebGL rendering systems, because they are limited (e.g. the Phong material is only supported in a WebGL context) and much slower.
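
    A detection routine in the spirit of Modernizr’s WebGL check might look like this sketch (the “experimental-webgl” fallback was needed by some browsers of the era):

```javascript
// Feature-detection sketch: report whether a WebGL context can be created.
function supportsWebGL() {
    if (typeof document === "undefined") return false; // not in a browser
    try {
        var canvas = document.createElement("canvas");
        return !!(window.WebGLRenderingContext &&
                  (canvas.getContext("webgl") ||
                   canvas.getContext("experimental-webgl")));
    } catch (e) {
        return false;
    }
}
```

    Note that a successful context creation still does not guarantee good performance on weak hardware.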

    In terms of browser support, ThreeJS supports all browsers that support WebGL, Canvas2D or SVG, which means: Firefox 3.6+, Chrome 9+, Opera 11+, Safari 5+ and even Internet Explorer 9+ if you do not use the WebGL rendering mode. If you want to rely on WebGL, the support is more limited: Firefox 4+, Chrome 9+, Opera 12+, Safari 5.1+ only. You can forget Internet Explorer (even the upcoming IE10) and almost all mobile browsers currently available.

    Conclusion

    ThreeJS drastically simplifies the process of producing 3D images directly in the browser. It gives you the ability to create amazing visual effects with an easy-to-use API, and in doing so it allows you to unleash your creativity.

    In conclusion, here are some cool usages of ThreeJS: