Dev Derby Articles

  1. Announcing the winners of the July 2013 Dev Derby!

    This past summer, some of the most passionate and creative web developers out there innovated with the File API in our July Dev Derby contest. After sorting through the entries, an all-star cast of former judges–Peter Lubbers, Eric Shepherd, and David Walsh–decided on three winners and two runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with the File API today.

    Dev Derby

    The Results

    Winners

    Runners-up

    Congratulations to these winners! As always, this represents only a small portion of the impressive work submitted to the contest. After you have finished playing with these winning demos, be sure to check out the rest. You will not be disappointed.

    The Dev Derby is currently on hiatus, but will be back before long. In the meantime, head over to the Demo Studio to see some general-interest demos and submit your own.

    Further reading

  2. An AR Game: Technical Overview

    An AR Game is the winning entry of the May 2013 Dev Derby. It is an augmented reality game whose objective is to transport rolling play pieces from a 2D physics world into a 3D space. The game is playable on GitHub, and demonstrated on YouTube. This article describes the underlying approaches to the game’s design and engineering.

    Technically the game is a simple coupling of four sophisticated open-source technologies: WebRTC, JSARToolkit, ThreeJS, and Box2D.js. This article describes each one and explains how we wove them together. We will work in a stepwise fashion, constructing the game from the ground up. The code discussed in this article is available on GitHub, with a tag and live link for each tutorial step. Specific bits of summarized source will be referenced in this document, with the full source available through the ‘diff’ links. Videos demonstrating application behaviour are provided where appropriate.

    git clone https://github.com/abrie/devderby-may-2013-technical.git

    This article will first discuss the AR panel (realspace), then the 2D panel (flatspace), and conclude with a description of their coupling.

    Panel of Realspace

    Realspace is what the camera sees — overlaid with augmented units.

    Begin with a Skeleton

    git checkout example_0
    live, diff, tag

    We will organize our code into modules using RequireJS. The starting point is a main module with two skeletal methods common to games: initialize() to invoke startup, and tick() to render every frame. Notice that the game loop is driven by repeated calls to requestAnimationFrame:

    requirejs([], function() {
    
        // Initializes components and starts the game loop
        function initialize() {
        }
    
        // Runs one iteration of the game loop
        function tick() {
            // Request another iteration of the gameloop
            window.requestAnimationFrame(tick);
        }
    
        // Start the application
        initialize();
        tick();
    });
    

    The code so far gives us an application with an empty loop. We will build up from this foundation.

    Give the Skeleton an Eye

    git checkout example_1
    live, diff, tag

    AR games require a realtime video feed: HTML5’s WebRTC provides this through access to the camera, which makes AR games possible in modern browsers like Firefox. Good documentation concerning WebRTC and getUserMedia may be found on developer.mozilla.org, so we won’t cover the basics here.

    A camera library is provided in the form of a RequireJS module named webcam.js, which we’ll incorporate into our example.

    First the camera must be initialized and authorized. The webcam.js module invokes a callback on user consent, then for each tick of the gameloop a frame is copied from the video element to a canvas context. This is important because it makes the image data accessible. We’ll use it in subsequent sections, but for now our application is simply a canvas updated with a video frame at each tick.
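
    For illustration, here is a sketch of the idea behind webcam.js. The names here are assumptions for the sake of the example, not the repo’s exact API, and getUserMedia was still vendor-prefixed at the time of writing:

    // A sketch of a webcam module (assumed names, not the repo's exact API).
    var getUserMedia = (navigator.getUserMedia || navigator.mozGetUserMedia ||
                        navigator.webkitGetUserMedia).bind(navigator);

    function createWebcam(video, canvas, onAuthorized) {
        var context = canvas.getContext("2d");

        getUserMedia(
            { video: true, audio: false },
            function(stream) {
                video.src = window.URL.createObjectURL(stream);
                video.play();
                onAuthorized();
            },
            function(error) {
                console.log("Camera access was refused:", error);
            }
        );

        // Called once per tick: copy the current video frame into the canvas,
        // making the pixel data accessible to the rest of the application.
        function update() {
            context.drawImage(video, 0, 0, canvas.width, canvas.height);
        }

        return { update: update };
    }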

    Something Akin to a Visual Cortex

    git checkout example_2
    live, diff, tag

    JSARToolkit is an augmented reality engine. It identifies and describes the orientation of fiducial markers in an image. Each marker is uniquely associated with a number. The markers recognized by JSARToolkit are available here as PNG images named according to their ID number (although as of this writing the lack of .png extensions confuses GitHub). For this game we will use #16 and #32, consolidated onto a single page.

    JSARToolkit found its beginnings as ARToolkit, which was written in C++ at the University of Washington’s HITLab in Seattle. From there it has been forked and ported to a number of languages including Java, then from Java to Flash, and finally from Flash to JS. This ancestry causes some idiosyncrasies and inconsistent naming, as we’ll see.

    Let’s take a look at the distilled functionality:

    // The raster object wraps the canvas to which we are copying video frames.
    var JSARRaster = new NyARRgbRaster_Canvas2D(canvas);

    // The parameters object specifies the pixel dimensions of the input stream.
    var JSARParameters = new FLARParam(canvas.width, canvas.height);

    // The FLARMultiIdMarkerDetector is the marker detection engine;
    // the second argument is the size of the marker.
    var JSARDetector = new FLARMultiIdMarkerDetector(JSARParameters, 120);
    JSARDetector.setContinueMode(true);

    // Run the detector on a frame; it returns the number of markers detected.
    var threshold = 64;
    var count = JSARDetector.detectMarkerLite(JSARRaster, threshold);
    

    Once a frame has been processed by JSARDetector.detectMarkerLite(), the JSARDetector object contains an index of detected markers. JSARDetector.getIdMarkerData(index) returns the ID number, and JSARDetector.getTransformMatrix(index) returns the spatial orientation. Using these methods is somewhat complicated, but we’ll wrap them in usable helper methods and call them from a loop like this:

    var markerCount = JSARDetector.detectMarkerLite(JSARRaster, threshold);
    
    for( var index = 0; index < markerCount; index++ ) {
        // Get the ID number of the detected marker.
        var id = getMarkerNumber(index);
    
        // Get the transformation matrix of the detected marker.
        var matrix = getTransformMatrix(index);
    }
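
    The helper wrappers referenced above might be implemented along these lines, following JSARToolkit demo code (the real code also flips some signs to convert between coordinate systems, omitted here for clarity):

    // Decode the marker's ID number from its data packet.
    function getMarkerNumber(index) {
        var data = JSARDetector.getIdMarkerData(index);
        if (data.packetLength > 4) {
            return -1; // not an ID we can decode
        }
        var id = 0;
        for (var i = 0; i < data.packetLength; i++) {
            id = (id << 8) | data.getPacketData(i);
        }
        return id;
    }

    // Flatten the detector's 3x4 transformation result into a Float32Array.
    function getTransformMatrix(index) {
        var result = new NyARTransMatResult();
        JSARDetector.getTransformMatrix(index, result);

        var matrix = new Float32Array(16);
        matrix[0]  = result.m00; matrix[1]  = result.m10; matrix[2]  = result.m20; matrix[3]  = 0;
        matrix[4]  = result.m01; matrix[5]  = result.m11; matrix[6]  = result.m21; matrix[7]  = 0;
        matrix[8]  = result.m02; matrix[9]  = result.m12; matrix[10] = result.m22; matrix[11] = 0;
        matrix[12] = result.m03; matrix[13] = result.m13; matrix[14] = result.m23; matrix[15] = 1;
        return matrix;
    }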
    

    Since the detector operates on a per-frame basis it is our responsibility to maintain marker state between frames. For example, any of the following may occur between two successive frames:

    • a marker is first detected
    • an existing marker’s position changes
    • an existing marker disappears from the stream.

    The state tracking is implemented in ardetector.js. To use it we create an instance, passing the canvas that receives the video frames:

    // create an AR Marker detector using the canvas as the data source
    var detector = ardetector.create( canvas );
    

    And with each tick the canvas image is scanned by the detector, triggering callbacks as needed:

    // Ask the detector to make a detection pass.
    detector.detect( onMarkerCreated, onMarkerUpdated, onMarkerDestroyed );
    

    As can be deduced from the code, our application now detects markers and writes its discoveries to the console.
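
    Internally, this sort of state tracking amounts to diffing each frame’s detections against the previous frame’s. A sketch of the idea (not the repo’s exact code):

    // Markers seen in the previous frame, keyed by ID.
    var tracked = {};

    function detect(onCreated, onUpdated, onDestroyed) {
        var seen = {};
        var markerCount = JSARDetector.detectMarkerLite(JSARRaster, threshold);

        for (var index = 0; index < markerCount; index++) {
            var id = getMarkerNumber(index);
            var marker = { id: id, matrix: getTransformMatrix(index) };
            seen[id] = true;

            if (tracked[id] === undefined) {
                onCreated(marker);   // first appearance
            } else {
                onUpdated(marker);   // position may have changed
            }
            tracked[id] = marker;
        }

        // Anything tracked but not seen this frame has disappeared.
        for (var id in tracked) {
            if (!seen[id]) {
                onDestroyed(tracked[id]);
                delete tracked[id];
            }
        }
    }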

    Reality as a Plane

    git checkout example_3
    live, diff, tag

    An augmented reality display consists of a reality view overlaid with 3D models. Rendering such a display normally consists of two steps. The first is to render the reality view as captured by the camera. In the previous examples we simply copied that image to a canvas. But we want to augment the display with 3D models, and that requires a WebGL canvas. The complication is that a WebGL canvas has no 2D context into which we can copy an image. Instead we render a textured plane into the WebGL scene, using images from the webcam as the texture. ThreeJS can use a canvas as a texture source, so we can feed it the canvas receiving the video frames:

    // Create a texture linked to the canvas.
    var texture = new THREE.Texture(canvas);
    

    ThreeJS caches textures, so each time a video frame is copied to the canvas a flag must be set to indicate that the texture cache should be updated:

    // We need to notify ThreeJS when the texture has changed.
    function update() {
        texture.needsUpdate = true;
    }
    

    This results in an application which, from the perspective of a user, is no different than example_2. But behind the scenes it’s all WebGL; the next step is to augment it!

    Augmenting Reality

    git checkout example_4
    live, diff, tag, movie

    We’re ready to add augmented components to the mix: these will take the form of 3D models aligned to markers captured by the camera. First we must allow the ardetector and ThreeJS to communicate, and then we’ll be able to build some models to augment the fiducial markers.

    Step 1: Transformation Translation

    Programmers familiar with 3D graphics will know that the rendering process requires two matrices: the model matrix (transformation) and a camera matrix (projection). These are supplied by the ardetector we implemented earlier, but they cannot be used as is — the matrix arrays provided by ardetector are incompatible with ThreeJS. For example, the helper method getTransformMatrix() returns a Float32Array, which ThreeJS does not accept. Fortunately the conversion is straightforward and easily done through a prototype extension, also known as monkey patching:

    // Allow Matrix4 to be set using a Float32Array.
    THREE.Matrix4.prototype.setFromArray = function(m) {
        return this.set(
            m[0], m[4], m[8],  m[12],
            m[1], m[5], m[9],  m[13],
            m[2], m[6], m[10], m[14],
            m[3], m[7], m[11], m[15]
        );
    };
    

    This allows us to set the transformation matrix, but in practice we’ll find that updates have no effect. This is because of ThreeJS’s caching. To accommodate such changes we construct a container object and set the matrixAutoUpdate flag to false. Then for each update to the matrix we set matrixWorldNeedsUpdate to true.
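
    A sketch of what such a container might look like (createContainer is the repo’s helper; the exact shape here is an assumption):

    function createContainer() {
        var container = new THREE.Object3D();

        // Tell ThreeJS not to overwrite our matrix from position/rotation/scale.
        container.matrixAutoUpdate = false;

        container.transformFromArray = function(matrix) {
            this.matrix.setFromArray(matrix);
            this.matrixWorldNeedsUpdate = true; // force the cached world matrix to refresh
        };

        return container;
    }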

    Step 2: Cube Marks the Marker

    Now we’ll use our monkey patches and container objects to display colored cubes as augmented markers. First we make a cube mesh, sized to fit over the fiducial marker:

    function createMarkerMesh(color) {
        var geometry = new THREE.CubeGeometry( 100,100,100 );
        var material = new THREE.MeshPhongMaterial( {color:color, side:THREE.DoubleSide } );
    
        var mesh = new THREE.Mesh( geometry, material );
    
        // Negative half the height makes the object appear "on top" of the AR Marker.
        mesh.position.z = -50;
    
        return mesh;
    }
    

    Then we enclose the mesh in the container object:

    function createMarkerObject(params) {
        var modelContainer = createContainer();

        var modelMesh = createMarkerMesh(params.color);
        modelContainer.add( modelMesh );

        function transform(matrix) {
            modelContainer.transformFromArray( matrix );
        }

        // Expose the container, with its transform method, to the caller.
        modelContainer.transform = transform;
        return modelContainer;
    }
    

    Next we generate marker objects, each one corresponding to a marker ID number:

    // Create marker objects associated with the desired marker ID.
    var markerObjects = {
        16: arobject.createMarkerObject({color:0xAA0000}), // Marker #16, red.
        32: arobject.createMarkerObject({color:0x00BB00}), // Marker #32, green.
    };
    

    The ardetector.detect() callbacks apply the transformation matrix to the associated marker. For example, here the onCreate handler adds the transformed model to the arview:

    // This function is called when a marker is initially detected on the stream.
    function onMarkerCreated(marker) {
        var object = markerObjects[marker.id];

        // Set the object's initial transformation matrix.
        object.transform( marker.matrix );

        // Add the object to the scene.
        view.add( object );
    }
    

    Our application is now a functioning example of augmented reality!

    Making Holes

    In An AR Game the markers are more complex than coloured cubes. They are “warpholes”, which appear to lead down into the marker page. The effect requires a bit of trickery, so for the sake of illustration we’ll construct it in three steps.

    Step 1: Open the Cube

    git checkout example_5
    live, diff, tag, movie

    First we remove the top face of the cube to create an open box. This is accomplished by setting the face’s material to be invisible. The open box is positioned behind/underneath the marker page by adjusting the Z coordinate to half of the box height.
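
    A sketch of the idea, assuming the MeshFaceMaterial API of ThreeJS circa 2013 (the face index is an assumption; the diff has the real code):

    function createOpenBox(color) {
        // CubeGeometry assigns one material index per face: +x, -x, +y, -y, +z, -z.
        var materials = [];
        for (var i = 0; i < 6; i++) {
            materials.push(new THREE.MeshPhongMaterial({ color: color, side: THREE.DoubleSide }));
        }
        materials[4].visible = false; // hide the +z face, opening the box

        var geometry = new THREE.CubeGeometry( 100, 100, 100 );
        var mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );

        // Positive half the height sinks the box "behind" the marker page.
        mesh.position.z = 50;

        return mesh;
    }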

    The effect is interesting, but unfinished — and perhaps it is not immediately clear why.

    Step 2: Cover the Cube in Blue

    git checkout example_6
    live, diff, tag, movie

    So what’s missing? We need to hide the part of the box which juts out from ‘behind’ the marker page. We’ll accomplish this by first enclosing the box in a slightly larger box. This box will be called an “occluder”, and in step 3 it will become an invisibility cloak. For now we’ll leave it visible and colour it blue, as a visual aid.

    The occluder objects and the augmented objects are rendered into the same context, but in separate scenes:

    function render() {
        // Render the reality scene
        renderer.render(reality.scene, reality.camera);
    
        // Render the occluder scene
        renderer.render( occluder.scene, occluder.camera);
    
        // Render the augmented components on top of the reality scene.
        renderer.render(virtual.scene, virtual.camera);
    }
    

    This blue jacket doesn’t yet contribute much to the “warphole” illusion.

    Step 3: Cover the Cube In Invisibility

    git checkout example_7
    live, diff, tag, movie

    The illusion requires that the blue jacket be invisible while retaining its occluding ability — it should be an invisible occluder. The trick is to deactivate the colour buffers, thereby rendering only to the depth buffer. The render() method now becomes:

    function render() {
        // Render the reality scene
        renderer.render(reality.scene, reality.camera);
    
        // Deactivate color and alpha buffers, leaving only depth buffer active.
        renderer.context.colorMask(false,false,false,false);
    
        // Render the occluder scene
        renderer.render( occluder.scene, occluder.camera);
    
        // Reactivate color and alpha buffers.
        renderer.context.colorMask(true,true,true,true);
    
        // Render the augmented components on top of the reality scene.
        renderer.render(virtual.scene, virtual.camera);
    }
    

    This results in a much more convincing illusion.

    Selecting Holes

    git checkout example_8
    live, diff, tag

    An AR Game allows the user to select which warphole to open by positioning the marker underneath a targeting reticule. This is a core aspect of the game, technically known as object picking. ThreeJS makes this fairly simple. The key classes are THREE.Projector() and THREE.Raycaster(), but there is a caveat: despite its name, Raycaster.intersectObject() takes a THREE.Mesh as its parameter. Therefore we add a mesh named “hitbox” to createMarkerObject(). In our case it is an invisible geometric plane. Note that we are not explicitly setting a position for this mesh, leaving it at the default (0,0,0) relative to the markerContainer object. This places it at the mouth of the warphole object, in the plane of the marker page, which is where the face we removed would be if we hadn’t removed it.
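
    A sketch of how such a reticle test might work, assuming the THREE.Projector API of ThreeJS circa 2013:

    var projector = new THREE.Projector();

    function pickCenter(camera, hitboxes) {
        // Normalized device coordinates of the screen center.
        var center = new THREE.Vector3(0, 0, 0.5);
        var raycaster = projector.pickingRay(center, camera);

        // Intersections come back sorted by distance; the closest hitbox wins.
        var hits = raycaster.intersectObjects(hitboxes);
        return hits.length > 0 ? hits[0].object : undefined;
    }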

    Now that we have a testable hitbox, we create a class called Reticle to handle intersection detection and state tracking. Reticle notifications are incorporated into the arview by including a callback when we add an object with arview.add(). This callback is invoked whenever the object’s selection state changes, for example:

    view.add( object, function(isSelected) {
        onMarkerSelectionChanged(marker.id, isSelected);
    });
    

    The player is now able to select augmented markers by positioning them at the center of the screen.

    Refactoring

    git checkout example_9
    live, diff, tag

    Our augmented reality functionality is essentially complete. We are able to detect markers in webcam frames and align 3D objects with them. We can also detect when a marker has been selected. We’re ready to move on to the second key component of An AR Game: the flat 2D space from which the player transports play pieces. This will require a fair amount of code, and some preliminary refactoring would help keep everything neat. Notice that a lot of AR functionality is currently in the main application.js file. Let’s excise it and place it into a dedicated module named realspace.js, leaving our application.js file much cleaner.

    Panel of Flatspace

    git checkout example_10
    live, diff, tag

    In An AR Game the player’s task is to transfer play pieces from a 2D plane to a 3D space. The realspace module implemented earlier serves as the 3D space. Our 2D plane will be managed by a module named flatspace.js, which begins as a skeletal pattern similar to those of application.js and realspace.js.

    The Physics

    git checkout example_11
    live, diff, tag

    The physics of the realspace view comes free with nature. But the flatspace pane uses simulated 2D physics, and that requires physics middleware. We’ll use Box2D.js, a JavaScript port of the famous Box2D engine, generated from the original C++ via LLVM and Emscripten.

    Box2D is a rather complex piece of software, but it is thoroughly documented and described elsewhere. This article will therefore, for the most part, refrain from repeating what is already well covered in other places. We will instead describe the common issues encountered when using Box2D, introduce a solution in the form of a module, and describe its integration into flatspace.js.

    First we build a wrapper for the raw Box2D.js world engine and name it boxworld.js. This is then integrated into flatspace.
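
    For illustration, a minimal sketch of what such a wrapper might look like, assuming the Box2D.js Emscripten bindings (the real boxworld.js does more, including the registry integration described later):

    define([], function() {
        function create() {
            var gravity = new Box2D.b2Vec2(0, -10);
            var world = new Box2D.b2World(gravity);

            // Advance the simulation by a fixed timestep. The iteration
            // counts are the values suggested by the Box2D manual.
            function update() {
                world.Step(1/60, 8, 3);
            }

            return {
                update: update,
                world: world,
            };
        }

        return { create: create };
    });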

    This does not yield any outwardly visible effects, but in reality we are now simulating an empty space.

    The Visualization

    It would be helpful to be able to see what’s happening. Box2D thoughtfully provides debug rendering, and Box2D.js facilitates it through something like virtual functions. The functions will draw to a canvas context, so we’ll need to create a canvas and then supply the VTable with draw methods.

    Step 1: Make A Metric Canvas

    git checkout example_12
    live, diff, tag

    The canvas will map the Box2D world. A canvas uses pixels as its unit of measurement, whereas Box2D describes its space in meters, so we need methods that convert between the two using a pixel-to-meter ratio. We must also align the coordinate origins. These methods are associated with a canvas and wrapped into the boxview.js module, making it easy to incorporate into flatspace.
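
    The conversions themselves might look something like this (a sketch; boxview.js holds the real versions):

    var PIXELS_PER_METER = 13;

    function pixelsToMeters(p) {
        return p / PIXELS_PER_METER;
    }

    function metersToPixels(m) {
        return m * PIXELS_PER_METER;
    }

    // Align the origins: Box2D's origin sits at the canvas center with y up,
    // while the canvas origin is the top-left corner with y down.
    function worldToCanvas(point, canvas) {
        return {
            x: canvas.width / 2 + metersToPixels(point.x),
            y: canvas.height / 2 - metersToPixels(point.y),
        };
    }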

    It is instantiated during initialization, its canvas then added to the DOM:

    view = boxview.create({
        width:640,
        height:480,
        pixelsPerMeter:13,
    });
    
    document.getElementById("flatspace").appendChild( view.canvas );
    

    There are now two canvases on the page: the flatspace and the realspace. A bit of CSS in application.css puts them side by side:

    #realspace {
        overflow:hidden;
    }
    
    #flatspace {
        float:left;
    }
    

    Step 2: Assemble A Drafting Kit

    git checkout example_13
    live, diff, tag

    As mentioned previously, Box2D.js provides hooks for drawing a debug sketch of the world. They are accessed via a VTable through the customizeVTable() method, and subsequently invoked by b2World.DrawDebugData(). We’ll take the draw methods from kripken’s description, and wrap them in a module called boxdebugdraw.js.

    Now we can draw, but have nothing to draw. We need to jump through a few hoops first!

    The Bureaucracy

    A Box2D world is populated by entities called Bodies. Adding a body to the boxworld subjects it to the laws of physics, but it must also comply with the rules of the game. For this we create a set of governing structures and methods to manage the population. Their application simplifies body creation, collision detection, and body destruction. Once these structures are in place we can begin to implement the game logic, building the system to be played.

    Creation

    git checkout example_14
    live, diff, tag

    Let’s liven up the simulation with some creation. Box2D Body construction is somewhat verbose, involving fixtures and shapes and physical parameters. So we’ll stow our body creation methods in a module named boxbody.js. To create a body we pass a boxbody method to boxworld.add(). For example:

    function populate() {
        var ball = world.add(
            boxbody.ball,
            {
                x:0,
                y:8,
                radius:10
            }
        );
    }
    

    This yields an undecorated ball in midair experiencing the influence of gravity. Under contemplation it may bring to mind a particular whale.
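
    For the curious, a boxbody generator might look roughly like this, assuming the Box2D.js Emscripten bindings (the repo’s boxbody.js may differ in detail):

    function ball(world, params) {
        var bodyDef = new Box2D.b2BodyDef();
        bodyDef.set_type( Box2D.b2_dynamicBody );
        bodyDef.set_position( new Box2D.b2Vec2(params.x, params.y) );

        var shape = new Box2D.b2CircleShape();
        shape.set_m_radius( params.radius );

        var body = world.CreateBody(bodyDef);
        body.CreateFixture(shape, 1.0); // the second argument is the density
        return body;
    }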

    Registration

    git checkout example_15
    live, diff, tag

    We must be able to keep track of the bodies populating flatspace. Box2D provides access to a body list, but it’s a bit too low-level for our purposes. Instead we’ll use a field of b2Body named userData. To this we assign a unique ID number, subsequently used as an index into a registry of our own design. The registry is implemented in boxregistry.js, and is a key aspect of the flatspace implementation. It enables the association of bodies with decorative entities (such as sprites), simplifies collision callbacks, and facilitates the removal of bodies from the simulation. The implementation details won’t be described here, but interested readers can refer to the repo to see how the registry is instantiated in boxworld.js, and how the add() method returns wrapped-and-registered bodies.
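
    For illustration, the core of such a registry might look like this (a sketch with assumed method names):

    function createRegistry() {
        var nextId = 0;
        var objects = {};

        function register(body) {
            var id = nextId++;
            body.SetUserData(id); // store our key on the Box2D body

            var object = {
                body: body,
                onContact: undefined,
                isMarkedForDeletion: false,
                is: function(other) { return other.body === body; },
            };
            objects[id] = object;
            return object;
        }

        // Recover the registered wrapper from a raw Box2D body.
        function lookup(body) {
            return objects[body.GetUserData()];
        }

        return { register: register, lookup: lookup };
    }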

    Collision

    git checkout example_16
    live, diff, tag

    Box2D collision detection is complicated: the native callback simply gives two fixtures, raw and unordered, and every collision in the world is reported, making for a lot of conditional checks. The boxregistry.js module helps manage this data overload. Through it we assign an onContact callback to registered objects. When a Box2D collision handler is triggered we query the registry for the associated objects and check for the presence of a callback. If the object has a defined callback then we know its collision activity is of interest. To use this functionality in flatspace.js, we simply assign a collision callback to a registered object:

    function populate() {
        var ground = world.add(
            boxbody.edge,
            {
                x:0,
                y:-15,
                width:20,
                height:0,
            }
        );
    
        var ball = world.add(
            boxbody.ball,
            {
                x:0,
                y:8,
                radius:10
            }
        );
    
        ball.onContact = function(object) {
            console.log("The ball has contacted:", object);
        };
    }
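
    Under the hood, the plumbing between the Box2D contact listener and the registry might look roughly like this, assuming the Box2D.js Emscripten bindings (registry.lookup is an assumed name):

    var listener = new Box2D.JSContactListener();

    listener.BeginContact = function(contactPtr) {
        var contact = Box2D.wrapPointer(contactPtr, Box2D.b2Contact);
        var objectA = registry.lookup( contact.GetFixtureA().GetBody() );
        var objectB = registry.lookup( contact.GetFixtureB().GetBody() );

        // Only bother the objects that declared an interest in collisions.
        if (objectA && objectA.onContact) { objectA.onContact(objectB); }
        if (objectB && objectB.onContact) { objectB.onContact(objectA); }
    };

    // The binding requires all four callbacks to be present, even if unused.
    listener.EndContact = function() {};
    listener.PreSolve = function() {};
    listener.PostSolve = function() {};

    world.SetContactListener(listener);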
    

    Deletion

    git checkout example_17
    live, diff, tag

    Removing bodies is complicated by the fact that Box2D does not allow calls to b2World.DestroyBody() from within b2World.Step(). This is significant because you will usually want to delete a body in response to a collision, and collision callbacks occur during a simulation step: a conundrum! One solution is to queue bodies for deletion, then process the queue outside of the simulation step. The boxregistry addresses the problem by furnishing a flag, isMarkedForDeletion, for each object. After each simulation step the collection of registered objects is iterated and listeners are notified of any deletion requests, so the deletion callback can cleanly destroy the bodies. Perceptive readers may notice that we are now also checking the isMarkedForDeletion flag before invoking collision callbacks.
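
    For illustration, deferred deletion might look roughly like this inside the world’s update method (a sketch; getMarkedObjects and remove are assumed names):

    function update() {
        world.Step(1/60, 8, 3);

        // Outside of the step it is safe to destroy bodies.
        registry.getMarkedObjects().forEach(function(object) {
            world.DestroyBody(object.body);
            registry.remove(object);
        });
    }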

    This happens transparently as far as flatspace.js is concerned, so all we need to do is set the deletion flag for a registered object:

    ball.onContact = function(object) {
        console.log("The ball has contacted:", object);
        ball.isMarkedForDeletion = true;
    };
    

    Now the body is deleted on contact with the ground.

    Discerning

    git checkout example_18
    live, diff, tag

    When a collision is detected, An AR Game needs to know what the object has collided with. To this end we add an is() method to registry objects, used for comparing them. We can now add a conditional deletion to our game:

    ball.onContact = function(object) {
        console.log("The ball has contacted:", object);
        if( object.is( ground ) ) {
            ball.isMarkedForDeletion = true;
        }
    };
    

    A 2D Warphole

    git checkout example_19
    live, diff, tag

    We’ve already discussed the realspace warpholes, and now we’ll implement their flatspace counterparts. The flatspace warphole is simply a body consisting of a Box2D sensor. The ball should pass over a closed warphole, but through an open one. Now imagine an edge case where a ball is resting over a closed warphole which is then opened. The problem is that Box2D’s onBeginContact handler behaves true to its name: contact was detected while the warphole was closed, and no new BeginContact event fires when the warphole opens, so the ball is not warped and we’re left with a bug. Our fix is to use a cluster of sensors. With a cluster there is a series of BeginContact events as the ball moves across the warphole, so we can be confident that opening a warphole while the ball is over it will result in a warp. The sensor cluster generator is named hole and is implemented in boxbody.js.
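
    For illustration, such a generator might look roughly like this (the geometry and names are assumptions; the real one lives in boxbody.js):

    function hole(world, params) {
        var bodyDef = new Box2D.b2BodyDef();
        bodyDef.set_position( new Box2D.b2Vec2(params.x, params.y) );
        var body = world.CreateBody(bodyDef); // static by default

        // A row of small, overlapping sensor circles spanning the warphole.
        var count = 5;
        var spacing = params.radius * 2 / count;
        for (var i = 0; i < count; i++) {
            var shape = new Box2D.b2CircleShape();
            shape.set_m_radius( spacing );
            shape.set_m_p( new Box2D.b2Vec2((i - (count - 1) / 2) * spacing, 0) );

            var fixtureDef = new Box2D.b2FixtureDef();
            fixtureDef.set_shape( shape );
            fixtureDef.set_isSensor( true ); // sensors report contact without colliding
            body.CreateFixture( fixtureDef );
        }
        return body;
    }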

    The Conduit

    At this point we’ve made JSARToolkit and Box2D.js into usable modules, and used them to create warpholes in realspace and flatspace. The objective of An AR Game is to transport pieces from flatspace to realspace, so the warpholes must communicate. Our approach is as follows:

    1. git checkout example_20
      live, diff, tag

      Notify the application when a realspace warphole’s state changes.

    2. git checkout example_21
      live, diff, tag

      Set flatspace warphole states according to realspace warphole states.

    3. git checkout example_22
      live, diff, tag

      Notify the application when a ball transits an open flatspace warphole.

    4. git checkout example_23
      live, diff, tag

      Add a ball to realspace when the application receives a notification of a transit.

    Conclusion

    This article has shown the technical underpinnings of An AR Game. We have constructed two panes of differing realities and connected them with warpholes. A player may now entertain themselves by transporting a ball from flatspace to realspace. Technically this is interesting, but generally it is not fun!

    There is still much to be done before this application becomes a game, but those tasks are outside the scope of this article. Among them:

    • Add sprites and animations.
    • Introduce multiple balls and warpholes.
    • Provide a means of interactively designing levels.

    Thanks for reading! We hope this has inspired you to delve into this topic more!

  3. Announcing the winners of the June 2013 Dev Derby!

    This June, some of the most creative web developers out there pushed the limits of WebGL in our June Dev Derby contest. After sorting through the entries, our expert judges–James Padolsey and Maire Reavy–decided on three winners and three runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with WebGL today.

    Dev Derby

    The Results

    Winners

    Runners-up

    Congratulations to these winners and to everyone who competed! The Web is a better, more expansive place because of their efforts.

    Further reading

  4. Interview with Micah Elizabeth Scott, winner of the Web Workers Dev Derby

    Micah Elizabeth Scott won the Web Workers Dev Derby with Zen photon garden, her impressive (and fun) interactive web raytracer. Recently, I had the chance to learn more about Micah: her work, her ambitions, and her thoughts on the future of web development.

    The interview

    How did you become interested in web development?

    I’ve been into building things for as long as I can remember. I love
    making things, and I’ll often learn new tools just for the sake of
    giving myself a different set of challenges and constraints to work
    with. My first big web project was an early collaboration tool for
    open source development, dubbed “CIA” because it spies on your source
    code commits.

    Can you tell us a little about how Zen photon garden works?

    Zen photon garden is a type of raytracer, which is to say it simulates
    the path that individual rays of light take as they bounce around in a
    scene. It’s a two-dimensional raytracer though, which opens up kind of
    a neat new possibility for visualizing how light works.

    A traditional three-dimensional raytracer traces rays “backwards”,
    casting rays out from each pixel on a virtual camera and bouncing them
    off of the objects in your scene until they finally reach a source of
    light. Each pixel of the scene comes about by counting, on average,
    how many photons would reach that portion of the virtual camera.

    In Zen photon garden, light rays emanate from a lamp and move along
    the image plane in two dimensions. Instead of visualizing the single
    point where a ray reaches the camera, I visualize the entire ray as it
    bounces through the scene. Each ray turns into a sequence of line
    segments, beginning at the light source and bouncing off of any number
    of objects before it’s eventually absorbed. This process repeats
    hundreds of thousands of times, and the image you see is a statistical
    average of these many light rays.

    The inner loop of Zen photon garden is quite specialized. For each
    light ray, I need to trace its path by intersecting it with the
    objects in the scene, and each segment of this path is visualized by
    drawing an anti-aliased line into a high-dynamic-range 32-bit
    accumulation buffer. After tracing a bunch of these rays, the
    high-dynamic-range buffer is mapped to an 8-bit-per-channel image
    according to the current camera exposure setting, and that image is
    drawn to a Canvas.

    These anti-aliased lines need to be fast and very high quality. Any
    errors in the uniformity of the line’s brightness, for example, will
    affect the smoothness of the final image. To get the combination of
    speed and accuracy I need, this line drawing algorithm is implemented
    in pure Javascript by a pool of Web Worker threads. This pool has to
    be managed carefully so that the app can draw with high throughput
    when you leave it alone, but it can still respond with low latency
    when you’re interactively adding objects to the scene.

    What was your biggest challenge in developing Zen photon garden?

    The hardest part of implementing Zen photon garden was making it run
    as fast as possible on all of the latest web browsers. Thankfully
    these days it’s relatively easy to write an app that runs on all
    browsers, but making it run optimally is tricky when your application
    is CPU-bound. Small changes to the inner loops would cause big
    differences in how well each Javascript engine’s optimizer performs.
    This required a lot of trial and error, and a few trips back to the
    drawing board.

    What makes the web an exciting platform for you?

    To me the killer feature of the web is its universality. Modern web
    browsers are nearly ubiquitous, and it’s the fastest way to take a
    weird new experimental concept and get it into people’s hands right
    now. As someone who loves exploring the intersection of art and
    technology, this means it’s finally possible to send your friends a
    link to your latest art project without having to worry about what
    operating system they’re using or whether they have the right library
    dependencies installed.

    What new web technologies are you most excited about?

    WebGL is really exciting to me, but as someone who used to write
    graphics drivers and worry about security for a living it also kind of
    terrifies me!

    The web technology I’m most excited about would have to be asm.js
    actually. I’ve always enjoyed getting my hands dirty with low-level
    graphics code, and even in today’s world of GPU acceleration and
    high-level 2D canvas APIs, I still find plenty of reasons to push
    pixels. Having a way to get near-native performance in a very reliable
    way across all major browsers would open up some great new creative
    possibilities, and I’m excited to see where that leads.

    If you could change one thing about the web, what would it be?

    It’d be great if we could find a way to ease the tension between those
    who see the web as a content platform and those who see it as a
    software operating system. Right now it feels like HTML is too
    unwieldy to be a document markup language, and it’s just barely
    starting to get the services you’d expect from a modern operating
    environment.

    Do you have any advice for other ambitious web developers?

    Plan to prototype a lot of things, keep the ideas that stick, and
    throw the rest away. Respect the web as a platform, and try to be
    playful about exploring its margins. Understand but don’t begrudge the
    ways in which web programming is different from other kinds of
    programming.

    Further reading

  5. Interview with Giovanny Granada, winner of the Geolocation Dev Derby

    Giovanny Granada won the most recent Geolocation Dev Derby with GoGeoTweet, his wonderful web-based visualization of Twitter activity happening nearby. Recently, I had the chance to learn more about Giovanny: his work, his ambitions, and his thoughts on the future of web development.

    The interview

    How did you become interested in web development?

    I became interested in development because my motivation was to innovate: to create new things and to experiment with different technologies toward the goal of creating new and complete tools.

    Can you tell us a little about how GoGeoTweet works?

    The application works by using the Geolocation API and the Twitter API to show Tweets published within 1km of you–a very useful tool if you’re at an event or a special place and would like to know what Twitter users are saying.

    What was your biggest challenge in developing GoGeoTweet?

    The biggest challenge was using the Geolocation API and the Twitter API to show only Tweets within a 1km radius.

    What makes the web an exciting platform for you?

    The ability to create things using technologies of all types and obtain great results. Building free alternatives that are full of new experiences and that share knowledge is something you can only do on the web.

    What new web technologies are you most excited about?

    Right now I’m excited to explore and learn about WebGL, HTML5 and all of the Web APIs that are making the web better.

    If you could change one thing about the web, what would it be?

    I would change web standards to limit the way applications can be created (for example, prohibiting Flash), to discourage applications that do not use modern technologies. I am totally sure that the web would be better if that changed.

    Do you have any advice for other ambitious web developers?

    If you can imagine you can create! Let’s do it! There are no limits!

    Further reading

  6. Interview with Sebastian Dorn, winner of the Drag and Drop Dev Derby

    Sebastian Dorn won the Drag and Drop Dev Derby with Pete’s Adventure, his wonderful web-based interactive story. Recently, I had the chance to learn more about Seba: his work, his ambitions, and his thoughts on the future of web development.

    The interview

    How did you become interested in web development?

    I think it was around the time I was in middle school. My father read an
    IT magazine and since I was at least a little bit interested, I flicked
    through it as well. There was a series in it about building web sites
    and I thought “I want to try that, building my own site”.

    So I built my first frames-using, table-layouted, GIF-plastered web
    sites–every atrocity you can imagine and some more–using HTML and
    CSS, but without knowing that something like CSS classes existed. Some
    time later I found a free host and put my “Hello, this is me” site
    online. Some years later I became interested in blogging, so I started
    learning PHP and MySQL to write my own CMS.

    Can you tell us a little about how Pete’s Adventure works?

    My goal was to show some other aspect of Drag&Drop in each level:
    Reading meta data like the file size from a dropped file, displaying a
    dropped image or dragging an HTML element from inside the page around.
    There isn’t really anything special in the code. Each level has its own
    JS file with functions to prepare the stage by adding HTML and event
    listeners.

    What was your biggest challenge in developing Pete’s Adventure?

    Not really anything that had to do with coding. At first, I wanted to
    use better drawings. But some horribly misshapen Petes later I gave up
    on that and went ahead with the pixelated look you can see now.

    Then there is the sound and music. I probably sat two hours at the piano
    keyboard, trying to come up with melodies which could be easily looped.
    This was the first time since the recorder lessons in middle school that
    I tried to compose.

    Ah, well, I got a little… agitated while trying to get the drop part of
    Drag&Drop to work for the level where you drag the slimy note to Pete.
    It only works in Firefox when you give the dragged element some transfer
    data, for example an empty string.

    What makes the web an exciting platform for you?

    How easy it is to create and share. Even without a server backend you
    can build exciting demos in HTML/CSS/JS and then just upload it
    somewhere, toss a friend the link and they can see it. To view it, other
    people only need an up-to-date browser–no plugins, no worrying about
    OS compatibility.

    What new web technologies are you most excited about?

    Basically everything that helps making plugins obsolete.

    I wonder if there will be more 3D in-browser rendering with WebGL in the
    future. Animated, interactive films? Games? CAD software?

    Firefox OS and building apps only with JavaScript sounds interesting,
    too. I’m not really that much into mobile development at the moment, but
    I’m interested in how that will develop. Will it become a really good
    alternative to iOS/Android? Or will it end as obscure toy for enthusiasts?

    If you could change one thing about the web, what would it be?

    Making the Internet immune to large scale blocking and censoring. No
    government should be able to cut off the communication channels of its
    people.

    On a less political note: I would be very pleased to see the same audio,
    video and image formats supported in every browser. Finding out that
    WebKit doesn’t support APNG was a surprise for me.

    Do you have any advice for other ambitious web developers?

    Learning a new language or feature thereof works better if you put some
    motivation behind it. Maybe you can build a useful browser extension
    with it, or some fascinating demo to show off. Make it fun!

    For other great advice I’d like to quote Jake from Adventure Time:
    “Sucking at something is the first step to becoming sorta good at
    something.”

    Further reading

  7. Announcing the winners of the May 2013 Dev Derby

    This May, some of the most creative web developers out there pushed the limits of getUserMedia in our May Dev Derby contest. After sorting through the entries, our four expert judges–James Padolsey, Janet Swisher, Maire Reavy, and Randell Jesup–decided on three winners and two runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with getUserMedia today.

    Dev Derby

    The results

    Winners

    Runners-up

    To call these entries mind-blowing would be an understatement. I would say they left me speechless, but quite the opposite was true–I found myself sharing them with everyone I could. Naming just a few winners was especially difficult this month, so please join me in congratulating all of our competitors for making the web so much more exciting than it was just a couple of months ago.

    Want to get a head start on an upcoming Derby? We are accepting demos related to the File API in our ongoing July contest. Head over to the Dev Derby to get started.

    Further reading

  8. Interview with Parashuram Narasimhan, winner of the Offline Dev Derby

    Parashuram Narasimhan won the Offline Dev Derby with The conference, his web utility for beating unreliable conference connectivity. Recently, I had the chance to learn more about Parashuram: his work, his ambitions, and his thoughts on the future of web development.

    The interview

    How did you become interested in web development?

    Like most computer science majors, I started with systems programming. However, during the early days of Firefox, I used to play around with Venkman (before Firebug) and other Firefox developer tools to take a peek at how web pages were written. The undocumented and nascent web platform got me interested, and I found myself hacking around its limitations. That is how I started writing code for the web.

    Can you tell us a little about how The conference works?

    The conference is a set of static HTML pages that sync data with a remote CORS-enabled CouchDB server. The sync functionality is taken care of by PouchDB, which implements the Couch synchronization protocol.

    With no server-side code, all functionality and interactions are handled in the browser using Backbone.js. The static pages are styled using Twitter Bootstrap and are responsive for mobile too.

    What was your biggest challenge in developing The conference?

    IndexedDB is not supported by all browsers today. Given the nature of the application, it was important for it to run on the mobile devices that are easiest to use between sessions at a conference. Getting WiFi right at conferences is also hard, so the application had to work well with flaky connectivity. I had to use the IndexedDB polyfill to ensure that it runs across all browsers, even on mobile platforms.

    What makes the web an exciting platform for you?

    The openness of the web is the most exciting part. I just joined Microsoft Open Technologies and I am able to see how the open nature of the web is helping me with a lot of interesting projects at large scale. That, combined with the current limitations, is a great breeding ground for hackers and tinkerers to show amazing innovation. I like the idea of writing once and seeing it work everywhere. I am glad to see the web flowing out of the browser into systems like B2G and Windows 8.

    What new web technologies are you most excited about?

    Offline storage has always been my favourite and I would love to see it gain more traction. I am also impressed by the work done on pointer events and the efficiency at which the W3C working group is finalizing the standards. I also follow WebRTC and CSS3.

    If you could change one thing about the web, what would it be?

    The web seemed to have frozen before the HTML5 revolution. This was the time when native applications seemed to become popular. I wish the web platform had moved as fast, so that app developers had considered it an alternative to writing applications for specific platforms. It looks like it’s getting there, though.

    Do you have any advice for other ambitious web developers?

    In a project, the best code is the code that is not written. With so many web developers working on the web, I usually don’t have to reinvent the wheel and can always reuse someone else’s well-tested code. It is good that I am embarrassed about the code I wrote in the past–it just tells me that I am maturing as a programmer :P

    Further reading

  9. Interview with Koen Kivits, winner of the Multi-touch Dev Derby

    Koen Kivits won the Multi-touch Dev Derby with TouchCycle, his wonderful TRON-inspired mobile game. Recently, I had the chance to learn more about Koen: his work, his ambitions, and his thoughts on the future of web development.

    The interview

    How did you become interested in web development?

    I’ve been creating websites since high school, but I didn’t really get serious about web development until I started working two and a half years ago. I wasn’t specifically hired for web development, but I kind of ended up there. I came in just as our company was launching a major new web based product, which has grown immensely since then. The challenges we faced during this ongoing growth and how we were able to solve them really made me view the web as a serious platform.

    Can you tell us a little about how TouchCycle works?

    The game itself basically consists of an arena with 2 or more players on it. Each player has a position and a target to which it is moving. As each player moves, it leaves a trail in the arena.

    Each segment in a player’s trail is defined as a simple linear equation, which makes it really easy to calculate intersections between segments. Collision detection is then done by checking whether a player’s upcoming trail segment intersects with an already existing trail segment.

    The arena is drawn on a <canvas> element that is sized to fit the screen when the game starts. The <canvas> has 3 touch event handlers registered to it:

    • touchstart: register nearest unregistered player (if any) to the new touch and set its target
    • touchmove: update the target of the player that is registered to the moving touch
    • touchend: unregister the player

    Any touch events on the document itself are cancelled while the game is running in order to prevent any scrolling and zooming.

    Everything around the main game (the menus, the notifications, etc.) is just plain HTML. Menu navigation is done with HTML anchors and a hashchange event handler that hides or shows content relevant to the current URL. Note that this means you can use your browser’s back and forward buttons to navigate within the game.

    What was your biggest challenge in developing TouchCycle?

    Multitouch interaction was completely new to me and I had done very little work with the <canvas> element before, so it took me a while to read up on everything and get to work. I also had to spend some time tweaking the collision detection and the way a player follows your touch in order to not have players crash into their own trails easily.

    What makes the web an exciting platform for you?

    The openness of the platform, in several ways. Anyone with an internet connection can access the web using any device running a browser. Anyone can publish to the web–there’s no license required, no approval process and no being locked in to a specific vendor. Anyone can open up their browser’s developer tools to see how an app works and even tinker with it. Anyone can even contribute to the standards that make up the very platform itself!

    What new web technologies are you most excited about?

    I’m probably most excited about WebRTC. It opens up a lot of possibilities for web developers, especially when mobile support increases. For example, just think of how combining WebRTC, the Geolocation API and Device Orientation API would make for an awesome augmented reality app. The possibilities are limitless.

    If you could change one thing about the web, what would it be?

    I really like how the web is a collaborative effort. A problem of that collaboration, however, is conflicting interests leading to delayed or crippled new standards. A good example is the HTML <video> element, which I think is still not very usable today despite being such a basic feature.

    Browser vendors should be allowed some flexibility in the formats they support, but I think it would be a good thing if there was a minimum requirement of supporting at least 1 common open format.

    Do you have any advice for other ambitious web developers?

    As a developer I think I’ve learnt most from reading other people’s code. If you like a library or web app, it really pays to spend some time analysing its source code. It can be really inspiring to look at how other people solve problems; it can give you pointers on how to structure your own code and it can teach you about technologies you didn’t even know about.

    Also, don’t be afraid to read the specification of a standard every now and then to really learn about the technologies you’re using.

    Further reading

  10. Announcing an administrative change to the Dev Derby

    Today we would like to announce an administrative change to the Dev Derby, our monthly web development contest.

    Dev Derby

    The day-to-day operations of the Derby have historically been overseen by just one Mozilla staff member. This worked for a while, but the scope of the project has made the approach less and less realistic over time–keeping the lights on alone can require more time than any one person has to offer. Meanwhile, the wider Mozilla community has been doing an incredible job extending the contest in ways none could have imagined. They have started a blossoming online community of Dev Derby participants, have run several Derby-themed workshops in Toronto (complete with expert speakers, passionate attendees, and lots of pizza), and have even taken the first steps toward running these workshops around the world.

    Giving these wonderful volunteers ownership of the project just makes sense. They can do more than any one Mozilla staff member, and their creativity will undoubtedly lead to many exciting new contest improvements. As a result, we have decided to hand operations of the Dev Derby off to Kensie Connor–who has been leading many of these efforts–and others from the best community out there.

    The Dev Derby will take a short break from August to October so that we can prepare for this change. The July contest will run as usual, and the winners of previous contests will be announced and rewarded just as they always have been, but a new contest will not begin until November. The November contest may bring some changes to the contest format, but the mission of providing a platform that helps web developers learn, share, and push the web forward will remain the same. Of course, we welcome your feedback in the comments section as we start to think about new opportunities.

    We also welcome you to leave a comment if you have any questions about this transition. We hope that this initial announcement will be the start of a longer discussion, one that will foster a bright future for this important project.