Multi-touch Articles

  1. interact.js for drag and drop, resizing and multi-touch gestures

    interact.js is a JavaScript module for drag and drop, resizing and multi-touch gestures, with inertia and snapping, for modern browsers (and also IE8+).

    Background

    I started it as part of my GSoC 2012 project for Biographer’s network visualization tool. The tool was a web app which rendered to an SVG canvas and used jQuery UI for drag and drop, selection and resizing. Because jQuery UI has little support for SVG, heavy workarounds had to be used. I needed to make the web app more usable on smartphones and tablets and the largest chunk of this work was to replace jQuery UI with interact.js, which:

    • is lightweight,
    • works well with SVG,
    • handles multi-touch input,
    • leaves the task of rendering/styling elements to the application and
    • allows the application to supply object dimensions instead of parsing element styles or getting DOMRects.

    What interact.js tries to do is present input data consistently across different browsers and devices and provide convenient ways to pretend that the user did something that they didn’t really do (snapping, inertia, etc.).

    Certain sequences of user input can lead to InteractEvents being fired. If you add a listener for an event type, the listener function is given an InteractEvent object which provides pointer coordinates and speed and, in gesture events, scale, distance, angle, etc. The only time interact.js modifies the DOM is to style the cursor; moving an element while a drag happens has to be done from your own event listeners. This way you’re in control of everything that happens.
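
    For example, a dragmove listener that translates the dragged element by the reported movement could look like this (a minimal sketch of my own, not one of this post’s demos; the '.draggable' selector and the data-x/data-y attributes are just illustrative):

    interact('.draggable')
      .draggable(true)
      .on('dragmove', function (event) {
        var target = event.target,
            // keep the running position in data attributes
            x = (parseFloat(target.getAttribute('data-x')) || 0) + event.dx,
            y = (parseFloat(target.getAttribute('data-y')) || 0) + event.dy;

        // interact.js never moves the element for you
        target.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
        target.setAttribute('data-x', x);
        target.setAttribute('data-y', y);
      });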

    Slider demo

    Here’s an example of how you could make a slider with interact.js. You can view and edit the complete HTML, CSS and JS of all the demos in this post on CodePen.

    See the Pen interact.js simple slider by Taye A (@taye) on CodePen.

    JavaScript rundown

    interact('.slider')                   // target the matches of that selector
      .origin('self')                     // (0, 0) will be the element's top-left
      .restrict({drag: 'self'})           // keep the drag within the element
      .inertia(true)                      // start inertial movement if thrown
      .draggable({                        // make the element fire drag events
        max: Infinity                     // allow drags on multiple elements
      })
      .on('dragmove', function (event) {  // call this function on every move
        var sliderWidth = interact.getElementRect(event.target.parentNode).width,
            value = event.pageX / sliderWidth;
     
        event.target.style.paddingLeft = (value * 100) + '%';
        event.target.setAttribute('data-value', value.toFixed(2));
      });
     
    interact.maxInteractions(Infinity);   // Allow multiple interactions
    • interact('.slider') [docs] creates an Interactable object which targets elements that match the '.slider' CSS selector. An HTML or SVG element object could also have been used as the target but using a selector lets you use the same settings for multiple elements.
    • .origin('self') [docs] tells interact.js to modify the reported coordinates so that an event at the top-left corner of the target element would be (0,0).
    • .restrict({drag: 'self'}) [docs] keeps the coordinates within the area of the target element.
    • .inertia(true) [docs] lets the user “throw” the target so that it keeps moving after the pointer is released.
    • Calling .draggable({max: Infinity}) [docs] on the object:
      • allows drag listeners to be called when the user drags from an element that matches the target and
      • allows multiple target elements to be dragged simultaneously
    • .on('dragmove', function (event) {...}) [docs] adds a listener for the dragmove event. Whenever a dragmove event occurs, all listeners for that event type that were added to the target Interactable are called. The listener function here calculates a value from 0 to 1 depending on which point along the width of the slider the drag happened. This value is used to position the handle.
    • interact.maxInteractions(Infinity) [docs] is needed to enable multiple interactions on any target. The default value is 1 for backwards compatibility.

    interact.js resolves a lot of differences in browser implementations: MouseEvents, TouchEvents and PointerEvents produce identical drag event objects, so this slider works on iOS, Android, Firefox OS and Windows RT as well as on desktop browsers as far back as IE8.

    Rainbow pixel canvas demo

    interact.js is useful for more than moving elements around a page. Here I use it for drawing onto a canvas element.

    See the Pen interact.js pixel rainbow canvas by Taye A (@taye) on CodePen.

    JavaScript rundown

    var pixelSize = 16;
     
    interact('.rainbow-pixel-canvas')
      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })
      .origin('self')
      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })
      // draw colored squares on move
      .on('dragmove', function (event) {
        var context = event.target.getContext('2d'),
            // calculate the angle of the drag direction
            dragAngle = 180 * Math.atan2(event.dx, event.dy) / Math.PI;
     
        // set color based on drag angle and speed
        context.fillStyle = 'hsl(' + dragAngle + ', 86%, '
                            + (30 + Math.min(event.speed / 1000, 1) * 50) + '%)';
     
        // draw squares
        context.fillRect(event.pageX - pixelSize / 2, event.pageY - pixelSize / 2,
                         pixelSize, pixelSize);
      })
      // clear the canvas on doubletap
      .on('doubletap', function (event) {
        var context = event.target.getContext('2d');
     
        context.clearRect(0, 0, context.canvas.width, context.canvas.height);
      });
     
    function resizeCanvases () {
      [].forEach.call(document.querySelectorAll('.rainbow-pixel-canvas'), function (canvas) {
        canvas.width = document.body.clientWidth;
        canvas.height = window.innerHeight * 0.7;
      });
    }

    // interact.js can also add DOM event listeners
    interact(document).on('DOMContentLoaded', resizeCanvases);
    interact(window).on('resize', resizeCanvases);
     
    interact.maxInteractions(Infinity);

    Snapping is used to modify the pointer coordinates so that they are always aligned to a grid.

      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })

    Like in the previous demo, multiple drags are enabled but an extra option, maxPerElement, needs to be changed to allow multiple drags on the same element.

      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })

    The movement angle is calculated with Math.atan2(event.dx, event.dy) and that’s used to set the hue of the paint color. event.speed is used to adjust the lightness.

    interact.js has tap and double tap events which are equivalent to click and double click but without the delay on mobile devices. Also, unlike regular click events, a tap isn’t fired if the mouse is moved before being released. (I’m working on adding more events like these).

      // clear the canvas on doubletap
      .on('doubletap', function (event) {
        ...

    It can also listen for regular DOM events. In the above demo it’s used to listen for window resize and document DOMContentLoaded.

      interact(document).on('DOMContentLoaded', resizeCanvases);
      interact(window).on('resize', resizeCanvases);

    Similar to jQuery, it can also be used for delegated events. For example:

    interact('input', { context: document.body })
      .on('keypress', function (event) {
        console.log(event.key);
      });

    Supplying element dimensions

    To get element dimensions interact.js normally uses:

    • Element#getBoundingClientRect() for SVGElements and
    • Element#getClientRects()[0] for HTMLElements (because it includes the element’s borders)

    and adds page scroll. This is done when checking which action to perform on an element, checking for drops, calculating 'self' origin and in a few other places. If your application keeps the dimensions of elements that are being interacted with, then it makes sense to use the application’s data instead of getting the DOMRect. To allow this, Interactables have a rectChecker() [docs] method to change how elements’ dimensions are determined. The method takes a function as an argument. When interact.js needs an element’s dimensions, the element is passed to that function and the return value is used.

    Graphic editor demo

    The “SVG editor” below has a Rectangle class to represent <rect class="edit-rectangle"/> elements in the DOM. Each rectangle object has dimensions, the element that the user sees and a draw method.

    See the Pen Interactable#rectChecker demo by Taye A (@taye) on CodePen.

    JavaScript rundown

    var svgCanvas = document.querySelector('svg'),
        svgNS = 'http://www.w3.org/2000/svg',
        rectangles = [];
     
    function Rectangle (x, y, w, h, svgCanvas) {
      this.x = x;
      this.y = y;
      this.w = w;
      this.h = h;
      this.stroke = 5;
      this.el = document.createElementNS(svgNS, 'rect');
     
      this.el.setAttribute('data-index', rectangles.length);
      this.el.setAttribute('class', 'edit-rectangle');
      rectangles.push(this);
     
      this.draw();
      svgCanvas.appendChild(this.el);
    }
     
    Rectangle.prototype.draw = function () {
      this.el.setAttribute('x', this.x + this.stroke / 2);
      this.el.setAttribute('y', this.y + this.stroke / 2);
      this.el.setAttribute('width' , this.w - this.stroke);
      this.el.setAttribute('height', this.h - this.stroke);
      this.el.setAttribute('stroke-width', this.stroke);
    };
     
    interact('.edit-rectangle')
      // change how interact gets the
      // dimensions of '.edit-rectangle' elements
      .rectChecker(function (element) {
        // find the Rectangle object that the element belongs to
        var rectangle = rectangles[element.getAttribute('data-index')];
     
        // return a suitable object for interact.js
        return {
          left  : rectangle.x,
          top   : rectangle.y,
          right : rectangle.x + rectangle.w,
          bottom: rectangle.y + rectangle.h
        };
      })

    Whenever interact.js needs to get the dimensions of one of the '.edit-rectangle' elements, it calls the rectChecker function that was specified. The function finds the Rectangle object using the element argument then creates and returns an appropriate object with left, right, top and bottom properties.

    This object is also used for restriction when the restrict elementRect option is set. In the slider demo from earlier, restriction used only the pointer coordinates. Here, restriction will try to prevent the element itself from being dragged out of the specified area.

      .inertia({
        // don't jump to the resume location
        // https://github.com/taye/interact.js/issues/13
        zeroResumeDelta: true
      })
      .restrict({
        // restrict to a parent element that matches this CSS selector
        drag: 'svg',
        // only restrict before ending the drag
        endOnly: true,
        // consider the element's dimensions when restricting
        elementRect: { top: 0, left: 0, bottom: 1, right: 1 }
      })

    The rectangles are made draggable and resizable.

      .draggable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
     
          rectangle.x += event.dx;
          rectangle.y += event.dy;
          rectangle.draw();
        }
      })
      .resizable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
     
          rectangle.w = Math.max(rectangle.w + event.dx, 10);
          rectangle.h = Math.max(rectangle.h + event.dy, 10);
          rectangle.draw();
        }
      });
     
    interact.maxInteractions(Infinity);

    Development and contributions

    I hope this article gives a good overview of how to use interact.js and the types of applications that I think it would be useful for. If not, there are more demos on the project homepage and you can throw questions or issues at Twitter or GitHub. I’d really like to make a comprehensive set of examples and documentation but I’ve been too busy with fixes and improvements. (I’ve also been too lazy :-P).

    Since the 1.0.0 release, user comments and contributions have led to loads of bug fixes and many new features.

    So please use it, share it, break it and help to make it better!

  2. Interview with Koen Kivits, winner of the Multi-touch Dev Derby

    Koen Kivits won the Multi-touch Dev Derby with TouchCycle, his wonderful TRON-inspired mobile game. Recently, I had the chance to learn more about Koen: his work, his ambitions, and his thoughts on the future of web development.

    Koen Kivits

    The interview

    How did you become interested in web development?

    I’ve been creating websites since high school, but I didn’t really get serious about web development until I started working two and a half years ago. I wasn’t specifically hired for web development, but I kind of ended up there. I came in just as our company was launching a major new web-based product, which has grown immensely since then. The challenges we faced during this ongoing growth and how we were able to solve them really made me view the web as a serious platform.

    Can you tell us a little about how TouchCycle works?

    The game itself basically consists of an arena with two or more players on it. Each player has a position and a target to which it is moving. As each player moves, it leaves a trail in the arena.

    Each segment in a player’s trail is defined as a simple linear equation, which makes it really easy to calculate intersections between segments. Collision detection is then done by checking whether a player’s upcoming trail segment intersects with an already existing trail segment.
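
    As a rough sketch of that idea (my own illustration, not TouchCycle’s actual code), two trail segments can be tested for intersection by solving their parametric line equations:

    // Test whether segments p1-p2 and p3-p4 intersect by solving
    // p1 + t*(p2 - p1) = p3 + u*(p4 - p3) for the parameters t and u.
    function segmentsIntersect(p1, p2, p3, p4) {
      var d = (p2.x - p1.x) * (p4.y - p3.y) - (p2.y - p1.y) * (p4.x - p3.x);
      if (d === 0) { return false; }  // parallel or collinear

      var t = ((p3.x - p1.x) * (p4.y - p3.y) - (p3.y - p1.y) * (p4.x - p3.x)) / d,
          u = ((p3.x - p1.x) * (p2.y - p1.y) - (p3.y - p1.y) * (p2.x - p1.x)) / d;

      // the crossing counts only if it lies within both segments
      return t >= 0 && t <= 1 && u >= 0 && u <= 1;
    }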

    The arena is drawn on a <canvas> element that is sized to fit the screen when the game starts. The <canvas> has three touch event handlers registered on it:

    • touchstart: register nearest unregistered player (if any) to the new touch and set its target
    • touchmove: update the target of the player that is registered to the moving touch
    • touchend: unregister the player

    Any touch events on the document itself are cancelled while the game is running in order to prevent any scrolling and zooming.
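
    A minimal sketch of that wiring (my own reconstruction, not Koen’s code; canvas, findNearestFreePlayer and the player objects’ target property are hypothetical names):

    var activeTouches = {};  // touch identifier -> player

    canvas.addEventListener('touchstart', function (event) {
      event.preventDefault();  // no scrolling or zooming while playing
      [].forEach.call(event.changedTouches, function (touch) {
        var player = findNearestFreePlayer(touch.clientX, touch.clientY);
        if (player) {
          activeTouches[touch.identifier] = player;
          player.target = { x: touch.clientX, y: touch.clientY };
        }
      });
    });

    canvas.addEventListener('touchmove', function (event) {
      event.preventDefault();
      [].forEach.call(event.changedTouches, function (touch) {
        var player = activeTouches[touch.identifier];
        if (player) { player.target = { x: touch.clientX, y: touch.clientY }; }
      });
    });

    canvas.addEventListener('touchend', function (event) {
      [].forEach.call(event.changedTouches, function (touch) {
        delete activeTouches[touch.identifier];
      });
    });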

    Everything around the main game (the menus, the notifications, etc.) is just plain HTML. Menu navigation is done with HTML anchors and a hashchange event handler that hides or shows content relevant to the current URL. Note that this means you can use your browser’s back and forward buttons to navigate within the game.
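
    That navigation pattern takes only a few lines (a sketch; the .screen class and 'main-menu' id are illustrative):

    // Show only the section whose id matches the current hash, so the
    // browser's back and forward buttons navigate between the menus.
    function showCurrentScreen() {
      var current = location.hash.replace('#', '') || 'main-menu';
      [].forEach.call(document.querySelectorAll('.screen'), function (screen) {
        screen.style.display = (screen.id === current) ? 'block' : 'none';
      });
    }

    window.addEventListener('hashchange', showCurrentScreen);
    showCurrentScreen();  // show the initial screen on page load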

    What was your biggest challenge in developing TouchCycle?

    Multitouch interaction was completely new to me and I had done very little work with the <canvas> element before, so it took me a while to read up on everything and get to work. I also had to spend some time tweaking the collision detection and the way a player follows your touch, so that players don’t crash into their own trails too easily.

    What makes the web an exciting platform for you?

    The openness of the platform, in several ways. Anyone with an internet connection can access the web using any device running a browser. Anyone can publish to the web: there’s no license required, no approval process and no being locked in to a specific vendor. Anyone can open up their browser’s developer tools to see how an app works and even tinker with it. Anyone can even contribute to the standards that make up the very platform itself!

    What new web technologies are you most excited about?

    I’m probably most excited about WebRTC. It opens up a lot of possibilities for web developers, especially when mobile support increases. For example, just think of how combining WebRTC, the Geolocation API and Device Orientation API would make for an awesome augmented reality app. The possibilities are limitless.

    If you could change one thing about the web, what would it be?

    I really like how the web is a collaborative effort. A problem of that collaboration, however, is conflicting interests leading to delayed or crippled new standards. A good example is the HTML <video> element, which I think is still not very usable today despite being such a basic feature.

    Browser vendors should be allowed some flexibility in the formats they support, but I think it would be a good thing if there were a minimum requirement of supporting at least one common open format.

    Do you have any advice for other ambitious web developers?

    As a developer I think I’ve learnt most from reading other people’s code. If you like a library or web app, it really pays to spend some time analysing its source code. It can be really inspiring to look at how other people solve problems; it can give you pointers on how to structure your own code and it can teach you about technologies you didn’t even know about.

    Also, don’t be afraid to read the specification of a standard every now and then to really learn about the technologies you’re using.

    Further reading

  3. Announcing the winners of the February 2013 Dev Derby!

    Last month, some of the most creative web developers out there pushed the limits of touch events and multi-touch interaction in the February Dev Derby contest. After looking through the entries, our three expert judges (Franck Lecollinet, Guillaume Lecollinet, and, filling in for Craig Cook this month, yours truly) decided on three winners and two runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with multi-touch today.

    Dev Derby

    The Results

    Winners

    Runners-up

    Naming just a few winners is always difficult, but the task was especially hard this time around. Just about every demo I came across impressed me more than the last, and I know our other judges faced a similar dilemma. Our winners managed to take a relatively familiar form of interaction and create experiences that were more creative, more fun, and more visually stunning than we would have ever predicted. Congratulations to them and to everyone else who amazed us last month.

    Want to get a head start on an upcoming Derby? We are now accepting demos related to mobile web development in general (March), Web Workers (April), and getUserMedia (May). Head over to the Dev Derby to get started.

  4. Firefox 4 Beta: Latest Update is Here – Experimenting With Multi-touch

    The latest Firefox 4 Beta has just been released (get it here). This beta comes with hundreds of bug fixes, improvements and multi-touch support for Windows 7 (see the release notes here). This article is about multi-touch support.

    Felipe Gomes is working on bringing multi-touch support to web content. In this latest beta, we are experimenting with this new feature.

    Playing with MultiTouch, HTML5 and CSS3:

    This video is hosted on YouTube and plays via the HTML5 video tag if you have it enabled.

    Multi-touch Events

    If you have a multi-touch capable display, touch events are sent to your web page, more or less like mouse events. Each input (created using your fingers) generates its own events:

    • MozTouchDown: Sent when the user begins a screen touch action.
    • MozTouchMove: Sent when the user moves his finger on the touch screen.
    • MozTouchUp: Sent when the user lifts his finger off the screen.

    Touch information

    Touch events provide several useful properties.

    • event.streamId: don’t forget, it’s multi-touch, which means that you have to deal with several events from several sources. So each event comes with an id to identify the input.
    • event.mozInputSource: the type of device used (mouse, pen, or finger, if the hardware supports it). This is a property of mouse events.
    • event.clientX/Y: the coordinates.
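
    Putting those together, a page could track each finger separately like this (a small sketch of my own against this experimental API, not code from the beta):

    var points = {};  // streamId -> latest coordinates for that finger

    function updateTouch(event) {
      points[event.streamId] = { x: event.clientX, y: event.clientY };
    }

    document.addEventListener('MozTouchDown', updateTouch, false);
    document.addEventListener('MozTouchMove', updateTouch, false);
    document.addEventListener('MozTouchUp', function (event) {
      delete points[event.streamId];  // this finger left the screen
    }, false);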

    Designing a touch UI

    You might want to have a specific UI for multi-touch capable devices. You can use the :-moz-system-metric(touch-enabled) pseudo-class or the -moz-touch-enabled media query to design a more finger-friendly UI.

    Note: For now, this feature only works with Windows 7. If you don’t have hardware that supports multi-touch, you can try Hernan’s multi-touch simulator.

    More joy:

    (This video was made by Felipe.)

    At the beginning of the video, you see how a webpage can get data about multi-touch input, correctly track points of contact and differentiate between touch input and pen input.

    In the second part, you see a visual application of multi-touch input in a fluid simulator, where each point of contact adds a particle source and the movement adds forces to the field.

    Both parts use HTML5’s canvas element to render their content.

    Like it?

    Edit: If you want more details, take a look at Felipe’s latest blog post.

  5. Beyond HTML5: experiments with interactive audio

    This is a re-post of an important post from David Humphrey who has been doing a lot of experiments on top of Mozilla’s extensible platform and doing experiments with multi-touch, sound, video, WebGL and all sorts of other goodies. It’s worth going through all of the demos below. You’ll find some stuff that will amaze and inspire you.

    David’s work is important because it’s showing where the web is going, and where Mozilla is helping to take it. It’s not enough that we’re working on HTML5, which we’re about finished with, but we’re trying to figure out what’s next. Mozilla’s platform, Gecko, is a huge part of why we’re able to experiment and learn as fast as we can. And that’s reflected with what’s possible here. It’s a web you can see, touch and interact with in new ways.

    David’s post follows:

    I’m working with an ever growing group of web, audio, and Mozilla developers on a project to expose audio spectrum data to JavaScript from Firefox’s audio and video elements. Today we show what we did at www2010.

    I’m in Raleigh, North Carolina, with Al MacDonald for the www2010 conference. We’re here to present our work on exposing audio data in the browser. Over the past month Corban, Charles, and a bunch of other friends have been working with us to refine the API and get new types of demos ready. We ended up with 11 demos, some of which I’ve shown here before. Here are the others.

    The first was done by Jacob Seidelin, and shows many cool 2D visualizations of audio using our API. You can see the live version on his site, or check out this video:

    The second and third demos were done by Charles Cliffe, and show 3D visualizations using WebGL and his CubicVR engine. These also show off his JavaScript beat detection code. Is JavaScript fast enough to do real-time analysis of audio and synchronized 3D graphics? Yes, yes it is. The live versions are here and here, and here are some videos:

    The fourth demo was done by Corban Brook and shows how audio data can be mixed live using script. Here he mutes the main audio, plays it, passes the data through a low pass filter written in JavaScript, then dumps the modified frames into a second audio element to be played. It’s a technique we need to apply more widely, as it holds a lot of potential. Here’s the live version, and here’s a video (check out his updated JavaScript synthesizer, which we also presented):
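
    The mixing technique described above can be sketched with the Audio Data API roughly as it later shipped in Firefox 4 (a reconstruction of the idea, not Corban’s actual code; the www2010 builds used an earlier variant of the API):

    // Sketch: mute the source, low-pass filter its raw samples in JS,
    // and play the filtered frames through a second audio element.
    var input  = document.querySelector('audio');
    var output = new Audio();
    var last = 0, alpha = 0.1;  // one-pole filter state and smoothing factor

    input.volume = 0;  // silence the original; we hear the filtered copy

    input.addEventListener('loadedmetadata', function () {
      output.mozSetup(input.mozChannels, input.mozSampleRate);
    }, false);

    input.addEventListener('MozAudioAvailable', function (event) {
      var samples  = event.frameBuffer;  // Float32Array of raw sample data
      var filtered = new Float32Array(samples.length);

      for (var i = 0; i < samples.length; i++) {
        last = last + alpha * (samples[i] - last);  // y[n] = y[n-1] + a*(x[n] - y[n-1])
        filtered[i] = last;
      }

      output.mozWriteAudio(filtered);  // queue the modified frames for playback
    }, false);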

    The fifth and sixth demos were done by Al (with the help of many). When I was last in Boston, for the Processing.js meetup at Bocoup, we met with Doug Schepers from the W3C. He loved our stuff, and was talking to us about ideas that would be cool to build. He pulled out his iPhone and showed us Brian Eno’s Bloom audio app. “It would be cool to do this in the browser.” Yeah, it is cool, and here it is, written in a few hundred lines of JavaScript and Processing.js (video 1, video 2):

    This demo also showcases the awesome work of Felipe Gomes, who has a patch to add multi-touch DOM events to Firefox. The method we’ve used here can be taken a lot further. Imagine being able to connect multiple browsers together for collaborative music creation, layering other audio underneath, mixing fragments vs. just oscillators, etc. We built this one in a week, and the web is capable of a lot more.

    One of the main points of our talk was to emphasize that what we’re talking about here isn’t just a concept, and it isn’t some far away future. This is real code, running in a real browser, and it’s all being done in HTML5 and JavaScript. The web is fast enough to do real-time audio processing now, powerful enough and expressive enough to create music. And the community of digital music and audio hackers, visualizers, etc. are hungry for it. So hungry that they are seeking us out, downloading our hacked builds and creating gorgeous web audio applications.

    We want to keep going, and we need help. We need help from those within Mozilla, the W3C, and other browsers to get this stuff into shipping browsers. We need the audio, digital music, accessibility, and web communities to come together in order to help us build js audio libraries and more sample applications. Yesterday Joe Hewitt was talking on twitter about how web browser vendors need to experiment more with non-standard APIs. I couldn’t agree more, and here’s a chance for people to put their money where their mouth is. Let’s make audio a scriptable part of the open web.

    I’m currently creating new builds of our updated patch for Firefox, and will post links to them here when I’m done. You can read more about the technical details of our work here, and get involved in the bug here. You can talk more with me on IRC in the processing.js channel (I’m humph on moznet), or talk to me on Twitter (@humphd) or by email. One way or another, get in touch so you can help us push this forward.

  6. a multi-touch drawing demo for Firefox 3.7

    Firefox Multitouch at MozChile – Drawing Canvas Experiment from Marcio Galli on Vimeo.

    A couple of months ago we featured a video that had some examples of multi-touch working in Firefox. At a recent event in South America, Marcio Galli put together a quick and fun drawing program based on the multi-touch code that we’ll have in a later release of Firefox. What’s really great is that he was able to put this together in just a couple of hours based on the web platform.

    There are three main components to the touch support in Firefox:

    1. Touch-based scrolling and panning for the browser. This allows you, as a user, to scroll web pages, select text, open menus, select buttons, etc. This is part of Firefox 3.6.

    2. A new CSS selector that tells you if you’re on a touch-enabled device: :-moz-system-metric(touch-enabled). You can use it in your CSS to adjust the size of UI elements to fit people’s fingers. This is part of Firefox 3.6.

    3. Exposing multi-touch data to web pages. This takes the form of DOM events, much like the mouse events you can catch today. This isn’t part of Firefox 3.6, but is likely to be part of Firefox 3.7.

    Although not all of this will be in our next release we thought it would be fun for people to see what will be possible with the release after 3.6.

    Note: You can find the sample code on Marcio’s page for the demo.

  7. bringing multi-touch to Firefox and the web

    The ever-energetic Felipe Gomes was nice enough to intern with Mozilla this summer in between busy semesters in Brazil. During that time he’s been working on multi-touch support for Firefox on Windows 7. A nice result of that work is that he’s also found ways to bring multi-touch support to the web. He’s made a short video and written up some short technical information to go with it.

    This post has also been cross-posted to Felipe’s personal blog.

    Multitouch on Firefox from Felipe on Vimeo.

    I’ve been anxious to demonstrate the progress on our multi-touch support for Firefox, and this video showcases some possible interactions and use cases for what web pages and webapps can do with a multi-touch device.

    We’re working on exposing the multi-touch data from the system to regular web pages through DOM Events, and all of these demos are built on top of that. They are simple HTML pages that receive events for each touch point and use them to build a custom multi-touch experience.

    We’re also adding CSS support to detect when you’re running on a touchscreen device. Using the pseudo-selector :-moz-system-metric(touch-enabled) you can apply specific styles to your page if it’s being viewed on a touchscreen device. That, along with physical CSS units (cm or in), makes it possible to adjust your webapp for a touchscreen experience.

    Firefox 3.6 will include the CSS property, but is unlikely to include the DOM events described below.

    Here is an example of what the API looks like for now. We have three new DOM events (MozTouchDown, MozTouchMove and MozTouchRelease), which are similar to mouse events, except that they have a new attribute called streamId that can uniquely identify the same finger being tracked in a series of MozTouch events. The following snippet is the code for the first demo where we move independent <div>s under the X/Y position of each touch point.

    var assignedFingers = {};
    var lastused = 0;
     
    function touchMove(event) {
        var divId;

        if (assignedFingers[event.streamId]) {
            // this finger already has a div tracking it
            divId = assignedFingers[event.streamId];
        }
        else if (lastused < 4) {
            // assign the next free div to this new finger
            divId = "trackingdiv" + (++lastused);
            assignedFingers[event.streamId] = divId;
        }
        else {
            return;  // all four tracking divs are in use
        }

        document.getElementById(divId).style.left = event.clientX + 'px';
        document.getElementById(divId).style.top  = event.clientY + 'px';
    }

    document.addEventListener("MozTouchMove", touchMove, false);
    document.addEventListener("MozTouchRelease", function (event) {
        delete assignedFingers[event.streamId];
        lastused--;
    }, false);

    On the wiki page you can see code snippets for the other demos. Leave any comments regarding the demos or the API on my weblog post. We really welcome feedback and hope to start some good discussion in this area. Hopefully, as touch devices (mobile and notebooks) become more and more popular, we’ll see new and creative ways to use touch and multitouch on the web.