Multi-touch Articles

  1. Firefox 4 Beta: Latest Update is Here – Experimenting With Multi-touch

    The latest Firefox 4 Beta has just been released (get it here). This beta comes with hundreds of bug fixes, improvements and multi-touch support for Windows 7 (see the release notes here). This article is about multi-touch support.

    Felipe Gomes is working on bringing multi-touch support to web content. In this latest beta, we are experimenting with this new feature.

    Playing with MultiTouch, HTML5 and CSS3:

    This video is hosted by YouTube and uses the HTML5 video tag if you have enabled it (see here).

    Multi-touch Events

    If you have a multi-touch capable display, touch events are sent to your web page, more or less like mouse events. Each input (created using your fingers) generates its own events:

    • MozTouchDown: Sent when the user begins a screen touch action.
    • MozTouchMove: Sent when the user moves his finger on the touch screen.
    • MozTouchUp: Sent when the user lifts his finger off the screen.

    Touch information

    Touch events provide several useful properties.

    • event.streamId: since this is multi-touch, you may be handling several simultaneous inputs, so each event carries an id that identifies which touch point it belongs to.
    • event.mozInputSource: the type of device used (mouse, pen, or finger, if the hardware supports it). This property is also available on regular mouse events.
    • event.clientX/Y: the coordinates of the touch point.
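
    As a rough sketch of how a page might consume the events and properties above (this handler simply logs each contact point; the code is illustrative, not taken from Firefox or the beta demos):

    var activeTouches = {};

    document.addEventListener("MozTouchDown", function (event) {
        // Remember each contact by its streamId.
        activeTouches[event.streamId] = true;
    }, false);

    document.addEventListener("MozTouchMove", function (event) {
        if (activeTouches[event.streamId]) {
            console.log("touch " + event.streamId + " at " +
                        event.clientX + "," + event.clientY);
        }
    }, false);

    document.addEventListener("MozTouchUp", function (event) {
        // The finger left the screen; stop tracking it.
        delete activeTouches[event.streamId];
    }, false);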

    Designing a touch UI

    You might want a specific UI for multi-touch capable devices. You can use the :-moz-system-metric(touch-enabled) pseudo-class or the -moz-touch-enabled media query to design a more finger-friendly UI.
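
    For example, a stylesheet along these lines could enlarge controls for touch users (a sketch only; the .toolbar-button class is hypothetical, and the media query may also be written with an explicit value, e.g. -moz-touch-enabled: 1):

    /* Grow hit targets when the browser reports a touch-enabled device. */
    .toolbar-button:-moz-system-metric(touch-enabled) {
        min-width: 1cm;   /* physical units roughly match finger size */
        min-height: 1cm;
    }

    @media all and (-moz-touch-enabled) {
        .toolbar-button {
            font-size: 150%;
        }
    }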

    Note: For now, this feature only works with Windows 7. If you don’t have hardware that supports multi-touch, you can try Hernan’s multi-touch simulator.

    More joy:

    (This video was made by Felipe; see more here.)

    At the beginning of the video, you see how a webpage can get data about multi-touch input, correctly track points of contact and differentiate between touch input and pen input.

    In the second part, you see a visual application of multi-touch input on a fluid simulator, where each point of contact adds a particle source and the movement adds forces to the field.

    Both parts use HTML5’s canvas element to render their content.

    Like it?

    Edit: If you want more details, take a look at Felipe’s latest blog post.

  2. a multi-touch drawing demo for Firefox 3.7

    Firefox Multitouch at MozChile – Drawing Canvas Experiment from Marcio Galli on Vimeo.

    A couple of months ago we featured a video that had some examples of multi-touch working in Firefox. At a recent event in South America, Marcio Galli put together a quick and fun drawing program based on the multi-touch code that we’ll have in a later release of Firefox. What’s really great is that he was able to put this together in just a couple of hours based on the web platform.

    There are three main components to the touch support in Firefox:

    1. Touch-based scrolling and panning for the browser. This allows you, as a user, to scroll web pages, select text, open menus, select buttons, etc. This is part of Firefox 3.6.

    2. A new CSS selector that tells you whether you’re on a touch-enabled device: -moz-system-metric(touch-enabled). You can use this in your CSS to adjust the size of UI elements to fit people’s fingers. This is part of Firefox 3.6.

    3. Exposing multi-touch data to web pages. This takes the form of DOM events much like mouse events you can catch today. This isn’t part of Firefox 3.6, but is likely to be part of Firefox 3.7.

    Although not all of this will be in our next release, we thought it would be fun for people to see what will be possible with the release after 3.6.

    Note: You can find the sample code on Marcio’s page for the demo.

  3. bringing multi-touch to Firefox and the web

    The ever-energetic Felipe Gomes was nice enough to intern with Mozilla this summer in between busy semesters in Brazil. During that time he’s been working on multi-touch support for Firefox on Windows 7. A nice result of that work is that he’s also found ways to bring multi-touch support to the web. He’s made a short video and written up some technical information to go with it.

    This post has also been cross-posted to Felipe’s personal blog.

    Multitouch on Firefox from Felipe on Vimeo.

    I’ve been anxious to demonstrate the progress on our multi-touch support for Firefox, and this video showcases some possible interactions and use cases for what web pages and webapps can do with a multi-touch device.

    We’re working on exposing the multi-touch data from the system to regular web pages through DOM Events, and all of these demos are built on top of that. They are simple HTML pages that receive events for each touch point and use them to build a custom multi-touch experience.

    We’re also adding CSS support to detect when you’re running on a touchscreen device. Using the pseudo-selector :-moz-system-metric(touch-enabled) you can apply specific styles for your page if it’s being viewed on a touchscreen device. That, along with physical CSS units (cm or in), makes it possible to adjust your webapp for a touchscreen experience.

    Firefox 3.6 will include the CSS property, but is unlikely to include the DOM events described below.

    Here is an example of what the API looks like for now. We have three new DOM events (MozTouchDown, MozTouchMove and MozTouchRelease), which are similar to mouse events, except that they have a new attribute called streamId that can uniquely identify the same finger being tracked in a series of MozTouch events. The following snippet is the code for the first demo where we move independent <div>s under the X/Y position of each touch point.

    var assignedFingers = {};
    var lastused = 0;

    function touchMove(event) {
        var divId;

        // Ignore new touches once all four tracking divs are in use.
        if (!assignedFingers[event.streamId] && lastused >= 4)
            return;

        if (assignedFingers[event.streamId]) {
            // Already tracking this finger: reuse its div.
            divId = assignedFingers[event.streamId];
        }
        else {
            // New finger: claim the next free div and remember the mapping.
            divId = "trackingdiv" + (++lastused);
            assignedFingers[event.streamId] = divId;
        }

        document.getElementById(divId).style.left = event.clientX + 'px';
        document.getElementById(divId).style.top  = event.clientY + 'px';
    }

    document.addEventListener("MozTouchMove", touchMove, false);
    document.addEventListener("MozTouchRelease", function (event) {
        // Free the finger's slot so a later touch can reuse it.
        delete assignedFingers[event.streamId];
        lastused--;
    }, false);

    On the wiki page you can see code snippets for the other demos. Leave any comments regarding the demos or the API on my weblog post. We really welcome feedback and hope to start some good discussion in this area. Hopefully, as touch devices (mobile and notebooks) become more and more popular, we’ll see new and creative ways to use touch and multi-touch on the web.

  4. Beyond HTML5: experiments with interactive audio

    This is a re-post of an important post from David Humphrey who has been doing a lot of experiments on top of Mozilla’s extensible platform and doing experiments with multi-touch, sound, video, WebGL and all sorts of other goodies. It’s worth going through all of the demos below. You’ll find some stuff that will amaze and inspire you.

    David’s work is important because it’s showing where the web is going, and where Mozilla is helping to take it. It’s not enough that we’re working on HTML5, which is nearly finished; we’re also trying to figure out what’s next. Mozilla’s platform, Gecko, is a huge part of why we’re able to experiment and learn as fast as we can. And that’s reflected in what’s possible here. It’s a web you can see, touch, and interact with in new ways.

    David’s post follows:

    I’m working with an ever growing group of web, audio, and Mozilla developers on a project to expose audio spectrum data to JavaScript from Firefox’s audio and video elements. Today we show what we did at www2010.

    I’m in Raleigh, North Carolina, with Al MacDonald for the www2010 conference. We’re here to present our work on exposing audio data in the browser. Over the past month Corban, Charles, and a bunch of other friends have been working with us to refine the API and get new types of demos ready. We ended up with 11 demos, some of which I’ve shown here before. Here are the others.

    The first was done by Jacob Seidelin, and shows many cool 2D visualizations of audio using our API. You can see the live version on his site, or check out this video:

    The second and third demos were done by Charles Cliffe, and show 3D visualizations using WebGL and his CubicVR engine. These also show off his JavaScript beat detection code. Is JavaScript fast enough to do real-time analysis of audio and synchronized 3D graphics? Yes, yes it is. The live versions are here and here, and here are some videos:

    The fourth demo was done by Corban Brook and shows how audio data can be mixed live using script. Here he mutes the main audio, plays it, passes the data through a low pass filter written in JavaScript, then dumps the modified frames into a second audio element to be played. It’s a technique we need to apply more widely, as it holds a lot of potential. Here’s the live version, and here’s a video (check out his updated JavaScript synthesizer, which we also presented):
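
    To make the idea concrete, here is a minimal sketch of that technique, assuming the experimental audio API names that later shipped in Firefox’s Audio Data API (MozAudioAvailable, mozSetup, mozWriteAudio); Corban’s actual filter is more sophisticated, and the element id here is hypothetical:

    var source = document.getElementById("source"); // a muted, playing <audio>
    var output = new Audio();
    output.mozSetup(2, 44100);                      // 2 channels at 44.1 kHz

    var alpha = 0.1;                                // smoothing factor; lower cuts more highs
    var last = 0;

    source.addEventListener("MozAudioAvailable", function (event) {
        var samples = event.frameBuffer;
        var filtered = new Float32Array(samples.length);
        // One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
        // For brevity this treats the interleaved stereo data as one stream.
        for (var i = 0; i < samples.length; i++) {
            last += alpha * (samples[i] - last);
            filtered[i] = last;
        }
        output.mozWriteAudio(filtered);             // play the modified frames
    }, false);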

    The fifth and sixth demos were done by Al (with the help of many). When I was last in Boston, for the Processing.js meetup at Bocoup, we met with Doug Schepers from the W3C. He loved our stuff, and was talking to us about ideas that would be cool to build. He pulled out his iPhone and showed us Brian Eno’s Bloom audio app. “It would be cool to do this in the browser.” Yeah, it is cool, and here it is, written in a few hundred lines of JavaScript and Processing.js (video 1, video 2):

    This demo also showcases the awesome work of Felipe Gomes, who has a patch to add multi-touch DOM events to Firefox. The method we’ve used here can be taken a lot further. Imagine being able to connect multiple browsers together for collaborative music creation, layering other audio underneath, mixing fragments vs. just oscillators, etc. We built this one in a week, and the web is capable of a lot more.

    One of the main points of our talk was to emphasize that what we’re talking about here isn’t just a concept, and it isn’t some far away future. This is real code, running in a real browser, and it’s all being done in HTML5 and JavaScript. The web is fast enough to do real-time audio processing now, and powerful and expressive enough to create music. And the community of digital music and audio hackers, visualizers, and the like is hungry for it. So hungry that they are seeking us out, downloading our hacked builds, and creating gorgeous web audio applications.

    We want to keep going, and we need help. We need help from those within Mozilla, the W3C, and other browsers to get this stuff into shipping browsers. We need the audio, digital music, accessibility, and web communities to come together in order to help us build js audio libraries and more sample applications. Yesterday Joe Hewitt was talking on twitter about how web browser vendors need to experiment more with non-standard APIs. I couldn’t agree more, and here’s a chance for people to put their money where their mouth is. Let’s make audio a scriptable part of the open web.

    I’m currently creating new builds of our updated patch for Firefox, and will post links to them here when I’m done. You can read more about the technical details of our work here, and get involved in the bug here. You can talk more with me on irc in the processing.js channel (I’m humph on moznet), or talk to me on twitter (@humphd) or by email. One way or another, get in touch so you can help us push this forward.

  5. Interview with Koen Kivits, winner of the Multi-touch Dev Derby

    Koen Kivits won the Multi-touch Dev Derby with TouchCycle, his wonderful TRON-inspired mobile game. Recently, I had the chance to learn more about Koen: his work, his ambitions, and his thoughts on the future of web development.

    Koen Kivits

    The interview

    How did you become interested in web development?

    I’ve been creating websites since high school, but I didn’t really get serious about web development until I started working two and a half years ago. I wasn’t specifically hired for web development, but I kind of ended up there. I came in just as our company was launching a major new web based product, which has grown immensely since then. The challenges we faced during this ongoing growth and how we were able to solve them really made me view the web as a serious platform.

    Can you tell us a little about how TouchCycle works?

    The game itself basically consists of an arena with 2 or more players on it. Each player has a position and a target to which it is moving. As each player moves, it leaves a trail in the arena.

    Each segment in a player’s trail is defined as a simple linear equation, which makes it really easy to calculate intersections between segments. Collision detection is then done by checking whether a player’s upcoming trail segment intersects with an already existing trail segment.
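
    That calculation might look something like this (a hedged sketch, not Koen’s actual code; point objects with x/y fields are assumed):

    // Return true if segment p1-p2 intersects segment p3-p4.
    // Solves p1 + t*(p2 - p1) = p3 + u*(p4 - p3) for t and u.
    function segmentsIntersect(p1, p2, p3, p4) {
        var d = (p2.x - p1.x) * (p4.y - p3.y) - (p2.y - p1.y) * (p4.x - p3.x);
        if (d === 0) {
            return false; // parallel segments (collinear overlap ignored for brevity)
        }
        var t = ((p3.x - p1.x) * (p4.y - p3.y) - (p3.y - p1.y) * (p4.x - p3.x)) / d;
        var u = ((p3.x - p1.x) * (p2.y - p1.y) - (p3.y - p1.y) * (p2.x - p1.x)) / d;
        // Both parameters must fall within their segments.
        return t >= 0 && t <= 1 && u >= 0 && u <= 1;
    }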

    The arena is drawn on a <canvas> element that is sized to fit the screen when the game starts. The <canvas> has 3 touch event handlers registered to it:

    • touchstart: register nearest unregistered player (if any) to the new touch and set its target
    • touchmove: update the target of the player that is registered to the moving touch
    • touchend: unregister the player

    Any touch events on the document itself are cancelled while the game is running in order to prevent any scrolling and zooming.
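
    Wired together, the handlers might look roughly like this (a sketch; registerNearestPlayer, updateTarget, and unregisterPlayer are hypothetical stand-ins for the game logic described above, as is the canvas id):

    var canvas = document.getElementById("arena");

    canvas.addEventListener("touchstart", function (event) {
        for (var i = 0; i < event.changedTouches.length; i++) {
            var touch = event.changedTouches[i];
            registerNearestPlayer(touch.identifier, touch.clientX, touch.clientY);
        }
    }, false);

    canvas.addEventListener("touchmove", function (event) {
        for (var i = 0; i < event.changedTouches.length; i++) {
            var touch = event.changedTouches[i];
            updateTarget(touch.identifier, touch.clientX, touch.clientY);
        }
    }, false);

    canvas.addEventListener("touchend", function (event) {
        for (var i = 0; i < event.changedTouches.length; i++) {
            unregisterPlayer(event.changedTouches[i].identifier);
        }
    }, false);

    // Cancel touches on the document itself to prevent scrolling and zooming.
    document.addEventListener("touchmove", function (event) {
        event.preventDefault();
    }, false);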

    Everything around the main game (the menus, the notifications, etc.) is just plain HTML. Menu navigation is done with HTML anchors and a hashchange event handler that hides or shows content relevant to the current URL. Note that this means you can use your browser’s back and forward buttons to navigate within the game.
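
    That navigation pattern can be as small as this (a sketch; the .screen markup and default "menu" id are hypothetical):

    // Show only the element whose id matches the current hash.
    window.addEventListener("hashchange", function () {
        var current = location.hash.slice(1) || "menu";
        var screens = document.querySelectorAll(".screen");
        for (var i = 0; i < screens.length; i++) {
            screens[i].style.display = (screens[i].id === current) ? "block" : "none";
        }
    }, false);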

    What was your biggest challenge in developing TouchCycle?

    Multitouch interaction was completely new to me and I had done very little work with the <canvas> element before, so it took me a while to read up on everything and get to work. I also had to spend some time tweaking the collision detection and the way a player follows your touch, so that players wouldn’t crash into their own trails too easily.

    What makes the web an exciting platform for you?

    The openness of the platform, in several ways. Anyone with an internet connection can access the web using any device running a browser. Anyone can publish to the web: there’s no license required, no approval process, and no being locked into a specific vendor. Anyone can open up their browser’s developer tools to see how an app works and even tinker with it. Anyone can even contribute to the standards that make up the very platform itself!

    What new web technologies are you most excited about?

    I’m probably most excited about WebRTC. It opens up a lot of possibilities for web developers, especially when mobile support increases. For example, just think of how combining WebRTC, the Geolocation API and Device Orientation API would make for an awesome augmented reality app. The possibilities are limitless.

    If you could change one thing about the web, what would it be?

    I really like how the web is a collaborative effort. A problem of that collaboration, however, is conflicting interests leading to delayed or crippled new standards. A good example is the HTML <video> element, which I think is still not very usable today despite being such a basic feature.

    Browser vendors should be allowed some flexibility in the formats they support, but I think it would be a good thing if there were a minimum requirement of supporting at least one common open format.

    Do you have any advice for other ambitious web developers?

    As a developer I think I’ve learnt most from reading other people’s code. If you like a library or web app, it really pays to spend some time analysing its source code. It can be really inspiring to look at how other people solve problems; it can give you pointers on how to structure your own code and it can teach you about technologies you didn’t even know about.

    Also, don’t be afraid to read the specification of a standard every now and then to really learn about the technologies you’re using.

  6. Announcing the winners of the February 2013 Dev Derby!

    Last month, some of the most creative web developers out there pushed the limits of touch events and multi-touch interaction in the February Dev Derby contest. After looking through the entries, our three expert judges (Franck Lecollinet, Guillaume Lecollinet, and, filling in for Craig Cook this month, yours truly) decided on three winners and two runners-up.

    Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with multi-touch today.

    The Results

    Naming just a few winners is always difficult, but the task was especially hard this time around. Just about every demo I came across impressed me more than the last, and I know our other judges faced a similar dilemma. Our winners managed to take a relatively familiar form of interaction and create experiences that were more creative, more fun, and more visually stunning than we would have ever predicted. Congratulations to them and to everyone else who amazed us last month.

    Want to get a head start on an upcoming Derby? We are now accepting demos related to mobile web development in general (March), Web Workers (April), and getUserMedia (May). Head over to the Dev Derby to get started.