The latest Firefox 4 Beta has just been released. It comes with hundreds of bug fixes and improvements, including multi-touch support for Windows 7 (see the release notes for details). This article is about the multi-touch support.
A video demonstration is hosted on YouTube; it plays through the HTML5 video tag if you have enabled that option.
If you have a multi-touch capable display, touch events are sent to your web page, much like mouse events. Each input point (a finger on the screen) generates its own events:
MozTouchDown: Sent when the user begins a screen touch action.
MozTouchMove: Sent when the user moves a finger on the touch screen.
MozTouchUp: Sent when the user lifts a finger off the screen.
Touch events provide several useful properties.
event.streamId: don’t forget, it’s multi-touch, which means you may have to deal with several simultaneous inputs. Each event therefore carries an id that identifies which input (which finger) it belongs to.
event.mozInputSource: the type of device that generated the event (mouse, pen, or finger, if the hardware supports the distinction). This property is also present on regular mouse events.
event.clientX/Y: the coordinates of the touch point within the viewport.
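Putting these pieces together, per-finger state can be tracked by keying on streamId. The MozTouch* events were an experimental, Windows 7-only API, so the listener wiring below is shown as comments; the tracker itself is a hypothetical helper, not code from the original article.

```javascript
// Minimal sketch: track active touch points by streamId so each
// finger can be handled independently.
class TouchTracker {
  constructor() {
    this.points = new Map(); // streamId -> { x, y }
  }
  down(event) {
    this.points.set(event.streamId, { x: event.clientX, y: event.clientY });
  }
  move(event) {
    const p = this.points.get(event.streamId);
    if (p) { p.x = event.clientX; p.y = event.clientY; }
  }
  up(event) {
    this.points.delete(event.streamId);
  }
}

// In the browser you would wire it up like this:
// const tracker = new TouchTracker();
// document.addEventListener("MozTouchDown", e => tracker.down(e), false);
// document.addEventListener("MozTouchMove", e => tracker.move(e), false);
// document.addEventListener("MozTouchUp", e => tracker.up(e), false);
```

Each finger then shows up as one entry in the map, updated independently of the others.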
Designing a touch UI
You might want a specific UI for multi-touch capable devices. You can use the :-moz-system-metric(touch-enabled) pseudo-class or the -moz-touch-enabled media query to design a more finger-friendly UI.
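As a sketch of how those hooks could be used (both were experimental, Gecko-only features at the time, so the exact syntax may vary; the selectors and sizes here are illustrative):

```css
/* Larger hit targets when the device reports touch support. */
button:-moz-system-metric(touch-enabled) {
  min-height: 1cm;   /* physical units help match finger size */
  font-size: larger;
}

@media (-moz-touch-enabled: 1) {
  a, button { padding: 0.5cm; }
}
```

Physical units such as cm or in are handy here because a finger has roughly the same physical size regardless of screen resolution.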
Note: For now, this feature only works with Windows 7. If you don’t have hardware that supports multi-touch, you can try Hernan’s multi-touch simulator.
There are three main components to the touch support in Firefox:
1. Touch-based scrolling and panning for the browser. This allows you, as a user, to scroll web pages, select text, open menus, press buttons, etc. This is part of Firefox 3.6.
2. A new CSS selector that tells you whether you’re on a touch-enabled device: -moz-system-metric(touch-enabled). You can use this in your CSS to adjust the size of UI elements to fit people’s fingers. This is part of Firefox 3.6.
3. Exposing multi-touch data to web pages. This takes the form of DOM events much like mouse events you can catch today. This isn’t part of Firefox 3.6, but is likely to be part of Firefox 3.7.
Although not all of this will be in our next release we thought it would be fun for people to see what will be possible with the release after 3.6.
The ever-energetic Felipe Gomes was nice enough to intern with Mozilla this summer in between busy semesters in Brazil. During that time he’s been working on multi-touch support for Firefox on Windows 7. A nice result of that work is that he’s also found ways to bring multi-touch support to the web. He’s made a short video and written up some short technical information to go with it.
I’ve been anxious to demonstrate the progress on our multi-touch support for Firefox, and this video showcases some possible interactions and use cases for what web pages and webapps can do with a multi-touch device.
We’re working on exposing the multi-touch data from the system to regular web pages through DOM Events, and all of these demos are built on top of that. They are simple HTML pages that receive events for each touch point and use them to build a custom multi-touch experience.
We’re also adding CSS support to detect when you’re running on a touchscreen device. Using the pseudo-selector :-moz-system-metric(touch-enabled) you can apply specific styles to your page if it’s being viewed on a touchscreen device. That, along with physical CSS units (cm or in), makes it possible to adjust your webapp for a touchscreen experience.
Firefox 3.6 will include the CSS property, but is unlikely to include the DOM events described below.
Here is an example of what the API looks like for now. We have three new DOM events (MozTouchDown, MozTouchMove and MozTouchRelease), which are similar to mouse events except that they have a new attribute called streamId that uniquely identifies the same finger being tracked across a series of MozTouch events. The following snippet is the code for the first demo, where we move independent <div>s under the X/Y position of each touch point.
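The demo described above can be sketched roughly like this. This is a hedged reconstruction, not the original snippet: the element selection, class name, and modulo mapping from streamId to a <div> are assumptions.

```javascript
// Pure helper: turn an event's coordinates into CSS positions.
function positionFor(event) {
  return { left: event.clientX + "px", top: event.clientY + "px" };
}

// Browser wiring (assumes one absolutely-positioned <div class="finger">
// per expected finger):
// const divs = document.querySelectorAll("div.finger");
// document.addEventListener("MozTouchMove", function (event) {
//   const div = divs[event.streamId % divs.length];
//   const pos = positionFor(event);
//   div.style.left = pos.left;
//   div.style.top = pos.top;
// }, false);
```

Because every event carries its streamId, each finger consistently drives the same <div> for as long as it stays on the screen.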
On the wiki page you can see code snippets for the other demos. Leave any comments regarding the demos or the API on my weblog post. We really welcome feedback and hope to start some good discussion on this area. Hopefully as touch devices (mobile and notebooks) are getting more and more popular we’ll see new and creative ways to use touch and multitouch on the web.
This is a re-post of an important post from David Humphrey who has been doing a lot of experiments on top of Mozilla’s extensible platform and doing experiments with multi-touch, sound, video, WebGL and all sorts of other goodies. It’s worth going through all of the demos below. You’ll find some stuff that will amaze and inspire you.
David’s work is important because it’s showing where the web is going, and where Mozilla is helping to take it. It’s not enough that we’re working on HTML5, which we’re about finished with, but we’re trying to figure out what’s next. Mozilla’s platform, Gecko, is a huge part of why we’re able to experiment and learn as fast as we can. And that’s reflected with what’s possible here. It’s a web you can see, touch and interact with in new ways.
David’s post follows:
I’m in Raleigh, North Carolina, with Al MacDonald for the www2010 conference. We’re here to present our work on exposing audio data in the browser. Over the past month Corban, Charles, and a bunch of other friends have been working with us to refine the API and get new types of demos ready. We ended up with 11 demos, some of which I’ve shown here before. Here are the others.
This demo also showcases the awesome work of Felipe Gomes, who has a patch to add multi-touch DOM events to Firefox. The method we’ve used here can be taken a lot further. Imagine being able to connect multiple browsers together for collaborative music creation, layering other audio underneath, mixing fragments vs. just oscillators, etc. We built this one in a week, and the web is capable of a lot more.
We want to keep going, and we need help. We need help from those within Mozilla, the W3C, and other browsers to get this stuff into shipping browsers. We need the audio, digital music, accessibility, and web communities to come together in order to help us build JavaScript audio libraries and more sample applications. Yesterday Joe Hewitt was talking on Twitter about how web browser vendors need to experiment more with non-standard APIs. I couldn’t agree more, and here’s a chance for people to put their money where their mouth is. Let’s make audio a scriptable part of the open web.
I’m currently creating new builds of our updated patch for Firefox, and will post links to them here when I’m done. You can read more about the technical details of our work here, and get involved in the bug here. You can talk more with me on IRC in the processing.js channel (I’m humph on moznet), or talk to me on Twitter (@humphd) or by email. One way or another, get in touch so you can help us push this forward.
Koen Kivits won the Multi-touch Dev Derby with TouchCycle, his wonderful TRON-inspired mobile game. Recently, I had the chance to learn more about Koen: his work, his ambitions, and his thoughts on the future of web development.
How did you become interested in web development?
I’ve been creating websites since high school, but I didn’t really get serious about web development until I started working two and a half years ago. I wasn’t specifically hired for web development, but I kind of ended up there. I came in just as our company was launching a major new web based product, which has grown immensely since then. The challenges we faced during this ongoing growth and how we were able to solve them really made me view the web as a serious platform.
Can you tell us a little about how TouchCycle works?
The game itself basically consists of an arena with two or more players in it. Each player has a position and a target toward which it is moving. As each player moves, it leaves a trail in the arena.
Each segment in a player’s trail is defined as a simple linear equation, which makes it easy to calculate intersections between segments. Collision detection is then done by checking whether a player’s upcoming trail segment intersects with an already existing trail segment.
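The intersection test Koen describes can be sketched with the standard parametric form of two line segments. This is a hypothetical helper, not TouchCycle's actual code; the segment representation { x1, y1, x2, y2 } is an assumption.

```javascript
// Returns true if segments a and b cross. Each segment is
// { x1, y1, x2, y2 }. We solve a.start + t * a.dir = b.start + u * b.dir
// and check that both parameters fall inside [0, 1].
function segmentsIntersect(a, b) {
  const d1x = a.x2 - a.x1, d1y = a.y2 - a.y1;
  const d2x = b.x2 - b.x1, d2y = b.y2 - b.y1;
  const denom = d1x * d2y - d1y * d2x;
  if (denom === 0) return false; // parallel (or collinear) segments
  const t = ((b.x1 - a.x1) * d2y - (b.y1 - a.y1) * d2x) / denom;
  const u = ((b.x1 - a.x1) * d1y - (b.y1 - a.y1) * d1x) / denom;
  return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}
```

Checking each new segment of a player's trail against the existing segments with a test like this is all the collision detection the game needs.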
The arena is drawn on a <canvas> element that is sized to fit the screen when the game starts. The <canvas> has 3 touch event handlers registered to it:
touchstart: register nearest unregistered player (if any) to the new touch and set its target
touchmove: update the target of the player that is registered to the moving touch
touchend: unregister the player associated with the ended touch
Any touch events on the document itself are cancelled while the game is running in order to prevent any scrolling and zooming.
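The registration step in the handlers above can be sketched like this. The helper and the player shape ({ x, y, touchId, target }) are assumptions for illustration, not the game's actual code.

```javascript
// On touchstart, find the nearest player that has no touch
// assigned yet; returns null if every player is taken.
function nearestUnregisteredPlayer(players, touch) {
  let best = null, bestDist = Infinity;
  for (const p of players) {
    if (p.touchId !== null) continue; // already following a finger
    const dx = p.x - touch.clientX, dy = p.y - touch.clientY;
    const dist = dx * dx + dy * dy;   // squared distance is enough to compare
    if (dist < bestDist) { bestDist = dist; best = p; }
  }
  return best;
}

// Browser wiring, with document-level touches cancelled to stop
// scrolling and zooming while the game runs:
// canvas.addEventListener("touchstart", e => {
//   for (const touch of e.changedTouches) {
//     const p = nearestUnregisteredPlayer(players, touch);
//     if (p) {
//       p.touchId = touch.identifier;
//       p.target = { x: touch.clientX, y: touch.clientY };
//     }
//   }
// });
// document.addEventListener("touchmove", e => e.preventDefault(), { passive: false });
```

Keying players on touch.identifier mirrors how the game keeps each finger attached to its own player for the duration of the touch.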
Everything around the main game (the menus, the notifications, etc.) is just plain HTML. Menu navigation is done with HTML anchors and a hashchange event handler that hides or shows content relevant to the current URL. Note that this means you can use your browser’s back and forward buttons to navigate within the game.
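The hashchange-based menu navigation could look something like the sketch below. The screen ids, the "main" fallback, and the .screen class are all assumptions, not TouchCycle's actual markup.

```javascript
// Map a URL fragment to the id of the menu screen to display.
// "#settings" -> "settings"; an empty or bare "#" hash falls back
// to the main menu.
function screenIdFromHash(hash) {
  const id = (hash || "").replace(/^#/, "");
  return id === "" ? "main" : id;
}

// Browser wiring: show the screen matching the hash, hide the rest.
// window.addEventListener("hashchange", () => {
//   const current = screenIdFromHash(location.hash);
//   for (const screen of document.querySelectorAll(".screen")) {
//     screen.hidden = (screen.id !== current);
//   }
// });
```

Since every menu state is a real URL fragment, the browser's back and forward buttons navigate the menus for free, as the interview notes.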
What was your biggest challenge in developing TouchCycle?
Multitouch interaction was completely new to me and I had done very little work with the <canvas> element before, so it took me a while to read up on everything and get to work. I also had to spend some time tweaking the collision detection and the way a player follows your touch, so that players wouldn’t crash into their own trails too easily.
What makes the web an exciting platform for you?
The openness of the platform, in several ways. Anyone with an internet connection can access the web using any device running a browser. Anyone can publish to the web–there’s no license required, no approval process and no being locked in to a specific vendor. Anyone can open up their browser’s developer tools to see how an app works and even tinker with it. Anyone can even contribute to the standards that make up the very platform itself!
What new web technologies are you most excited about?
I’m probably most excited about WebRTC. It opens up a lot of possibilities for web developers, especially when mobile support increases. For example, just think of how combining WebRTC, the Geolocation API and Device Orientation API would make for an awesome augmented reality app. The possibilities are limitless.
If you could change one thing about the web, what would it be?
I really like how the web is a collaborative effort. A problem of that collaboration, however, is conflicting interests leading to delayed or crippled new standards. A good example is the HTML <video> element, which I think is still not very usable today despite being such a basic feature.
Browser vendors should be allowed some flexibility in the formats they support, but I think it would be a good thing if there were a minimum requirement of supporting at least one common open format.
Do you have any advice for other ambitious web developers?
As a developer I think I’ve learnt most from reading other people’s code. If you like a library or web app, it really pays to spend some time analysing its source code. It can be really inspiring to look at how other people solve problems; it can give you pointers on how to structure your own code and it can teach you about technologies you didn’t even know about.
Also, don’t be afraid to read the specification of a standard every now and then to really learn about the technologies you’re using.
Last month, some of the most creative web developers out there pushed the limits of touch events and multi-touch interaction in the February Dev Derby contest. After looking through the entries, our three expert judges (Franck Lecollinet, Guillaume Lecollinet, and, filling in for Craig Cook this month, yours truly) decided on three winners and two runners-up.
Not a contestant? There are other reasons to be excited. Most importantly, all of these demos are completely open-source, making them wonderful lessons in the exciting things you can do with multi-touch today.
Naming just a few winners is always difficult, but the task was especially hard this time around. Just about every demo I came across impressed me more than the last, and I know our other judges faced a similar dilemma. Our winners managed to take a relatively familiar form of interaction and create experiences that were more creative, more fun, and more visually stunning than we would have ever predicted. Congratulations to them and to everyone else who amazed us last month.