Mozilla

This is a re-post of an important post from David Humphrey, who has been running a series of experiments on top of Mozilla’s extensible platform, exploring multi-touch, sound, video, WebGL and all sorts of other goodies. It’s worth going through all of the demos below. You’ll find some stuff that will amaze and inspire you.

David’s work is important because it shows where the web is going, and where Mozilla is helping to take it. It’s not enough that we’re working on HTML5, which is nearly finished; we’re also trying to figure out what’s next. Mozilla’s platform, Gecko, is a huge part of why we’re able to experiment and learn as fast as we can, and that’s reflected in what’s possible here. It’s a web you can see, touch and interact with in new ways.

David’s post follows:

I’m working with an ever-growing group of web, audio, and Mozilla developers on a project to expose audio spectrum data to JavaScript from Firefox’s audio and video elements. Today we presented that work at WWW2010.

I’m in Raleigh, North Carolina, with Al MacDonald for the WWW2010 conference. We’re here to present our work on exposing audio data in the browser. Over the past month Corban, Charles, and a bunch of other friends have been working with us to refine the API and get new types of demos ready. We ended up with 11 demos, some of which I’ve shown here before. Here are the others.

The first was done by Jacob Seidelin, and shows many cool 2D visualizations of audio using our API. You can see the live version on his site, or check out this video:

The second and third demos were done by Charles Cliffe, and show 3D visualizations using WebGL and his CubicVR engine. These also show off his JavaScript beat detection code. Is JavaScript fast enough to do real-time analysis of audio and synchronized 3D graphics? Yes, yes it is. The live versions are here and here, and here are some videos:

The fourth demo was done by Corban Brook and shows how audio data can be mixed live using script. Here he mutes the main audio, plays it, passes the data through a low-pass filter written in JavaScript, then dumps the modified frames into a second audio element to be played. It’s a technique we need to apply more widely, as it holds a lot of potential. Here’s the live version, and here’s a video (check out his updated JavaScript synthesizer, which we also presented):
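The filtering step itself needs no browser API at all: it is just a loop over sample frames. Here is a minimal sketch of a one-pole low-pass filter in the spirit of the demo described above; the function name and smoothing constant are illustrative, not taken from Corban’s actual code.

```javascript
// One-pole low-pass filter: each output sample moves a fraction
// (alpha) of the way from the previous output toward the new input.
// Smaller alpha means heavier smoothing, i.e. a lower cutoff.
function lowPass(samples, alpha) {
  const out = new Float32Array(samples.length);
  let prev = 0;
  for (let i = 0; i < samples.length; i++) {
    prev = prev + alpha * (samples[i] - prev);
    out[i] = prev;
  }
  return out;
}
```

In a setup like the demo’s, frames arriving from the first audio element would be run through a function like this before being written into the second element for playback.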

The fifth and sixth demos were done by Al (with the help of many). When I was last in Boston, for the Processing.js meetup at Bocoup, we met with Doug Schepers from the W3C. He loved our stuff, and was talking to us about ideas that would be cool to build. He pulled out his iPhone and showed us Brian Eno’s Bloom audio app. “It would be cool to do this in the browser.” Yeah, it is cool, and here it is, written in a few hundred lines of JavaScript and Processing.js (video 1, video 2):

This demo also showcases the awesome work of Felipe Gomes, who has a patch to add multi-touch DOM events to Firefox. The method we’ve used here can be taken a lot further. Imagine being able to connect multiple browsers together for collaborative music creation, layering other audio underneath, mixing fragments vs. just oscillators, etc. We built this one in a week, and the web is capable of a lot more.
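“Layering other audio underneath” is equally simple at the sample level: a mix is just a per-sample combination of buffers, clamped to the valid range. A hedged sketch (the function name and averaging strategy are illustrative, not from any of the demos):

```javascript
// Mix two equal-length sample buffers by averaging each pair of
// samples, then clamping the result to the valid range [-1, 1].
function mix(a, b) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    const v = (a[i] + b[i]) / 2;
    out[i] = Math.max(-1, Math.min(1, v));
  }
  return out;
}
```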

One of the main points of our talk was to emphasize that what we’re talking about here isn’t just a concept, and it isn’t some far away future. This is real code, running in a real browser, and it’s all being done in HTML5 and JavaScript. The web is fast enough to do real-time audio processing now, and powerful and expressive enough to create music. And the community of digital music and audio hackers, visualizers, and artists is hungry for it: so hungry that they are seeking us out, downloading our hacked builds and creating gorgeous web audio applications.
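To make “expressive enough to create music” concrete: synthesizing sound in JavaScript is just filling an array with sample values. A minimal sketch (the function is illustrative; in the experimental builds the resulting frames would then be handed to an audio element for playback):

```javascript
// Fill a buffer with one block of a sine tone.
// freq is in Hz; sampleRate is in samples per second.
function sineBlock(freq, sampleRate, length) {
  const samples = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    samples[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return samples;
}
```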

We want to keep going, and we need help. We need help from those within Mozilla, the W3C, and other browsers to get this stuff into shipping browsers. We need the audio, digital music, accessibility, and web communities to come together in order to help us build JavaScript audio libraries and more sample applications. Yesterday Joe Hewitt was talking on Twitter about how web browser vendors need to experiment more with non-standard APIs. I couldn’t agree more, and here’s a chance for people to put their money where their mouth is. Let’s make audio a scriptable part of the open web.

I’m currently creating new builds of our updated patch for Firefox, and will post links to them here when I’m done. You can read more about the technical details of our work here, and get involved in the bug here. You can talk more with me on IRC in the processing.js channel (I’m humph on moznet), or reach me on Twitter (@humphd) or by email. One way or another, get in touch so you can help us push this forward.

12 comments

Comments are now closed.

  1. discoleo wrote on April 30th, 2010 at 12:58:

    Speech Recognition? We want it!

  2. Andy wrote on May 1st, 2010 at 03:12:

    Great stuff coming!

  3. John Nash wrote on May 3rd, 2010 at 17:48:

    It’s a bit odd to see a post showing the benefits of proposed changes to the audio element but none of the videos using the HTML 5 video element…

  4. guapo wrote on May 4th, 2010 at 17:20:

    While it’s very technical these days, the concept certainly isn’t new. Those of you who remember the Joshua Light Show at the Fillmore East in the late 60s & early 70s won’t be very impressed.

    For those who weren’t there, it was the same thing, images behind music. The music was live, provided by people such as the Jefferson Airplane, Jimi Hendrix, The Moody Blues & a host of others.

  5. QOAL wrote on May 5th, 2010 at 06:13:

    If the API exposed the spectrum and oscilloscope data in the same way as AVS does, then this would be really sweet.

    Having a load of event hoops and such to jump through is great for certain situations, but for music visualisation like these demos it just slows the coder down.

    From the AVS help: (AVS is a winamp plugin btw)
    getosc(band,width,channel) = returns waveform data centered at ‘band’, (0..1), sampled ‘width’ (0..1) wide.
    ‘channel’ can be: 0=center, 1=left, 2=right. return value is (-1..1)

    getspec(band,width,channel) = returns spectrum data centered at ‘band’, (0..1), sampled ‘width’ (0..1) wide.
    ‘channel’ can be: 0=center, 1=left, 2=right. return value is (0..1)

    Having the band as a float avoids people having to know the number of bands the audio file has, which makes things simpler – of course having the ability to get the audio as currently proposed has benefits too.

    If it was something like:

    snd = new Audio("blah.ogg");
    function render() {
      for (var i = 0; i <= 1; i += 0.01) {
        // Here getspec would return the current audio data for whatever is
        // playing; if nothing is playing, 0 is returned.
        var specHeight = snd.getspec(i, 0, 0);
        // draw/resize something
      }
    }

    I could be out of touch with this comment but there we go, I've said it anyway.

  6. Kenneth Arnold wrote on May 7th, 2010 at 08:50:

    I’ve designed a music comp app and I’d really like to build it in the browser — but I need recording, tightly synchronized with playback. Is that possible?

  7. carl wrote on May 10th, 2010 at 12:46:

    That is amazing indeed! And yes: “Speech Recognition? We want it!” but this would mean the next revolution…

  8. Pingback from Firefox, YouTube and WebM ✩ Mozilla Hacks – the Web developer blog on May 19th, 2010 at 09:24:

    [...] VP8 is one of those pieces, HTML5 is another. If you watch this weblog, you can start to see those other pieces starting to emerge as well. The web is creeping into more and more technologies, with Firefox leading the way. We intend to [...]

  9. Nicholas Bieber wrote on May 24th, 2010 at 00:45:

    I’ve been having a little play since I read about it on CDM… I only know a tiny little bit about audio programming from mucking around with the JUCE library…

    I managed to pull in a drum loop (amen!) as an *.ogg, and buffer it in memory and loop it back. It was choppy, but I can see how this could be fun!

  10. Pingback from Firefox, YouTube and WebM ✩ Mozilla Hacks blog (Korean translation) on June 13th, 2010 at 09:25:

    [...] HTML5 is another piece. If you watch this weblog, you can start to see those other pieces starting to emerge as well. The web is creeping into more and more technologies, with Firefox leading the way [...]

  11. F1LT3R wrote on September 8th, 2010 at 15:33:

    If you want to learn how to get started with the Firefox interactive audio API, you can watch these 2-minute video tutorials…

    Reading Audio in JavaScript
    http://bit.ly/b05vvO

    Writing Audio from JavaScript
    http://bit.ly/9OU2va

  12. Thomas Thelliez wrote on October 10th, 2011 at 06:47:

    Maybe you already know, but there is a very nice online service called eenox for creating CSS3 animations and interactive HTML5 web pages for mobiles, smartphones and computers. You can focus on design and create great web documents even if you are not a developer.

    If you are interested, here is the URL: http://eenox.net/
