Audio Articles



  1. Birdsongs, Musique Concrète, and the Web Audio API

    In January 2015, my friend and collaborator Brian Belet and I presented Oiseaux de Même — an audio soundscape app created from recordings of birds — at the first Web Audio Conference. In this post I’d like to describe my experience of implementing this app using the Web Audio API, Twitter Bootstrap, Node.js, and REST APIs.

    Screenshot showing Birds of a Feather, a soundscape created with field recordings of birds that are being seen in your vicinity.


    What is it? Musique Concrète and citizen science

    We wanted to create a web-based Musique Concrète, building an artistic sound experience by processing field recordings. We decided to use xeno-canto — a library of over 200,000 recordings of 9,000 different bird species — as our source of recordings. Almost all the recordings are licensed under Creative Commons by their generous recordists. We select recordings from this library based on data from eBird, a database of tens of millions of bird sightings contributed by bird watchers everywhere. By using the Geolocation API to retrieve eBird sightings near the listener’s location, our soundscape can consist of recordings of bird species that bird watchers have reported recently nearby — each user gets a personalized soundscape that changes daily.
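    As a rough sketch of that first step (the REST endpoint below is a hypothetical stand-in, not eBird’s actual API), the Geolocation API gives us coordinates we can use to look up nearby sightings:

    navigator.geolocation.getCurrentPosition(function (position) {
      var lat = position.coords.latitude;
      var lng = position.coords.longitude;
      // hypothetical app endpoint that proxies the eBird sightings data
      fetch('/api/recent-sightings?lat=' + lat + '&lng=' + lng)
        .then(function (response) { return response.json(); })
        .then(function (sightings) {
          console.log('Species reported nearby:', (s) { return s.species; }));
        });
    }, function (error) {
      console.warn('Geolocation failed: ' + error.message);
    });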

    Use of the Web Audio API

    We use the browser’s Web Audio API to play back the sounds from xeno-canto. The Web Audio API allows developers to play back, record, analyze, and process sound by creating AudioNodes that are connected together, like an old modular synthesizer.

    Our soundscape is implemented using four AudioBufferSource nodes, each of which plays a field recording in a loop. These loops are placed in a stereo field using Panner nodes, and mixed together before being sent to the listener’s speakers or headphones.
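    For illustration, here is roughly what one loop’s signal path looks like in Web Audio terms (a simplified sketch with made-up names, not the app’s actual code):

    var context = new AudioContext();

    function playLoop(decodedBuffer) {
      var source = context.createBufferSource();
      source.buffer = decodedBuffer;
      source.loop = true;                    // each field recording plays in a loop

      var gain = context.createGain();
      var panner = context.createPanner();

      source.connect(gain);                  // source -> gain -> panner -> speakers
      gain.connect(panner);
      panner.connect(context.destination);   // the four loops are mixed here

      source.start(0);
      return { source: source, gain: gain, panner: panner };
    }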


    After all the sounds have loaded and begin playing, we offer users several controls for manipulating the sounds as they play:

    • The Pan button randomizes the spatial location of the sound in 3D space.
    • The Rate button randomizes the playback rate.
    • The Reverse button reverses the direction of sound playback.
    • Finally, the Share button lets you capture the state of the soundscape and save that snapshot for later.

    The controls described above are implemented as typical JavaScript event handlers. When the Pan button is pressed, for example, we run this handler:

    // sets the X,Y,Z position of the Panner to random values between -1 and +1
    BirdSongPlayer.prototype.randomizePanner = function() {
      // NOTE: x = -1 is LEFT
      this.panPosition = { x: 2 * Math.random() - 1, y: 2 * Math.random() - 1, z: 2 * Math.random() - 1 };
      this.panner.setPosition(this.panPosition.x, this.panPosition.y, this.panPosition.z);
    };

    Some parts of the Web Audio API are write-only

    I had a few minor issues where I had to work around shortcomings in the Web Audio API. Other authors have already documented similar experiences; I’ll summarize mine briefly here:

    • Can’t read Panner position: In the event handler for the Share button, I want to retrieve and store the current Audio Buffer playback rate and Panner position. However, the current Panner node does not allow retrieval of the position after setting it. Hence, I store the new Panner position in an instance variable in addition to calling setPosition().

      This has had a minimal impact on my code so far. My longer-term concern is that I’d rather store the position in the Panner and retrieve it from there, instead of storing a copy elsewhere. In my experience, multiple copies of the same information becomes a readability and maintainability problem as code grows bigger and more complex.

    • Can’t read AudioBuffer’s playbackRate: The Rate button described above calls linearRampToValueAtTime() on the playbackRate AudioParam. As far as I can tell, AudioParams don’t let me retrieve their values after calling linearRampToValueAtTime(), so I’m obliged to keep a duplicate copy of this value in my JS object (a sketch of this workaround follows the list).
    • Can’t read AudioBuffer playback position: I’d like to show the user the current playback position for each of my sound loops, but the API doesn’t provide this information. Could I compute it myself? Unfortunately, after a few iterations of ramping an AudioBuffer’s playbackRate between random values, it is very difficult to compute the current playback position within the buffer. Unlike some API users, I don’t need a highly accurate position, I just want to visualize for my users when the current sound loop restarts.
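    Here is a brief sketch of the shadow-copy workaround mentioned above (the method and property names are illustrative, not the app’s actual code):

    BirdSongPlayer.prototype.randomizeRate = function() {
      // pick an arbitrary new rate (the range here is only for illustration)
      var newRate = 0.5 + Math.random() * 1.5;
      // keep a shadow copy, because the AudioParam can't be read back after ramping
      this.currentRate = newRate;
      this.source.playbackRate.linearRampToValueAtTime(newRate, this.audioContext.currentTime + 2);
    };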

    Debugging with the Web Audio inspector

    Firefox’s Web Audio inspector shows how Audio Nodes are connected to one another.


    I had great success using Firefox’s Web Audio inspector to watch my Audio Nodes being created and interconnected as my code runs.

    In the screenshot above, you can see the four AudioBufferSources, each feeding through a GainNode and PannerNode before being summed by an AudioDestination. Note that each recording is also connected to an AnalyserNode; the Analysers are used to create the scrolling amplitude graphs for each loop.

    Visualizing sound loops

    As the soundscape evolves, users often want to know which bird species is responsible for a particular sound they hear in the mix. We use a scrolling visualization for each loop that shows instantaneous amplitude, creating distinctive shapes you can correlate with what you’re hearing. The visualization uses the Analyser node to perform a fast Fourier transform (FFT) on the sound, which yields the amplitude of the sound at every frequency. We compute the average of all those amplitudes, and then draw that amplitude at the right edge of a Canvas. As the contents of the Canvas shift sideways on every animation frame, the result is a horizontally scrolling amplitude graph.

    BirdSongPlayer.prototype.initializeVUMeter = function() {
      // set up VU meter
      var myAnalyser = this.analyser;
      var volumeMeterCanvas = $(this.playerSelector).find('canvas')[0];
      var graphicsContext = volumeMeterCanvas.getContext('2d');
      var previousVolume = 0;
      requestAnimationFrame(function vuMeter() {
        // get the average, bincount is fftsize / 2
        var array = new Uint8Array(myAnalyser.frequencyBinCount);
        myAnalyser.getByteFrequencyData(array);
        var average = getAverageVolume(array);
        average = Math.max(Math.min(average, 128), 0);
        // draw the rightmost line in black right before shifting
        graphicsContext.fillStyle = 'rgb(0,0,0)';
        graphicsContext.fillRect(258, 128 - previousVolume, 2, previousVolume);
        // shift the drawing over one pixel
        graphicsContext.drawImage(volumeMeterCanvas, -1, 0);
        // clear the rightmost column state
        graphicsContext.fillStyle = 'rgb(245,245,245)';
        graphicsContext.fillRect(259, 0, 1, 130);
        // set the fill style for the last line (matches bootstrap button)
        graphicsContext.fillStyle = '#5BC0DE';
        graphicsContext.fillRect(258, 128 - average, 2, average);
        previousVolume = average;
        // keep the meter scrolling on the next animation frame
        requestAnimationFrame(vuMeter);
      });
    };
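    The getAverageVolume() helper isn’t shown in the post; a minimal version could look like this (an illustrative sketch, not the app’s actual code):

    function getAverageVolume(array) {
      // average the byte frequency bins reported by the AnalyserNode
      var sum = 0;
      for (var i = 0; i < array.length; i++) {
        sum += array[i];
      }
      return array.length ? sum / array.length : 0;
    }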

    What’s next

    I’m continuing to work on cleaning up my JavaScript code for this project. I have several user interface improvements suggested by my Mozilla colleagues that I’d like to try. And Prof. Belet and I are considering what other sources of geotagged sounds we can use to make more soundscapes. In the meantime, please try Oiseaux de Même for yourself and let us know what you think!

  2. What’s new in Web Audio


    It’s been a while since we said anything on Hacks about the Web Audio API. However, with Firefox 37/38 hitting our Developer Edition/Nightly browser channels, there are some interesting new features to talk about!

    This article presents you with some new Web Audio tricks to watch out for, such as the new StereoPannerNode, promise-based methods, and more.

    Simple stereo panning

    Firefox 37 introduces the StereoPannerNode interface, which allows you to add a stereo panning effect to an audio source simply and easily. It takes a single property: pan—an a-rate AudioParam that can accept numeric values between -1.0 (full left channel pan) and 1.0 (full right channel pan).

    But don’t we already have a PannerNode?

    You may have already used the older PannerNode interface, which allows you to position sounds in 3D. Connecting a sound source to a PannerNode causes it to be “spatialised”, meaning that it is placed into a 3D space and you can then specify the position of the listener inside. The browser then figures out how to make the sources sound, applying panning and doppler shift effects, and other nice 3D “artifacts” if the sounds are moving over time, etc:

    var audioContext = new AudioContext();
    var pannerNode = audioContext.createPanner();
    // The listener is 100 units to the right of the 3D origin
    audioContext.listener.setPosition(100, 0, 0);
    // The panner is in the 3D origin
    pannerNode.setPosition(0, 0, 0);

    This works well with WebGL-based games as both environments use similar units for positioning—an array of x, y, z values. So you could easily update the position, orientation, and velocity of the PannerNodes as you update the position of the entities in your 3D scene.
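    For example, keeping a PannerNode in sync with a game entity on every frame could look something like this (the entity structure is assumed; the setters shown are the PannerNode API of the time):

    function syncPannerToEntity(panner, entity) {
      panner.setPosition(entity.position.x, entity.position.y, entity.position.z);
      panner.setOrientation(entity.forward.x, entity.forward.y, entity.forward.z);
      panner.setVelocity(entity.velocity.x, entity.velocity.y, entity.velocity.z);
    }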

    But what if you are just building a conventional music player where the songs are already stereo tracks, and you actually don’t care at all about 3D? You have to go through a more complicated setup process than should be necessary, and it can also be computationally more expensive. With the increased usage of mobile devices, every operation you don’t perform is a bit more battery life you save, and users of your website will love you for it.

    Enter StereoPannerNode

    StereoPannerNode is a much better solution for simple stereo use cases, as described above. You don’t need to care about the listener’s position; you just need to connect source nodes that you want to spatialise to a StereoPannerNode instance, then use the pan parameter.

    To use a stereo panner, first create a StereoPannerNode using createStereoPanner(), and then connect it to your audio source. For example:

    var audioCtx = new window.AudioContext();
    // You can use any type of source
    var source = audioCtx.createMediaElementSource(myAudio);
    var panNode = audioCtx.createStereoPanner();
    source.connect(panNode);
    panNode.connect(audioCtx.destination);

    To change the amount of panning applied, you just update the pan property value:

    panNode.pan.value = 0.5; // places the sound halfway to the right
    panNode.pan.value = 0.0; // centers it
    panNode.pan.value = -0.5; // places the sound halfway to the left

    You can see a complete example in the live demo accompanying this article.

    Also, since pan is an a-rate AudioParam, you can design nice smooth curves using parameter automation, and the values will be updated per sample. Trying to make this kind of change over time by updating the value across multiple requestAnimationFrame calls would sound weird and unnatural. And you can’t automate PannerNode positions either.

    For example, this is how you could set up a panning transition from left to right that lasts two seconds:

    panNode.pan.setValueAtTime(-1, audioContext.currentTime);
    panNode.pan.linearRampToValueAtTime(1, audioContext.currentTime + 2);

    The browser will take care of updating the pan value for you. And now, as of recently, you can also visualise these curves using the Firefox Devtools Web Audio Editor.

    Detecting when StereoPannerNode is available

    It might be the case that the Web Audio implementation you’re using has not implemented this type of node yet. (At the time of this writing, it is supported in Firefox 37 and Chrome 42 only.) If you try to use StereoPannerNode in these cases, you’re going to generate a beautiful undefined is not a function error instead.

    To make sure StereoPannerNodes are available, just check whether the createStereoPanner() method exists in your AudioContext:

    if (audioContext.createStereoPanner) {
        // StereoPannerNode is supported!
    }

    If it doesn’t, you will need to revert back to the older PannerNode.
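    A rough fallback could hide the two cases behind one helper (the helper and the pan-to-position mapping below are assumptions for illustration, not part of the spec):

    function createPanControl(audioContext) {
      if (audioContext.createStereoPanner) {
        return audioContext.createStereoPanner();
      }
      // fall back to the older PannerNode and approximate a stereo pan
      var panner = audioContext.createPanner();
      panner.panningModel = 'equalpower';
      panner.setPan = function(pan) {
        // map pan in [-1, 1] onto an x/z position around the listener
        panner.setPosition(pan, 0, 1 - Math.abs(pan));
      };
      return panner;
    }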

    Changes to the default PannerNode panning algorithm

    The default panning algorithm type used in PannerNodes used to be HRTF, which is a high quality algorithm that rendered its output using a convolution with human-based data (thus it’s very realistic). However it is also very computationally expensive, requiring the processing to be run in additional threads to ensure smooth playback.

    Authors often don’t require such a high level of quality and just need something that is good enough, so the default PannerNode.panningModel is now equalpower, which is much cheaper to compute. If you want to go back to using the high quality panning algorithm instead, you just need to change the panningModel:

    pannerNodeInstance.panningModel = 'HRTF';

    Incidentally, a PannerNode using panningModel = 'equalpower' results in the same algorithm that StereoPannerNode uses.

    Promise-based methods

    Another interesting feature that has been recently added to the Web Audio spec is Promise-based versions of certain methods. These are OfflineAudioContext.startRendering() and AudioContext.decodeAudioData().

    The below sections show how the method calls look with and without Promises.


    Let’s suppose we want to generate a minute of audio at 44100 Hz. We’d first create the context:

    var offlineAudioContext = new OfflineAudioContext(2, 44100 * 60, 44100);

    Classic code

    offlineAudioContext.addEventListener('complete', function(e) {
        // rendering complete, results are at `e.renderedBuffer`
    });
    offlineAudioContext.startRendering();

    Promise-based code

    offlineAudioContext.startRendering().then(function(renderedBuffer) {
        // rendered results in `renderedBuffer`
    });


    Likewise, when decoding an audio track we would create the context first:

    var audioContext = new AudioContext();

    Classic code

    audioContext.decodeAudioData(data, function onSuccess(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    Promise-based code

    audioContext.decodeAudioData(data).then(function(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    In both cases the differences don’t seem major, but if you’re composing the results of promises sequentially or if you’re waiting on an event to complete before calling several other methods, promises are really helpful to avoid callback hell.
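    As a hedged sketch of what that composition can look like (the file name is hypothetical; the contexts are the ones created above), decoding and offline rendering chain together naturally:

    fetch('loop.ogg')
      .then(function(response) { return response.arrayBuffer(); })
      .then(function(data) { return audioContext.decodeAudioData(data); })
      .then(function(decodedBuffer) {
        var source = offlineAudioContext.createBufferSource();
        source.buffer = decodedBuffer;
        source.connect(offlineAudioContext.destination);
        source.start(0);
        return offlineAudioContext.startRendering();
      })
      .then(function(renderedBuffer) {
        console.log('Rendered ' + renderedBuffer.length + ' samples');
      });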

    Detecting support for Promise-based methods

    Again, you don’t want to get the dreaded undefined is not a function error message if the browser you’re running your code on doesn’t support these new versions of the methods.

    A quick way to check for support: look at the returned type of these calls. If they return a Promise, we’re in luck. If they don’t, we have to keep using the old methods:

    if((new OfflineAudioContext(1, 1, 44100)).startRendering() != undefined) {
        // Promise with startRendering is supported
    }

    if((new AudioContext()).decodeAudioData(new Uint8Array(1)) != undefined) {
        // Promise with decodeAudioData is supported
    }

    Audio workers

    Although the spec has not been finalised and they are not implemented in any browser yet, it is also worth mentioning Audio Workers, which — you’ve guessed it — are a specialised type of web worker for use by Web Audio code.

    Audio Workers will replace the almost-obsolete ScriptProcessorNode. Originally, ScriptProcessorNodes were the way to run your own custom processing inside the audio graph, but they actually run on the main thread, causing all sorts of problems, from audio glitches (if the main thread becomes stalled) to unresponsive UI code (if the ScriptProcessorNodes aren’t fast enough to process their data).

    The biggest feature of audio workers is that they run in their own separate thread, just like any other Worker. This ensures that audio processing is prioritised and we avoid sound glitches, which human ears are very sensitive to.

    There is an ongoing discussion on the w3c web audio list; if you are interested in this and other Web Audio developments, you should go check it out.

    Exciting times for audio on the Web!

  3. Videos and Firefox OS

    Before HTML5

    Those were dark times Harry, dark times – Rubeus Hagrid

    Before HTML5, displaying video on the Web required browser plugins, such as Flash.

    Luckily, Firefox OS supports HTML5 video, so we don’t need to rely on these older technologies.

    Video support on the Web

    Even though modern browsers support HTML5, the video formats they support vary:

    In summary, to support the most browsers with the fewest formats you need the MP4 and WebM video formats (Firefox prefers WebM).
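    For example, a page can offer both formats and let the browser pick whichever one it can decode (a minimal sketch; the file names are placeholders):

    var video = document.createElement('video');
    [{ type: 'video/webm', src: 'clip.webm' },
     { type: 'video/mp4',  src: 'clip.mp4'  }].forEach(function (fmt) {
      var source = document.createElement('source');
      source.type = fmt.type;
      source.src  = fmt.src;
      video.appendChild(source);
    });
    video.controls = true;
    document.body.appendChild(video);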

    Multiple sizes

    Now that you have seen what formats you can use, you need to decide on video resolutions, as desktop users on high speed wifi will expect better quality videos than mobile users on 3G.

    At Rormix we decided on 720p for desktop, 360p for mobile connections, and 180p specially for Firefox OS to reduce the cost in countries with high data charges.

    There are no hard and fast rules — it depends on who your market audience is.


    The best streaming solution would be to automatically serve the user different videos sizes depending on their connection status (adaptive streaming) but support for this technology is poor.

    HTTP live streaming works well on Apple devices, but has poor support on Android.

    At the time of writing, the most promising technology is MPEG DASH, which is an international standard.

    In summary, we are going to have to wait before we get an adaptive streaming technology that is widely accepted (Firefox does not support HLS or MPEG DASH).

    DIY Adaptive streaming

    In the absence of adaptive streaming we need to try to work out the best video quality to load at the outset. The following is a quick guide to help you decide:

    Wifi or 3G

    Using a certified Firefox OS app you can check to see if the user is on wifi or not.

    var lock    = navigator.mozSettings.createLock();
    var setting = lock.get('wifi.enabled');

    setting.onsuccess = function () {
      console.log('wifi.enabled: ' + setting.result);
    };

    setting.onerror = function () {
      console.warn('An error occurred: ' + setting.error);
    };

    There is some more information at the W3C Device API.

    Detecting screen size

    There is no point sending a 720p video to a user with a screen smaller than 720p. There are many ways to get the different bounds of a user’s screen; innerWidth and width allow you to get a good idea:

    function getVidSize() {
      //Get the width of the phone (rotation independent)
      var min = Math.min($(window).innerHeight(), $(window).innerWidth());
      //Return a video size we have
      if(min < 320)      return '180';
      else if(min < 550) return '360';
      else               return '720';
    }

    Determining internet speed

    It is difficult to get an accurate reading of a user’s internet speed using web technologies; the usual approach involves loading a large image onto the user’s device and timing it, which has the disadvantage of sending extra data to the user. Services exist for this, but they still require data downloads to the user’s device. (Stack Overflow has some more options.)

    You can be slightly more clever by using HTML5, and checking the time it takes between the user starting the video and a set amount of the video loading. This way we do not need to load any extra data on the user’s device. A quick VideoJS example follows:

    var global_speedcount = 0;
    var global_video = null;
    global_video = videojs("video", {}, function(){
      //Set up video sources
    });

    global_video.on('play', function(){
      //User has clicked play
      global_speedcount = new Date().getTime();
    });

    function timer(){
      var diff = new Date().getTime() - global_speedcount;
      //Remove this handler as it is run multiple times per second!'timeupdate', timer);
    }

    global_video.on('timeupdate', timer);

    This code starts timing when the user clicks play; once the browser begins playing the video, the timeupdate event fires and we can measure how long the video took to start. You can also use this function to detect whether a lot of buffering is happening.
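    As a hedged illustration, the measured startup time could then feed a simple quality decision like the following (the thresholds are made up):

    function chooseQuality(startupMs, screenSize) {
      if (startupMs > 4000) return '180';                          // very slow start: smallest rendition
      if (startupMs > 1500 && screenSize === '720') return '360';  // slow-ish start: step down one size
      return screenSize;                                           // fast start: trust the screen-based size
    }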

    Detect high resolution devices

    One final thing to determine is whether or not a user has a high pixel density screen. In this case even if they have a small screen it can still have a large number of pixels (and therefore require a higher resolution video).

    Modernizr has a plugin for detecting hi-res screens.

    if (Modernizr.highresdisplay) {
      alert('Your device has a high resolution screen');
    }
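    Without the plugin, window.devicePixelRatio offers a quick (if rougher) signal for the same decision:

    if ((window.devicePixelRatio || 1) > 1.5) {
      // treat this as a hi-res screen and consider serving the next video size up
      console.log('High resolution screen detected');
    }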

    WebP Thumbnails

    Not to get embroiled in an argument, but at Rormix we have seen an average decrease of 30% in file size (WebP vs JPEG) with no loss of quality (in some cases up to 50% less). And in countries with expensive data plans, the less data the better.

    We encode all of our thumbnails in multiple resolutions of WebP and send them to every device that supports them to reduce the amount of data being sent to the user.
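    One quick client-side check (an assumption on our part, not Rormix’s published code) is to ask the canvas whether it can encode WebP, which in practice tracks decoding support closely:

    function supportsWebP() {
      var canvas = document.createElement('canvas');
      canvas.width = canvas.height = 1;
      // browsers that can't encode WebP fall back to a PNG data URI here
      return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
    }

    var thumbExtension = supportsWebP() ? '.webp' : '.jpg';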

    Mobile considerations

    If you are playing HTML5 videos on mobile devices, behavior differs between platforms. On iOS, video automatically goes full screen on iPhones/iPods, but not on tablets.

    Some libraries such as VideoJS have removed the controls from mobile devices until their stability increases.

    Useful libraries

    There are a few useful HTML5 video libraries:

    Mozilla links

    Mozilla has some great articles on web video:

    Other useful Links

  4. Blend4Web: the Open Source Solution for Online 3D

    Half a year ago, Blend4Web was first released publicly. In this article I’ll show what Blend4Web is, how it has evolved, and how it can be used for web development.

    What Is Blend4Web?

    In short, Blend4Web is an open source framework for creating 3D web applications. It uses Blender – the popular open source 3D modeling suite – as the primary authoring tool. 3D graphics are rendered by means of WebGL, which is also an open standard technology. The two main keywords here – Blender and Web(GL) – explain the purpose of this engine perfectly.

    The full source code of Blend4Web together with some usage examples is available under GPLv3 on GitHub (there is also a commercial licensing option).

    The 3D Web

    On June 2nd, Apple presented their new operating systems – OS X Yosemite and iOS 8 – both featuring WebGL support in the Safari browser. That marked the end of a five-year cycle during which WebGL has been evolving, starting with the first unstable browser builds (does anybody remember Firefox 3.7 alpha?). Now, all the major browsers on all desktop and mobile systems support this open standard for rendering 3D graphics, everywhere, without any plugins.

    That was a long and difficult road, along which Blend4Web development has been following WebGL development as a shadow. Broken rendering, tab crashes, security “warnings” from some big guys, unavailability in public browser builds, all sorts of fears, uncertainty and doubts. All this didn’t matter, because we have the opportunity to do 3D graphics (and sound) in browsers!


    The first Blender 2.5x builds appeared in summer 2010. At the time we, the programming geeks, were pushed to learn the basics of 3D modeling by the beautiful Sintel from the open source movie of the same name. After choosing Blender, we could be as independent as possible, with a full open source pipeline organized on a Linux platform. Blender gave us the power to make our own 3D scenes, and later helped to attract talented artists from its wonderful community to join us.

    Blend4Web Evolution in Demos

    Our demo scenes matured together with the development of Blend4Web. The first one was a quite low-poly and almost non-interactive demo called The Island. It was created in 2011 and polished a bit before the public release. In this demo we introduced our Blender-based pipeline in which all the assets are stored in separate files and are linked into the main file for level design and further exporting (for this reason some of Blend4Web users call it “free Unity Pro”).

    In Fashion Show we developed cloth animation techniques. Some post-processing effects, dynamic reflection and particle systems were added later. After Blend4Web went public, we summarized these cloth-related tricks in one of our tutorials.

    The Farm is a huge scene (by browser standards): over 25 hectares of land, buildings, animated animals and foliage. We added some gamedev elements to it, including first-person walking, interacting with objects, and driving a vehicle. The demo features spatial audio (via Web Audio) and physics (via Bullet and Asm.js). The Freedesktop folks tried it as a benchmark while testing the Mesa drivers (and got “massive crashes” :).

    We also tried some visualization and created Nature Morte. In this scene we used carefully crafted textures and materials, as well as post-processing effects to improve realism. However, the technology used for this demo was quite simple and old-school, as we had no support for visual shader editing yet.

    Things changed when Blender’s node materials became available to our artists. They created over 40 different materials for the Sports Car model: chromed metal, painted metal, glass, rubber, leather, etc.

    In our latest release we went even further by adding support for user-controlled animation. Now interactivity can be implemented without any coding. To demonstrate the new possibilities, we presented an interactive infographic of a light helicopter.

    Among the other possible applications of this simple yet effective tool (called NLA Script) we can list the following: interactive 3D web design, product promotions, learning materials, cartoons with the ability to choose between different story lines, point-and-click games and any other applications previously created with Flash.

    Using Blend4Web

    It is very easy to start using Blend4Web – just download and install the Blender addon as shown in this video tutorial:

    The most wonderful thing is that your Blender scene can be exported into a self-contained HTML file that can be emailed, uploaded to your own website or to the cloud – in short, shared however you like. This freedom is a fundamental difference from numerous 3D web publishing services, as we don’t lock our users into our technology by any means.

    For those who want to create highly interactive 3D web apps we offer the SDK. Some notable examples of what is possible with the Blend4Web API are demonstrated in our programming tutorials, ranging from web design to games.

    Programming 3D web apps with Blend4Web is not much harder than building average RIAs. Unlike some other WebGL frameworks in the wild we tried to offload all graphics, animation and audio tasks to respective professionals. The programmer just loads the scene…

    var m_data = require("data");
    m_data.load("example.json", load_cb);

    …and then writes the logic which triggers the 3D scene changes that are “hard-coded” by the artists, e.g. plays the animation for the user-selected object:

    var m_scenes = require("scenes");
    var m_anim = require("animation");
    var myobj = m_scenes.pick_object(event.clientX, event.clientY);

    As you can see, the APIs are structured in a CommonJS way, which we believe is important for creating compact and fast web apps.

    The Future

    There are many possible directions in which the Internet and IT may go, but there is no doubt that the strong and steady development of the 3D Web is already underway. We expect more and more users to change their expectations about how web content should look and feel. We’re going to help web developers meet these demands, with plans to improve usability and performance and to implement new and interesting graphics effects.

    We also follow the development of WebGL 2.0 (thanks, Mozilla, for your work) and expect to create even more nice things on top of it.

    Stay Tuned

    Read our blog, join us on Twitter, Google+, Facebook and Reddit, watch the demos and tutorials on our YouTube channel, fork Blend4Web at GitHub.

  5. Introducing the Web Audio Editor in Firefox Developer Tools

    In Firefox 32, the Web Audio Editor joins the Shader Editor and Canvas Debugger in Firefox Developer Tools for debugging media-rich content on the web. When developing HTML5 games or fun synthesizers using web audio, the Web Audio Editor assists in visualizing and modifying all of the audio nodes within the web audio AudioContext.

    Visualizing the Audio Context

    When working with the Web Audio API‘s modular routing, it can be difficult to work out how all of the audio nodes are connected just by listening to the output and reading the imperative code that creates them. With the Web Audio Editor, all of the AudioNodes are rendered in a directed graph, illustrating the hierarchy and connections of all audio nodes. With the rendered graph, a developer can ensure that all of the nodes are connected in the way they expect. This can be especially useful when the context becomes complex, with one network of nodes dedicated to manipulating audio and another to analyzing the data, and we’ve seen some pretty impressive uses of Web Audio resulting in such graphs!

    To enable the Web Audio Editor, open up the options in the Developer Tools, and check the “Web Audio Editor” option. Once enabled, open up the tool and reload the page so that all web audio activity can be monitored by the tool. When new audio nodes are created, or when nodes are connected and disconnected from one another, the graph will update with the latest representation of the context.

    Modifying AudioNode Properties

    Once the graph is rendered, individual audio nodes can be inspected. Clicking on an AudioNode in the graph opens up the audio node inspector, where AudioParams and other properties on the node can be viewed and modified.

    Future Work

    This is just our first shippable release of the Web Audio Editor, and we are looking forward to making this tool more powerful for all of our audio developers. Planned improvements include:

    • Visual feedback for nodes that are playing, and time/frequency domain visualizations.
    • Ability to create, connect and disconnect audio nodes from the editor.
    • Tools for debugging onaudioprocess events and audio glitches.
    • Display additional AudioContext information and support multiple contexts.
    • Modify more than just primitives in the node inspector, like adding an AudioBuffer.

    We have many dream features and ideas that we’re excited about, and you can view all open bugs for the Web Audio Editor or submit new bugs. Be sure to check out the MDN documentation on the Web Audio Editor and we would also love feedback and thoughts at our UserVoice feedback channel and on Twitter @firefoxdevtools.

  6. Easy audio capture with the MediaRecorder API

    The MediaRecorder API is a simple construct, used together with Navigator.getUserMedia(), which provides an easy way of recording media streams from the user’s input devices and instantly using them in web apps. This article provides a basic guide on how to use MediaRecorder, which is supported in Firefox Desktop/Mobile 25 and Firefox OS 2.0.

    What other options are available?

    Capturing media isn’t quite as simple as you’d think on Firefox OS. Using getUserMedia() alone yields raw PCM data, which is fine for a stream, but then if you want to capture some of the audio or video you start having to perform manual encoding operations on the PCM data, which can get complex very quickly.

    Then you’ve got the Camera API on Firefox OS, which until recently was a certified API but has now been downgraded to privileged.

    Web activities are also available to allow you to grab media via other applications (such as Camera).

    The only trouble with these last two options is that they capture video with an audio track, and you would still have to separate the audio if you just wanted an audio track. MediaRecorder provides an easy way to capture just audio (with video coming later; it is just audio for now).

    A sample application: Web Dictaphone

    An image of the Web dictaphone sample app - a sine wave sound visualization, then record and stop buttons, then an audio jukebox of recorded tracks that can be played back.

    To demonstrate basic usage of the MediaRecorder API, we have built a web-based dictaphone. It allows you to record snippets of audio and then play them back. It even gives you a visualization of your device’s sound input, using the Web Audio API. We’ll concentrate on the recording and playback functionality for this article.

    You can see this demo running live, or grab the source code on Github (direct zip file download.)

    CSS goodies

    The HTML is pretty simple in this app, so we won’t go through it here; there are a couple of slightly more interesting bits of CSS worth mentioning, however, so we’ll discuss them below. If you are not interested in CSS and want to get straight to the JavaScript, skip to the “Basic app setup” section.

    Keeping the interface constrained to the viewport, regardless of device height, with calc()

    The calc function is one of those useful little utility features that’s cropped up in CSS that doesn’t look like much initially, but soon starts to make you think “Wow, why didn’t we have this before? Why was CSS2 layout so awkward?” It allows you to do a calculation to determine the computed value of a CSS unit, mixing different units in the process.

    For example, in Web Dictaphone we have three main UI areas, stacked vertically. We wanted to give the first two (the header and the controls) fixed heights:

    header {
      height: 70px;
    }

    .main-controls {
      padding-bottom: 0.7rem;
      height: 170px;
    }

    However, we wanted to make the third area (which contains the recorded samples you can play back) take up whatever space is left, regardless of the device height. Flexbox could be the answer here, but it’s a bit overkill for such a simple layout. Instead, the problem was solved by making the third container’s height equal to 100% of the parent height, minus the heights and padding of the other two:

    .sound-clips {
      box-shadow: inset 0 3px 4px rgba(0,0,0,0.7);
      background-color: rgba(0,0,0,0.1);
      height: calc(100% - 240px - 0.7rem);
      overflow: scroll;
    }
    Note: calc() has good support across modern browsers too, even going back to Internet Explorer 9.

    Checkbox hack for showing/hiding

    This is fairly well documented already, but we thought we’d give a mention to the checkbox hack, which abuses the fact that you can click on the <label> of a checkbox to toggle it checked/unchecked. In Web Dictaphone this powers the Information screen, which is shown/hidden by clicking the question mark icon in the top right hand corner. First of all, we style the <label> how we want it, making sure that it has enough z-index to always sit above the other elements and therefore be focusable/clickable:

    label {
        font-family: 'NotoColorEmoji';
        font-size: 3rem;
        position: absolute;
        top: 2px;
        right: 3px;
        z-index: 5;
        cursor: pointer;
    }
    Then we hide the actual checkbox, because we don’t want it cluttering up our UI:

    input[type=checkbox] {
       position: absolute;
       top: -100px;
    }

    Next, we style the Information screen (wrapped in an <aside> element) how we want it, give it fixed position so that it doesn’t appear in the layout flow and affect the main UI, transform it to the position we want it to sit in by default, and give it a transition for smooth showing/hiding:

    aside {
       position: fixed;
       top: 0;
       left: 0;
       text-shadow: 1px 1px 1px black;
       width: 100%;
       height: 100%;
       transform: translateX(100%);
       transition: 0.6s all;
       background-color: #999;
       background-image: linear-gradient(to top right, rgba(0,0,0,0), rgba(0,0,0,0.5));
    }

    Last, we write a rule to say that when the checkbox is checked (when we click/focus the label), the adjacent <aside> element will have its horizontal translation value changed and transition smoothly into view:

    input[type=checkbox]:checked ~ aside {
      transform: translateX(0);
    }

    Basic app setup

    To grab the media stream we want to capture, we use getUserMedia() (gUM for short). We then use the MediaRecorder API to record the stream, and output each recorded snippet into the source of a generated <audio> element so it can be played back.

    First, we’ll add in a forking mechanism to make gUM work, regardless of browser prefixes, and so that getting the app working on other browsers once they start supporting MediaRecorder will be easier in the future.

    navigator.getUserMedia = ( navigator.getUserMedia ||
                           navigator.webkitGetUserMedia ||
                           navigator.mozGetUserMedia ||
                           navigator.msGetUserMedia);

    Then we’ll declare some variables for the record and stop buttons, and the <article> that will contain the generated audio players:

    var record = document.querySelector('.record');
    var stop = document.querySelector('.stop');
    var soundClips = document.querySelector('.sound-clips');

    Finally for this section, we set up the basic gUM structure:

    if (navigator.getUserMedia) {
       console.log('getUserMedia supported.');
       navigator.getUserMedia (
          // constraints - only audio needed for this app
          {
             audio: true
          },

          // Success callback
          function(stream) {
             // ... the recording code below goes here
          },

          // Error callback
          function(err) {
             console.log('The following gUM error occurred: ' + err);
          }
       );
    } else {
       console.log('getUserMedia not supported on your browser!');
    }
    The whole thing is wrapped in a test that checks whether gUM is supported before running anything else. Next, we call getUserMedia() and inside it define:

    • The constraints: Only audio is to be captured; MediaRecorder only supports audio currently anyway.
    • The success callback: This code is run once the gUM call has been completed successfully.
    • The error/failure callback: The code is run if the gUM call fails for whatever reason.

    Note: All of the code below is placed inside the gUM success callback.

    Capturing the media stream

    Once gUM has grabbed a media stream successfully, you create a new Media Recorder instance with the MediaRecorder() constructor and pass it the stream directly. This is your entry point into using the MediaRecorder API — the stream is now ready to be captured straight into a Blob, in the default encoding format of your browser.

    var mediaRecorder = new MediaRecorder(stream);

    There are a series of methods available in the MediaRecorder interface that allow you to control recording of the media stream; in Web Dictaphone we just make use of two. First of all, MediaRecorder.start() is used to start recording the stream into a Blob once the record button is pressed:

    record.onclick = function() {
      mediaRecorder.start();
      console.log("recorder started"); = "red"; = "black";
    }

    When the MediaRecorder is recording, the MediaRecorder.state property will return a value of “recording”.

    Second, we use the MediaRecorder.stop() method to stop the recording when the stop button is pressed, and finalize the Blob ready for use somewhere else in our application.

    stop.onclick = function() {
      mediaRecorder.stop();
      console.log("recorder stopped"); = ""; = "";
    }

    When recording has been stopped, the state property returns a value of “inactive”.

    Note that there are other ways that a Blob can be finalized and ready for use:

    • If the media stream runs out (e.g. if you were grabbing a song track and the track ended), the Blob is finalized.
    • If the MediaRecorder.requestData() method is invoked, the Blob is finalized, but recording then continues in a new Blob.
    • If you include a timeslice parameter when invoking the start() method — for example start(10000) — then a new Blob will be finalized (and a new recording started) each time that number of milliseconds has passed; a short sketch of this pattern follows the list.
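    As a brief sketch of that last pattern (kept separate from the Web Dictaphone handler shown in the next section), chunks can be collected every few seconds and stitched together when recording stops:

    var chunks = [];
    mediaRecorder.ondataavailable = function(e) {
      chunks.push(;                  // one Blob per 10-second slice
    };
    mediaRecorder.onstop = function() {
      var fullRecording = new Blob(chunks, { type: chunks.length ? chunks[0].type : '' });
      console.log('Recorded ' + fullRecording.size + ' bytes in ' + chunks.length + ' chunks');
    };
    mediaRecorder.start(10000);              // request a dataavailable event every 10 seconds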

    Grabbing and using the blob

    When the blob is finalized and ready for use as described above, a dataavailable event is fired, which can be handled using a mediaRecorder.ondataavailable handler:

    mediaRecorder.ondataavailable = function(e) {
      console.log("data available");
      var clipName = prompt('Enter a name for your sound clip');

      var clipContainer = document.createElement('article');
      var clipLabel = document.createElement('p');
      var audio = document.createElement('audio');
      var deleteButton = document.createElement('button');

      clipContainer.classList.add('clip');
      audio.setAttribute('controls', '');
      deleteButton.innerHTML = "Delete";
      clipLabel.innerHTML = clipName;

      clipContainer.appendChild(audio);
      clipContainer.appendChild(clipLabel);
      clipContainer.appendChild(deleteButton);
      soundClips.appendChild(clipContainer);

      var audioURL = window.URL.createObjectURL(;
      audio.src = audioURL;

      deleteButton.onclick = function(e) {
        var evtTgt =;
        evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
      }
    }

    Let’s go through the above code and look at what’s happening.

    First, we display a prompt asking the user to name their clip.

    Next, we create an HTML structure like the following, inserting it into our clip container, which is a <section> element.

    <article class="clip">
      <audio controls></audio>
      <p><em>your clip name</em></p>
      <button>Delete</button>
    </article>

    After that, we create an object URL pointing to the event’s data attribute, using window.URL.createObjectURL(; this attribute contains the Blob of the recorded audio. We then set the value of the <audio> element’s src attribute to the object URL, so that when the play button is pressed on the audio player, it will play the Blob.

    Finally, we set an onclick handler on the delete button to be a function that deletes the whole clip HTML structure.


    And there you have it; MediaRecorder should serve to make your app’s media recording needs easier. Have a play around with it and let us know what you think: we are looking forward to seeing what you’ll build!

  7. Audio Tags: Web Components + Web Audio = ♥

    Article written by Soledad Penadés, edited by Angelina Fabbro.

    Last week we released Brick 1.0, our carefully curated set of web components for rapid development. Using components makes it very easy to use and integrate these UI widgets with existing code and frameworks.

    And this week we bring you Audio Tags, an experiment building Web Components that represent Web Audio blocks that let us construct a complete instrument with an interface to play it. With reusable audio blocks, developers can experiment with Web Audio without having to write a lot of boilerplate code.

    Let’s build a simple synthesiser to demonstrate how the different tags work together!

    The Audio Context

    The first thing we need is an audio context. If you’ve ever done any Canvas programming, this will sound familiar. The context is akin to a toolbox: it’s got the functions (the tools) that you need and it’s also where everything happens. All other audio tags will be placed inside a context.

    This is what an audio context looks like when using Audio Tags:

    <audio-context></audio-context>

    That’s it!


    While being able to create an audio context by typing just one tag declaration is great, it is not particularly exciting if we can’t get any audible output. We need something that generates a sound, and for this we’ll start with something simple and use an oscillator. As the name implies, the output is a signal that oscillates between two values: -1 and 1, generating a periodic waveform. We will place it inside an audio context to have its output automatically routed via the context’s output to the computer’s speakers:

    <audio-context>
        <audio-oscillator></audio-oscillator>
    </audio-context>

    Context with oscillator

    (See it live).

    In the real world, oscillators can generate different signal shapes. Likewise, in the Web Audio world we have analogous wave types that we can use: sine, square, sawtooth, and triangle. Since web components are first class DOM elements, we can specify the desired wave type by using an attribute:

    <audio-oscillator type="square"></audio-oscillator>

    You could even change it live by opening the console and typing this:

    document.querySelector('audio-oscillator').type = 'square';

    Similarly, you can also change the frequency the oscillator is running at by setting the frequency attribute:

    <audio-oscillator frequency="220"></audio-oscillator>


    Having a running oscillator is just the first step. Most synthesisers available have more than one oscillator playing at the same time to make the sound more complex and nuanced. We need some way of playing two or more sounds in parallel, while combining them into a single output.

    This is commonly known as mixing audio, and therefore we need a mixer:

    <audio-context>
        <audio-mixer>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscillator frequency="440"></audio-oscillator>
        </audio-mixer>
    </audio-context>

    The mixer will take the output of each of its children, and join them together to form its own output, which is then connected to the context’s output. Note also that since we’re dealing with DOM elements, when we say “children” we literally mean the mixer’s DOM children elements.



    Chain (and oscilloscope)

    When you start adding multiple sounds it’s useful to be able to see what is going on in the synthesiser. What if we could, somehow, plug a component between the output of one child and the input of another, and display what the sound wave looks like at that point?

    We can’t do that with the mixer, because it just joins all the outputs together. We need a new abstract structure: chains. An audio chain will connect the output of its first child to the input of the second child, the output of the second child to the input of the third one, and so on, until we reach the last child and just connect its output to the chain’s output.

    Or in other words: while the mixer connects things in parallel, the chain connects them serially.

    Let’s connect a new element –the oscilloscope– to the output of an oscillator, using a chain. The oscilloscope will just display what is connected to its input, and the signal will pass through to its output without being modified at all. You can change the oscillator’s wave type to square, and see how the oscilloscope changes its display accordingly.

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscilloscope></audio-oscilloscope>
        </audio-chain>
    </audio-context>




    Synthesisers don’t limit themselves to just running several oscillators at the same time. They often add postprocessing units to this raw generated audio, which give the synthesiser its own distinctive sound.

    There are many types of postprocessing effects, and some of the most popular are filters, which roughly work by highlighting certain frequencies or removing others. For example, we can chain a low pass filter to the output of an oscillator, and that would only allow the lower frequencies to go through. This produces a sort of dampening effect, as if we had put on some ear muffs, because higher frequencies travel through the air and we hear them with our ear pavilions, while lower frequencies tend to travel through the earth and objects too. So rather than hearing them, we feel them with our body, and it doesn’t matter whether you have something over your ears or not.

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-filter type="lowpass"></audio-filter>
        </audio-chain>
    </audio-context>



    Web Audio natively implements biquad pole filters, and as happens with the audio-oscillator tag, you can alter the filter behaviour by setting its type attribute. For example:

    <audio-filter type="highpass"></audio-filter>

    You could even insert several oscilloscopes: one before and another after a filter, to see the effect the filter has on the signal:

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscilloscope></audio-oscilloscope>
            <audio-filter type="lowpass"></audio-filter>
            <audio-oscilloscope></audio-oscilloscope>
        </audio-chain>
    </audio-context>

    Filter with two oscilloscopes


    And finally, the minisynth

    We have enough components to build a synthesiser now! We want two oscillators playing together (one an octave higher than the other), and a filter to make the sound a little bit less harsh and more “self-contained”. So, without further ado, this is the structure for representing our minimal synth, the <mini-synth>, using the components we’ve introduced so far:

    <audio-chain>
        <audio-mixer>
            <audio-oscillator></audio-oscillator>
            <audio-oscillator></audio-oscillator>
        </audio-mixer>
        <audio-filter type="lowpass"></audio-filter>
    </audio-chain>

    For the sake of comparison, this is more or less how we would assemble a similar setup using raw Web Audio API objects and functions:

    var mixerGain = context.createGain();
    var osc1 = context.createOscillator();
    var osc2 = context.createOscillator();
    osc1.connect(mixerGain);
    osc2.connect(mixerGain);
    var filter = context.createBiquadFilter();
    mixerGain.connect(filter);
    // and the actual output is at *filter*

    It’s not that the code is particularly complicated, it just doesn’t have the nice visual hierarchy of the declarative syntax. The visual cues from the syntax make understanding the relationship between elements quick and easy.

    We still need a few lines of JavaScript to make the <mini-synth> component behave like a synthesiser: it has to start and stop both oscillators at the same time. We can take advantage of the fact that the AudioTag prototype has some common base methods that we can overload to get specific behaviours in our custom components.

    In this particular case we’ll overload the start and stop methods to make the oscillators start and stop playing respectively when we call those methods in the synth. This way we abstract the internals of the synthesiser from the world, while still exposing a consistent interface.

    start: function(when) {
        // We want to make sure we don't clip (i.e. go under -1 or over 1),
        // so we'll divide the gain by the number of oscillators in the synth
        var oscGain = this.oscillators.length > 0 ? 1.0 / this.oscillators.length : 1.0;
        this.oscillators.forEach(function(osc) {
            osc.gain = oscGain;
            osc.start(when);
        });
    },

    stop: function(when) {
        this.oscillators.forEach(function(osc) {
            osc.stop(when);
        });
    }

    The implementation should be fairly easy to follow.

    You might be wondering about the when parameter. It is used to tell the browser when to actually start the action, so that you can schedule various events in the future with accurate timing. It means “execute this code at time when” (in the Web Audio spec, when is expressed in seconds). In our case we’re just using a value of 0, which means “do that immediately”. I advise you to read more about when in the Web Audio spec.

    We also need to implement a method for actually telling the synthesiser which note to play, or in other words, which frequency should each oscillator be running at. So let’s implement noteOn:

    noteOn: function(noteNumber) {
        this.oscillators.forEach(function(osc, index) {
            // Each oscillator should play in a higher octave
            // Each octave is composed of 12 notes
            var oscNoteNumber = noteNumber + 12 * index;
            // We're using a library to convert note numbers to frequencies
            var frequency = MIDIUtils.noteNumberToFrequency(oscNoteNumber);
            osc.frequency = frequency;
        });
    }

    You don’t need to use MIDIUtils, but it comes in handy if you ever want to jam with an instrument in your browser and someone else using a more traditional MIDI instrument. By using standard frequencies you can be sure that both your instruments will be tuned to the same pitch, and that is GOOD.

    We also need a way for triggering notes in the synthesiser, so what better way than having an on screen keyboard component?

    <audio-keyboard octaves="2"></audio-keyboard>

    will insert a keyboard component with 2 octaves. Once the keyboard gets focus (by clicking on it) you can tap keys on your computer’s keyboard and it will emit noteon events. If we listen to those, we can then send them to the synthesiser. And the same goes for the noteoff events:

    keyboard.addEventListener('noteon', function(e) {
      var noteIndex = e.detail.index;
      // 48 is the base note here = C-3
      minisynth.noteOn(parseInt(noteIndex, 10) + 48);
    }, false);

    keyboard.addEventListener('noteoff', function(e) {
      minisynth.noteOff(); // stops the oscillators, defined analogously to noteOn
    }, false);

    So it is DEMO time!


    And now that we have a synthesiser we can say we’re rockstars! But rockstars need to look cool. Real-life rockstars have their own signature guitars and customised cabinets. And we have… CSS! We can go as wild as we want with CSS, so just press the Become a rockstar button on the demo and watch as the synthesiser becomes something else thanks to the magic of CSS.

    Minisynth, rockstarified

    Looking behind the curtains

    So far we’ve only talked about these fancy new audio tags and assumed that they are magically available in your browser, even though it is obvious they are non-standard elements. We haven’t explained where they come from. Well, if you’ve read this far already you deserve to be shown the secrets of the kingdom!

    If you look at the source code of any of the examples, you’ll notice that we’re consistently including the AudioTags.bundle.js (line 18) and AudioTags.bundle.css (line 6) files. The CSS is not particularly exciting and the real magic happens in the JavaScript. This file includes a couple of utility libraries that give us the ability to define custom tags in the browser, and then the code for defining and making available these new tags in the browser.

    On the utility side, we first include AudioContext-MonkeyPatch, for unifying Web Audio API disparities between browsers and enabling us to use a modern, consistent syntax. If you want to know more about writing portable Web Audio code, you can have a look at this article.

    The second library we’re including is X-Tag, and more specifically, its very innermost core. X-Tag is a custom elements polyfill, and custom elements are a part of the emerging Web Components spec, meaning this stuff will be built right into the browser soon. X-Tag is the same library that Mozilla Brick uses. You can learn how to make your own custom elements with this article.

    That said, if you plan to use Brick and Audio Tags in the same project, a disaster will probably ensue, since both Brick and Audio Tags include X-Tag’s core in their distribution bundles. The authors of both libraries are discussing the best way to proceed, but we haven’t settled on any action yet, because Audio Tags is such a newcomer to the X-Tag powered library scene. In any case, the most likely outcome is that we’ll offer an option to build Brick and Audio Tags without including the X-Tag core.

    Also, here is a video of this same material at CascadiaJS, so you can watch someone build it right in front of you. It may help your understanding of the topic:

    What’s next for Audio Tags?

    Many people have been asking me what’s next for Audio Tags. What are the upcoming features? What have I planned? How do you contribute? How do we go about adding new tags?

    To be honest? I have no idea! But that’s the beauty of it. This is just a starting point, an invitation to think about, play with, and discuss this notion of declarative audio components. There’s, of course, a list of things that don’t work yet, some random ideas, and maybe potential features in the Audio Tags README file. I will probably keep extending it and filling the gaps; it is a good playground for experimenting with audio without getting too messy, and also a good test for Web Components that go past the usual “encapsulated UI widgets on steroids” notion.

    Some people have found the project inspiring in itself; others thought that it would be useful for teaching signal processing, others mixed it with accelerometer data to create physically-controlled synthesisers, and others decided to ditch the audio side of it and just build custom components for WebRTC purposes. It’s up to each one of you to contribute if you feel like doing so!

  8. Monster Madness – creating games on the web with Emscripten

    When our engineering teams at Trendy Entertainment & Nom Nom Games decided on the strategy of developing one of our new Unreal Engine 3 games — Monster Madness Online — as a cross-platform title, we knew that a frictionless multiplayer web browser version would be central to this experience. The big question, however, was determining which essential technologies to utilize in order to bring our game onto the web. As C++-oriented developers, we quickly determined that rewriting the game engine from the ground up was out of the question. We’d need a solution that would allow us to port our existing code in an efficient manner into a format usable in the browser…

    TL;DR? Watch the video!

    Playing the Field

    We looked hard at the various options in front of us: FlasCC (a GCC Flash compiler), Google’s NaCl, a custom native C++ extension, or Mozilla’s Emscripten & asm.js.

    In our tests, Flash ran slowly and had inconsistent behaviors between Pepper (Chrome) and Adobe’s plugin version. Combined with the increasingly onerous plugin requirement, this led us to look elsewhere for a more seamless, forward-thinking approach.

    NaCl had the issue of requiring a walled-garden distribution site that would separate us from direct contact with our users, as well as being processor-specific. PNaCl eliminated the walled-garden requirement and added dynamic code compilation support, but it still had the issues of processor-specific code (necessitating, in our view, device-specific testing) and a potentially long startup time, as the code would be linked on first run. Finally, working only in Chrome would be a dealbreaker for our desire to have our game run in all major browsers.

    A custom plugin/extension with C++ would require lots of testing & maintenance efforts on our part to run across different browsers, processor architectures, and operating systems, and such an installation requirement would likely scare away many potential players.

    As it turned out, for our team’s purposes the Emscripten compiler & asm.js proved to be the best solution to these challenges, and when combined with a set of other new-ish Web APIs, they enabled the browser to become a fully featured platform for instant high-end 3D gaming. It just took a little trial & error to figure out exactly how we’d piece it together… and that’s what we’ll be reviewing here!

    First Steps into a Brave New World

    We Trendy game engineers are primarily old-school C++ programmers, so it was something of a revelation that Emscripten could compile our existing application (built on Epic Games’ Unreal Engine 3) into asm.js-optimized JavaScript with little to no changes.

    The primary Unreal Engine 3-specific code tweaks that were necessary to get the project to compile & run with Emscripten were essentially… 1, 2, 3:

    // 1. Emscripten needs 4-byte alignment
    FNameEntry* Allocate( INT Size )
    {
    #if EMSCRIPTEN
        Size = Align( Size, 4 );
    #endif
        // ...
    }

    // 2. Script execution: LLVM needs aligned data, so copy through an aligned temporary
    #if EMSCRIPTEN
        #define XFER(T) \
        { \
            T Temp; \
            if (!Ar.IsLoading()) \
                appMemcpy(&Temp, &Script(iCode), sizeof(T)); \
            Ar << Temp; \
            if (!Ar.IsSaving()) \
                appMemcpy(&Script(iCode), &Temp, sizeof(T)); \
            iCode += sizeof(T); \
        }
    #else
        #define XFER(T) { Ar << *(T*)&Script(iCode); iCode += sizeof(T); }
    #endif

    // 3. This function needs to synchronously complete IO requests for single-threaded
    //    Emscripten IO to work, so we added a ServiceRequestsSynchronously() call to the
    //    end of it, which flushes & blocks until the IO request finishes.

    No really, that was about it! Within a day of fiddling with it, we had our game’s JavaScript ‘executable’ compiled & running in the browser. It was crashing, since no graphics API was implemented yet, but it was running with log output! Thankfully, we already had Unreal Engine 3’s OpenGL ES2 version of the rendering subsystem ready to utilize, so porting the renderer to WebGL only took another day.

    WebGL appeared to offer essentially a superset of OpenGL ES2’s features, so the shaders and methods matched up by simply changing some API calls. In fact, we were able to make improvements by using WebGL’s floating point render targets for certain postprocessing effects, such as edge outlining and dynamic shadows.

    Postprocessing makes everything prettier!
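    The gist of a floating point render target in WebGL is to enable the OES_texture_float extension and attach a float texture to a framebuffer. Here is a simplified sketch of the idea (not the engine’s actual code; the canvas variable and texture size are placeholders):

    // check for floating point texture support, then build a float render target
    var gl = canvas.getContext("webgl");
    if (gl.getExtension("OES_texture_float")) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 768, 0, gl.RGBA, gl.FLOAT, null);

        var fbo = gl.createFramebuffer();
        gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
        // not every implementation can render to float textures, so verify with
        // gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;
        // postprocessing passes (edge outlines, shadow filtering) can then render into tex
    }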

    But how’s it run?

    Now we had something rendering in the browser, and with a quick change to capture input, we could start playing the game and analyzing its performance. What we found was very encouraging: straight ‘out of the box’, in Firefox the asm.js version of the game was getting nearly 33% of the performance of the native executable. And this was comparing a single-threaded web application to the multi-threaded native executable (so really, not a fair comparison! ;). This was about 2x the performance we saw with our quick Flash port (which we still utilize as a fallback for older browsers that don’t yet support asm.js, though we eventually hope to deprecate it entirely).

    Its performance in Chrome was less astonishing, more like 20% of native performance, but still within our target margins: namely, can it run on a 2011-model MacBook Air at 45-60 FPS (with Vsync disabled)? The answer, thankfully, was yes. We hope Google will continue to improve asm.js performance in their browser over time. But as it currently stands, unless you’re making the browser version of ‘Crysis’ with this tech (which may not be far off), it seems you have enough performance even in Chrome for most kinds of web games.

    60 FPS on an old Macbook Air

    Putting the Pieces into Place

    So within a week of starting, we had turned our Unreal Engine 3 PC game into a well-running, graphically rich web game. But where to take it from here? Well, it still needed: Audio, Networking, Streaming, and Storage. Let’s discuss the techniques used for each of these systems.


    Audio

    This was a no-brainer, as apart from Flash there is really only one robust, standardized web audio system: Web Audio. Again, this API matched up pretty well to its mobile cousin, OpenSL, for which we already had an integration. So once we switched out the various calls, we had sound up and running in the browser.

    There was an apparent issue in Chrome on Mac where sounds flagged as “looping” would sometimes never be destroyed, so we implemented a Chrome-specific hack to manually loop the sound, and filed a bug report with Google. Ah well, one thing we’ve seen with browser APIs is that there’s no 100% guarantee every browser will implement the functionality to perfect specification, but it gets the job done!
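
    For the curious, a manual loop along those lines can be built by scheduling a fresh AudioBufferSourceNode for each pass instead of setting loop = true. This is only a sketch of the idea, not our engine code; audioContext and loopBuffer are assumed to exist already:

    // play `buffer` seamlessly by queuing the next pass shortly before the current one ends
    function scheduleLoop(ctx, buffer, when) {
        var src = ctx.createBufferSource();
        src.buffer = buffer;
        src.connect(ctx.destination);
        src.start(when);
        var next = when + buffer.duration;
        setTimeout(function() {
            scheduleLoop(ctx, buffer, next);
        }, (next - ctx.currentTime - 0.2) * 1000);   // wake up ~200 ms early to queue the next pass
    }
    scheduleLoop(audioContext, loopBuffer, audioContext.currentTime);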


    Networking

    This proved a little trickier. First, we investigated WebRTC, as used in the BananaBread demo, but WebRTC is of course for browser-to-browser communication, which is not what we were looking to do. Our online game service uses a client-server architecture with centralized infrastructure, so WebSockets is the API to utilize in that case. The tricky part is that we have to handle all the WebSockets incoming and outgoing data in JavaScript buffers, and then pass that along to the “C++” Emscripten-compiled game.

    With some callbacks, this worked out, but we also had to take our UDP game server code and place a WebSockets TCP-style layer onto it. Some iteration was necessary to get the packets formatted exactly the way WebSockets expects, but once we did that, our browser game was communicating with our backend-hosted Linux dedicated game servers with no problems!
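
    In broad strokes, the JavaScript side of that bridge looks something like the sketch below; the exported function name and server URL are hypothetical, but Module._malloc, Module.HEAPU8 and Module.ccall are the standard Emscripten facilities for handing binary data to compiled C++ code:

    var socket = new WebSocket("wss://");   // placeholder URL
    socket.binaryType = "arraybuffer";                          // receive binary frames as ArrayBuffers
    socket.onmessage = function (event) {
        var bytes = new Uint8Array(;
        // copy the packet into the Emscripten heap, then hand the pointer to the compiled game code
        var ptr = Module._malloc(bytes.length);
        Module.HEAPU8.set(bytes, ptr);
        Module.ccall("OnPacketReceived", null, ["number", "number"], [ptr, bytes.length]);
        Module._free(ptr);
    };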

    Streaming & Storage

    One advantage of being on the web is easy access to the browser’s asynchronous downloading functionality to stream in content. We certainly made use of this in our game, with the initial download clocking in at under 10 MB. Everything else streams in on demand as you play, using standard browser HTTP download requests: Textures, Sound Effects, Music, even Skeletal Meshes and Level Packages. The bigger question is how to reliably store this content. We don’t want to rely solely on the browser cache, since it can’t guarantee immediate gameplay loading: we can’t pre-query whether something already exists on disk in the regular browser cache or not.

    For this, we used the IndexedDB API, which lets us asynchronously save and retrieve data objects from a secure, abstracted storage location. It works in both Chrome and Firefox, though it is still finicky: the database can occasionally become corrupted (perhaps if terminated during async writes) and has to be regenerated. In the worst case, this simply results in re-downloading content the user had already received.

    We’re currently looking into this issue, but that aside, IndexedDB certainly works well and gives our application standard file IO functionality, useful for storing the content we download. (UPDATE: the Firefox Nightly build as of 12/10 seems to automatically reset the IndexedDB storage if this happens, so it may not recur.)
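
    The basic save/retrieve pattern with IndexedDB is asynchronous throughout; a minimal sketch (the database, store, key names and downloadedArrayBuffer variable are made up for illustration) looks like this:

    var request ="GameAssets", 1);
    request.onupgradeneeded = function (e) {
        // runs on first open (or version bump): create a store for downloaded packages"packages");
    request.onsuccess = function (e) {
        var db =;
        // store a freshly downloaded ArrayBuffer under a package name...
        db.transaction("packages", "readwrite")
          .objectStore("packages")
          .put(downloadedArrayBuffer, "Level01.pak");
        // ...and later read it back before requesting it over HTTP again
        db.transaction("packages")
          .objectStore("packages")
          .get("Level01.pak").onsuccess = function (e) {
              var cached =;     // undefined if we never stored it
          };
    };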

    Play it Now, and Embrace the Future!

    We still have more profiling and tweaking to do, as we’re just now starting to use Firefox’s VTune support to symbolically profile the asm.js performance within the browser. Even so, we’re pretty pleased with where things currently stand. But don’t take our word for it: please try it yourselves right here, no installation or sign-up required:

    Try our demo test anonymously In-browser Here!
    (Please bear with us if our game servers limit access under load, we’re still testing our backend scalability!)

    We at Trendy envision a day when anybody can play any game no matter where they are or what device they happen to have, without friction or gateways or middlemen. With the right combination of these cutting-edge web technologies, that day can be today. We hope other enterprising game developers will join us in reaching players directly through the web, which thanks to Emscripten & asm.js, may well become the most powerful and far-reaching “game console” of all!

  9. Introducing the Whiteboard Drum – WebRTC and Web Audio API magic

    Browser functionality has expanded rapidly, way beyond merely “browsing” a document. Recently, web browsers finally gained audio processing abilities with the Web Audio API. It is powerful enough to build serious music applications.

    Not only that, but it is also very interesting when combined with other APIs. One of these APIs is getUserMedia(), which allows us to capture audio and/or video from the local PC’s microphone / camera devices. Whiteboard Drum (code on GitHub) is a music application, and a great example of what can be achieved using Web Audio API and getUserMedia().

    I demonstrated Whiteboard Drum at the Web Music Hackathon Tokyo in October. It was a very exciting event on the subject of the Web Audio API and Web MIDI API: many instruments can now collaborate with the browser, and the browser can also provide new interfaces to the real world.

    I believe this suggests further possibilities of Web-based music applications, especially using Web Audio API in conjunction with other APIs. Let’s explain how the key features of Whiteboard Drum work, showing relevant code fragments as we go.


    First of all, let me show you a picture from the Hackathon:

    And a short video demo:

    As you can see, Whiteboard Drum plays a rhythm according to the matrix pattern on the whiteboard. There is no magic in the whiteboard itself; it just needs to be pointed at by a webcam. Though I used magnets in the demo, you can draw the markers with a pen if you wish. Each row represents one instrument (Cymbal, Hi-Hat, Snare-Drum and Bass-Drum), and each column represents a timing step. In this implementation, the sequence has 8 steps. A blue marker activates a grid cell normally, and a red marker activates it with an accent.

    The processing flow is:

    1. The whiteboard image is captured by the WebCam
    2. The matrix pattern is analysed
    3. This pattern is fed to the drum sound generators to create the corresponding sound patterns

    Although it uses nascent browser technologies, each process itself is not so complicated. Some key points are described below.

    Image capture by getUserMedia()

    getUserMedia() is a function for capturing video/audio from webcam/microphone devices. It is a part of WebRTC and a fairly new feature in web browsers. Note that the user’s permission is required to get the image from the webcam. If we were just displaying the webcam image on the screen, it would be trivially easy. However, we want to access the image’s raw pixel data in JavaScript for further processing, so we need to draw the image to a canvas and read the pixels back with getImageData().

    Because pixel-by-pixel processing is needed later in this application, the captured image’s resolution is reduced to 400 x 200px; that means one rhythm grid is 50 x 50 px in the rhythm pattern matrix.

    Note: Though most recent laptops/notebooks have embedded webcams, you will get the best results with Whiteboard Drum from an external camera, because the camera needs to be aimed precisely at the picture on the whiteboard. Also, selecting among multiple available input devices/cameras is not currently standardized and cannot be controlled from JavaScript. In Firefox, the camera can be chosen in the permission dialog when connecting; in Google Chrome, it can be set up under the “Content settings” section of the settings screen.

    Get the WebCam video

    We don’t want to show these parts of the processing on the screen, so first we hide the video:

    <video id="video" style="display:none"></video>

    Now to grab the video:

    video = document.getElementById("video");
    navigator.getUserMedia = navigator.getUserMedia ||
        navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    navigator.getUserMedia({video: true},
        function(stream) {       // success: feed the webcam stream into the hidden video element
            video.src = window.URL.createObjectURL(stream);
        },
        function(err) {          // failure: access denied or no camera available
            alert("Camera Error");
        });

    Capture it and get pixel values

    We also hide the canvas:

    <canvas id="capture" width=400 height=200 style="display:none"></canvas>

    Then capture our video data on the canvas:

    var ctx = document.getElementById("capture").getContext("2d");
    function Capture() {
        // draw the current webcam frame onto the hidden 400 x 200 canvas
        ctx.drawImage(video, 0, 0, 400, 200);
    }

    The video from the WebCam will be drawn onto the canvas at periodic intervals.

    Image analyzing

    Next, we need to get the 400 x 200 pixel values with getImageData(). The analyzing phase interprets this 400 x 200 image as an 8 x 4 matrix rhythm pattern, where a single matrix grid cell is 50 x 50 px. All the necessary input data is stored in the array in RGBA format, 4 elements per pixel.

    var pixarray = ctx.getImageData(0, 0, 400, 200).data;
    var rhythmpat = [];
    // lumthresh and redthresh are pre-tuned threshold constants
    for(var x = 0; x < 8; ++x) {
        var px = x * 50;
        rhythmpat[x] = [];
        for(var y = 0; y < 4; ++y) {
            var py = y * 50;
            var lum = 0;
            var red = 0;
            // accumulate luminance and redness over this 50 x 50 px grid cell
            for(var dx = 0; dx < 50; ++dx) {
                for(var dy = 0; dy < 50; ++dy) {
                    var offset = ((py + dy) * 400 + px + dx) * 4;
                    lum += pixarray[offset] * 3 + pixarray[offset+1] * 6 + pixarray[offset+2];
                    red += (pixarray[offset] - pixarray[offset+2]);
                }
            }
            // a "dark" cell is active; if it is also red, it is accented
            if(lum < lumthresh)
                rhythmpat[x][y] = (red > redthresh) ? 2 : 1;
            else
                rhythmpat[x][y] = 0;
        }
    }

    This is honest pixel-by-pixel analysis, done grid cell by grid cell. In this implementation, the analysis looks at luminance and redness: if the grid cell is “dark”, it is activated; if it is also red, it is accented.

    The luminance calculation uses simplified coefficients (R * 3 + G * 6 + B), which gives ten times the usual luminance value, i.e. a range of 0 to 2550 for each pixel. The redness measure (R - B) is an experimental value, because all that is required is a decision between red and blue. The result is stored in the rhythmpat array, with a value of 0 for nothing, 1 for blue, or 2 for red.

    Sound generation through the Web Audio API

    Because the Web Audio API is a cutting-edge technology, it is not yet supported by every web browser. Currently, Google Chrome, Safari, WebKit-based Opera, and Firefox (25 or later) support this API. Note: Firefox 25 is the latest version, released at the end of October.

    For other web browsers, I have developed a polyfill that falls back to Flash: WAAPISim, available on GitHub. It provides almost all functions of the Web Audio API to unsupported browsers, for example Internet Explorer.

    The Web Audio API is a large-scale specification, but in our case the sound generation part requires only very simple use of it: load one sound for each instrument and trigger it at the right times. First we create an audio context, taking care of vendor prefixes in the process. The prefixes currently in use are webkit or no prefix.

    audioctx = new (window.AudioContext||window.webkitAudioContext)();

    Next we load sounds to buffers via XMLHttpRequest. In this case, different sounds for each instrument (bd.wav / sd.wav / hh.wav / cy.wav) are loaded into the buffers array:

    var buffers = [];
    var req = new XMLHttpRequest();
    var loadidx = 0;
    var files = ["bd.wav", "sd.wav", "hh.wav", "cy.wav"];
    function LoadBuffers() {"GET", files[loadidx], true);
        req.responseType = "arraybuffer";
        req.onload = function() {
            if(req.response) {
                // decode the downloaded file into an AudioBuffer
                audioctx.decodeAudioData(req.response, function(buf) {
                    buffers[loadidx] = buf;
                    if(++loadidx < files.length)
                        LoadBuffers();          // load the next instrument
                });
            }
        };
        req.send();
    }
    The Web Audio API generates sounds by routing graphs of nodes. Whiteboard Drum uses a simple graph, accessed via an AudioBufferSourceNode and a GainNode. The AudioBufferSourceNode plays back an AudioBuffer and routes it either directly to the destination (output) for a normal (blue) sound, or to the destination via the GainNode for an accented (red) sound. Because an AudioBufferSourceNode can only be used once, it is created afresh for each trigger.

    Preparing the GainNode as the output point for accented sounds is done along these lines (the exact gain value below is illustrative):
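
    gain = audioctx.createGain();
    gain.gain.value = 2;                    // boost for accented hits; the exact value here is illustrative
    gain.connect(audioctx.destination);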


    And the trigger function looks like so:

    function Trigger(instrument, accent, when) {
        if(accent == 0) return;             // 0 = no marker in this grid cell
        var src = audioctx.createBufferSource();
        src.buffer = buffers[instrument];
        // accented (2 / red) hits route through the gain node; normal (1 / blue) hits go straight out
        src.connect(accent == 2 ? gain : audioctx.destination);
        src.start(when);
    }

    All that is left to discuss is the accuracy of the playback timing, according to the rhythm pattern. Though it would be simple to keep creating the triggers with a setInterval() timer, it is not recommended. The timing can be easily messed up by any CPU load.

    To get accurate timing, using the time management system embedded in the Web Audio API is recommended. It calculates the when argument of the Trigger() function above.

    // console.log(nexttick-audioctx.currentTime);
    while(nexttick - audioctx.currentTime < 0.3) {
        var p = rhythmpat[step];
        for(var i = 0; i < 4; ++i)
            Trigger(i, p[i], nexttick);
        if(++step >= 8)
            step = 0;
        nexttick += deltatick;
    }

    In Whiteboard Drum, this code controls the core of the functionality. nexttick contains the accurate time (in seconds) of the next step, while audioctx.currentTime is the accurate current time (again, in seconds). This routine is called periodically, and each time it runs it schedules, in advance, any steps that fall within the next 300 ms (that is, while nexttick - audioctx.currentTime < 0.3). The commented-out console.log will print the timing margin; if that value ever becomes negative, the scheduling has fallen behind.
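
    For completeness, the loop above has to be driven by something, and step, nexttick and deltatick need to be initialized. A sketch of one way to do that (the tempo, timer interval and the ScheduleSteps() wrapper name are illustrative, not the app’s exact values):

    var bpm = 120;
    var deltatick = (60 / bpm) / 2;         // assuming each of the 8 steps lasts half a beat
    var step = 0;
    var nexttick = audioctx.currentTime;
    // ScheduleSteps() would contain the while loop shown above
    setInterval(ScheduleSteps, 100);        // wake up often; the actual timing comes from audioctx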

    For more detail, here is a helpful document: A Tale of Two Clocks – Scheduling Web Audio with Precision

    About the UI

    Especially in music production software such as DAWs or VST plugins, the UI is important. Web applications do not have to emulate these exactly, but something similar would be a good idea. Fortunately, the very handy WebComponent library webaudio-controls is available, allowing us to define knobs or sliders with just a single HTML tag.
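
    For example, a knob can be dropped into the page with a single tag along these lines; the attribute names here are from my reading of the library, so check the webaudio-controls documentation for the exact set:

    <webaudio-knob id="volume" min="0" max="100" value="50"></webaudio-knob>

    The element then behaves like a normal DOM node, so its current value can be read from JavaScript (for example, document.getElementById("volume").value) and fed into a GainNode or any other parameter.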

    NOTE: webaudio-controls uses Polymer.js, which sometimes has stability issues, causing unexpected behavior once in a while, especially when combining it with complex APIs.

    Future work

    This is already an interesting application, but it can be improved further. Obviously, camera position adjustment is an issue. The analysis could be smarter with automatic position adjustment (using some kind of marker?) and adaptive color detection. Sound generation could also be improved, with more instruments, more steps, and more sound effects.

    How about a challenge?

    Whiteboard Drum is available online, and the code is on GitHub.

    Have a play with it and see what rhythms you can create!

  10. Make your Firefox OS app feel alive with video and audio

    Firefox OS applications aren’t just about text: there is no better way to make your app feel alive than adding some video or audio to it. Let’s explore the different ways we, as developers, can use media to enhance our mobile masterpieces.

    Audio and video HTML tags

    Since we are talking about HTML, it makes total sense to think about using the <audio> and <video> tags to play media in your Firefox OS app. If you want to add a video to your application, just use this code.

    <video src="" controls>
      Your browser does not support the video element.
    </video>

    In this code example, the user will see a video player with controls and will have the opportunity to start the video. If your application is running in a browser that does not support the video tag, the user will see the text between the tags. It’s still good practice to include this, even if your primary target is a Firefox OS app: since it uses HTML5, someone may access a hosted app from another browser. Note that you can use other attributes for this element.

    As for the audio tag, it’s basically the same.

    <audio id="demo" src="/music/audio.mp3" autoplay loop></audio>

    In this example, the audio will start automatically, and will play the audio file, in a loop, from the relative path: it’s perfect for background music if you are building a game. Note that you can add other attributes to this element too.

    Of course, using those elements without JavaScript gives you only basic features, but no worries: you can control them programmatically. Once you have your HTML element, like the audio example you just saw, you can use JavaScript to play, pause, change the volume, and more.

    document.querySelector("#demo").play(); //Play the Audio
    document.querySelector("#demo").pause(); //Pause the Audio
    document.querySelector("#demo").volume+=0.1; //Increase Volume
    document.querySelector("#demo").volume-=0.1; //Decrease Volume

    You can read more about what you can do with these two elements in the Mozilla Developer Network documentation. You may also want to take a closer look at the list of supported formats.

    Use audio while the screen is locked

    Maybe you are building a podcast app, or you simply need to be able to play audio while the screen is locked? There is a way to do this using the audio tag: you simply need to add the mozaudiochannel attribute with the value content to your tag.

    <audio mozaudiochannel="content" preload="none"></audio>

    Actually, that’s not quite enough, as this code won’t work as is: you also need to add a permission to the manifest file.

    "permissions": {
        "description":"Use the audio channel for the music player"

    Having the manifest entry above will authorize your application to use the audio channel to play music, even when the screen is locked. Having said that, you probably realize that this code is specific to Firefox OS for now. I intentionally put the end of the last sentence in bold, as it’s one thing you need to understand about Firefox OS: we had to create some APIs, features or elements to give HTML the power it deserves for developers, but we are working with the W3C to make those standards. If the standards end up differing from what we created, we’ll change our implementation to reflect them.

    Firefox OS Web activities

    Finally, something very handy for Firefox OS developers: Web Activities. They define a way for applications to delegate an activity to another (usually user-chosen) application. They aren’t standardized at the time of writing. In the case that interests us here, we’ll use the open Web Activity to open music or video files. Note that for video you can also use the view activity, which basically does the same thing. Let’s say I want to open a remote video when someone clicks a button with the id open-video: I’ll use the following code in my JavaScript to make it happen.

    var openVideo = document.querySelector("#open-video");
    if (openVideo) {
        openVideo.onclick = function () {
            var openingVideo = new MozActivity({
                name: "open",
                data: {
                    // the video MIME types we want the handling app to accept
                    type: ["video/webm", "video/mp4", "video/3gpp"],
                    url: ""
                }
            });
        };
    }

    In that situation, the video player of Firefox OS will open, and play the video: it’s that easy!

    In the end…

    You may or may not need these tricks in your app, but adding video or audio can enhance the quality of your application and make it feel alive. In the end, you have to give your users a strong experience, and that’s what will make the difference between a good app and a great one!