WebAPI Articles

  1. Birdsongs, Musique Concrète, and the Web Audio API

    In January 2015, my friend and collaborator Brian Belet and I presented Oiseaux de Même — an audio soundscape app created from recordings of birds — at the first Web Audio Conference. In this post I’d like to describe my experience of implementing this app using the Web Audio API, Twitter Bootstrap, Node.js, and REST APIs.

    Screenshot showing Birds of a Feather, a soundscape created with field recordings of birds that are being seen in your vicinity.

    What is it? Musique Concrète and citizen science

    We wanted to create a web-based Musique Concrète, building an artistic sound experience by processing field recordings. We decided to use xeno-canto — a library of over 200,000 recordings of 9,000 different bird species — as our source of recordings. Almost all the recordings are licensed under Creative Commons by their generous recordists. We select recordings from this library based on data from eBird, a database of tens of millions of bird sightings contributed by bird watchers everywhere. By using the Geolocation API to retrieve eBird sightings near the listener’s location, our soundscape can consist of recordings of bird species that bird watchers have reported recently near the listener — each user gets a personalized soundscape that changes daily.
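
    For illustration, here is roughly how the listener’s position can be obtained with the Geolocation API. The fetchNearbySightings() helper below is hypothetical, standing in for the eBird query, and is not part of our published code:

    navigator.geolocation.getCurrentPosition(function(position) {
      var lat = position.coords.latitude;
      var lon = position.coords.longitude;
      // hypothetical helper: ask eBird for recent sightings near (lat, lon)
      fetchNearbySightings(lat, lon, function(sightings) {
        // pick species from `sightings` and load matching xeno-canto recordings
      });
    }, function(err) {
      console.log('Could not get a location: ' + err.message);
    });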

    Use of the Web Audio API

    We use the browser’s Web Audio API to play back the sounds from xeno-canto. The Web Audio API allows developers to play back, record, analyze, and process sound by creating AudioNodes that are connected together, like an old modular synthesizer.

    Our soundscape is implemented using four AudioBuffer nodes, each of which plays a field recording in a loop. These loops are placed in a stereo field using Panner nodes, and mixed together before being sent to the listener’s speakers or headphones.
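
    The following is a rough sketch, simplified from the app’s actual code, of how one such loop is wired up:

    var audioCtx = new AudioContext();
     
    // One looping field recording: source -> panner -> gain -> output.
    // The real app creates four of these and mixes them together.
    function playLoop(decodedBuffer) {
      var source = audioCtx.createBufferSource();
      source.buffer = decodedBuffer;
      source.loop = true;                      // keep the recording looping
     
      var panner = audioCtx.createPanner();    // position in the stereo field
      var gain = audioCtx.createGain();        // per-loop level before the mix
     
      source.connect(panner);
      panner.connect(gain);
      gain.connect(audioCtx.destination);
      source.start();
     
      return { source: source, panner: panner, gain: gain };
    }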

    Controls

    After all the sounds have loaded and begin playing, we offer users several controls for manipulating the sounds as they play:

    • The Pan button randomizes the spatial location of the sound in 3D space.
    • The Rate button randomizes the playback rate.
    • The Reverse button reverses the direction of sound playback.
    • Finally, the Share button lets you capture the state of the soundscape and save that snapshot for later.

    The controls described above are implemented as typical JavaScript event handlers. When the Pan button is pressed, for example, we run this handler:

    // sets the X,Y,Z position of the Panner to random values between -1 and +1
    BirdSongPlayer.prototype.randomizePanner = function() {
      this.resetLastActionTime();
      // NOTE: x = -1 is LEFT
      this.panPosition = { x: 2 * Math.random() - 1, y: 2 * Math.random() - 1, z: 2 * Math.random() - 1}
      this.panner.setPosition( this.panPosition.x, this.panPosition.y, this.panPosition.z);
    }

    Some parts of the Web Audio API are write-only

    I had a few minor issues where I had to work around shortcomings in the Web Audio API. Other authors have already documented similar experiences; I’ll summarize mine briefly here:

    • Can’t read Panner position: In the event handler for the Share button, I want to retrieve and store the current Audio Buffer playback rate and Panner position. However, the current Panner node does not allow retrieval of the position after setting it. Hence, I store the new Panner position in an instance variable in addition to calling setPosition().

      This has had a minimal impact on my code so far. My longer-term concern is that I’d rather store the position in the Panner and retrieve it from there, instead of storing a copy elsewhere. In my experience, keeping multiple copies of the same information becomes a readability and maintainability problem as code grows bigger and more complex.

    • Can’t read AudioBuffer’s playbackRate: The Rate button described above calls linearRampToValueAtTime() on the playbackRate AudioParam. As far as I can tell, AudioParams don’t let me retrieve their values after calling linearRampToValueAtTime(), so I’m obliged to keep a duplicate copy of this value in my JS object (see the sketch after this list).
    • Can’t read AudioBuffer playback position: I’d like to show the user the current playback position for each of my sound loops, but the API doesn’t provide this information. Could I compute it myself? Unfortunately, after a few iterations of ramping an AudioBuffer’s playbackRate between random values, it is very difficult to compute the current playback position within the buffer. Unlike some API users, I don’t need a highly accurate position; I just want to visualize for my users when the current sound loop restarts.
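
    Here is a sketch of that workaround. The names randomizeRate, this.source, and this.audioContext are illustrative, not necessarily the app’s actual ones:

    // Ramp the playback rate to a random value and keep our own copy,
    // because the AudioParam can't be read back after the ramp is scheduled.
    BirdSongPlayer.prototype.randomizeRate = function() {
      var newRate = 0.5 + Math.random();   // between 0.5x and 1.5x speed
      this.playbackRate = newRate;         // duplicate copy, used by the Share button
      this.source.playbackRate.linearRampToValueAtTime(newRate, this.audioContext.currentTime + 2);
    };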

    Debugging with the Web Audio inspector

    Firefox’s Web Audio inspector shows how Audio Nodes are connected to one another.

    I had great success using Firefox’s Web Audio inspector to watch my Audio Nodes being created and interconnected as my code runs.

    In the screenshot above, you can see the four AudioBufferSources, each feeding through a GainNode and PannerNode before being summed by an AudioDestination. Note that each recording is also connected to an AnalyserNode; the Analysers are used to create the scrolling amplitude graphs for each loop.

    Visualizing sound loops

    As the soundscape evolves, users often want to know which bird species is responsible for a particular sound they hear in the mix. We use a scrolling visualization for each loop that shows instantaneous amplitude, creating distinctive shapes you can correlate with what you’re hearing. The visualization uses the AnalyserNode to perform a fast Fourier transform (FFT) on the sound, which yields the amplitude of the sound at every frequency. We compute the average of all those amplitudes, and then draw that amplitude at the right edge of a Canvas. As the contents of the Canvas shift sideways on every animation frame, the result is a horizontally scrolling amplitude graph.

    BirdSongPlayer.prototype.initializeVUMeter = function() {
      // set up VU meter
      var myAnalyser = this.analyser;
      var volumeMeterCanvas = $(this.playerSelector).find('canvas')[0];
      var graphicsContext = volumeMeterCanvas.getContext('2d');
      var previousVolume = 0;
     
      requestAnimationFrame(function vuMeter() {
        // get the average, bincount is fftsize / 2
        var array =  new Uint8Array(myAnalyser.frequencyBinCount);
        myAnalyser.getByteFrequencyData(array);
        // getAverageVolume() is a small helper, defined elsewhere in the app, that averages the byte values
        var average = getAverageVolume(array);
        average = Math.max(Math.min(average, 128), 0);
     
        // draw the rightmost line in black right before shifting
        graphicsContext.fillStyle = 'rgb(0,0,0)'
        graphicsContext.fillRect(258, 128 - previousVolume, 2, previousVolume);
     
        // shift the drawing over one pixel
        graphicsContext.drawImage(volumeMeterCanvas, -1, 0);
     
        // clear the rightmost column state
        graphicsContext.fillStyle = 'rgb(245,245,245)'
        graphicsContext.fillRect(259, 0, 1, 130);
     
        // set the fill style for the last line (matches bootstrap button)
        graphicsContext.fillStyle = '#5BC0DE'
        graphicsContext.fillRect(258, 128 - average, 2, average);
     
        requestAnimationFrame(vuMeter);
        previousVolume = average;
      });
    }

    What’s next

    I’m continuing to work on cleaning up my JavaScript code for this project. I have several user interface improvements suggested by my Mozilla colleagues that I’d like to try. And Prof. Belet and I are considering what other sources of geotagged sounds we can use to create more soundscapes. In the meantime, please try Oiseaux de Même for yourself and let us know what you think!

  2. What’s new in Web Audio

    Introduction

    It’s been a while since we said anything on Hacks about the Web Audio API. However, with Firefox 37/38 hitting our Developer Edition/Nightly browser channels, there are some interesting new features to talk about!

    This article presents you with some new Web Audio tricks to watch out for, such as the new StereoPannerNode, promise-based methods, and more.

    Simple stereo panning

    Firefox 37 introduces the StereoPannerNode interface, which allows you to add a stereo panning effect to an audio source simply and easily. It takes a single property: pan—an a-rate AudioParam that can accept numeric values between -1.0 (full left channel pan) and 1.0 (full right channel pan).

    But don’t we already have a PannerNode?

    You may have already used the older PannerNode interface, which allows you to position sounds in 3D. Connecting a sound source to a PannerNode causes it to be “spatialised”, meaning that it is placed into a 3D space and you can then specify the position of the listener within it. The browser then figures out how to make the sources sound, applying panning and Doppler shift effects, and other nice 3D “artifacts” if the sounds are moving over time:

    var audioContext = new AudioContext();
    var pannerNode = audioContext.createPanner();
     
    // The listener is 100 units to the right of the 3D origin
    audioContext.listener.setPosition(100, 0, 0);
     
    // The panner is in the 3D origin
    pannerNode.setPosition(0, 0, 0);

    This works well with WebGL-based games as both environments use similar units for positioning—an array of x, y, z values. So you could easily update the position, orientation, and velocity of the PannerNodes as you update the position of the entities in your 3D scene.
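
    For instance, a sketch of keeping a PannerNode in sync with a game entity might look like this (the entity object and its properties are hypothetical, not part of any API):

    // Call this each frame from your update loop so the sound source
    // follows the entity it belongs to.
    function syncPannerToEntity(panner, entity) {
      panner.setPosition(entity.x, entity.y, entity.z);
      panner.setOrientation(entity.facingX, entity.facingY, entity.facingZ);
    }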

    But what if you are just building a conventional music player where the songs are already stereo tracks, and you actually don’t care at all about 3D? You have to go through a more complicated setup process than should be necessary, and it can also be computationally more expensive. With the increased usage of mobile devices, every operation you don’t perform is a bit more battery life you save, and users of your website will love you for it.

    Enter StereoPannerNode

    StereoPannerNode is a much better solution for simple stereo use cases, as described above. You don’t need to care about the listener’s position; you just need to connect source nodes that you want to spatialise to a StereoPannerNode instance, then use the pan parameter.

    To use a stereo panner, first create a StereoPannerNode using createStereoPanner(), and then connect it to your audio source. For example:

    var audioCtx = new AudioContext();
    // You can use any type of source
    var source = audioCtx.createMediaElementSource(myAudio);
    var panNode = audioCtx.createStereoPanner();
     
    source.connect(panNode);
    panNode.connect(audioCtx.destination);

    To change the amount of panning applied, you just update the pan property value:

    panNode.pan.value = 0.5; // places the sound halfway to the right
    panNode.pan.value = 0.0; // centers it
    panNode.pan.value = -0.5; // places the sound halfway to the left

    See http://mdn.github.io/stereo-panner-node/ for a complete example.

    Also, since pan is an a-rate AudioParam you can design nice smooth curves using parameter automation, and the values will be updated per sample. Trying to do this kind of change over time would sound weird and unnatural if you updated the value over multiple requestAnimationFrame calls. And you can’t automate PannerNode positions either.

    For example, this is how you could set up a panning transition from left to right that lasts two seconds:

    panNode.pan.setValueAtTime(-1, audioContext.currentTime);
    panNode.pan.linearRampToValueAtTime(1, audioContext.currentTime + 2);

    The browser will take care of updating the pan value for you. And now you can also visualise these curves using the Firefox Devtools Web Audio Editor.

    Detecting when StereoPannerNode is available

    It might be the case that the Web Audio implementation you’re using has not implemented this type of node yet. (At the time of this writing, it is supported in Firefox 37 and Chrome 42 only.) If you try to use StereoPannerNode in these cases, you’re going to generate a beautiful undefined is not a function error instead.

    To make sure StereoPannerNodes are available, just check whether the createStereoPanner() method exists in your AudioContext:

    if (audioContext.createStereoPanner) {
        // StereoPannerNode is supported!
    }

    If it doesn’t, you will need to revert back to the older PannerNode.
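
    One possible fallback looks like the sketch below. Keep in mind the two node types expose different controls (pan versus setPosition()), so the rest of your panning code needs to know which one it got:

    // Prefer StereoPannerNode when available, otherwise fall back to a
    // PannerNode using the cheaper 'equalpower' algorithm.
    function createPanNode(audioCtx) {
      if (audioCtx.createStereoPanner) {
        return audioCtx.createStereoPanner();   // control via .pan.value
      }
      var panner = audioCtx.createPanner();     // control via .setPosition()
      panner.panningModel = 'equalpower';
      return panner;
    }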

    Changes to the default PannerNode panning algorithm

    The default panning algorithm used by PannerNodes used to be HRTF, a high-quality algorithm that renders its output using a convolution with human-measured data (thus it’s very realistic). However, it is also very computationally expensive, requiring the processing to be run in additional threads to ensure smooth playback.

    Authors often don’t require such a high level of quality and just need something that is good enough, so the default PannerNode.panningModel is now equalpower, which is much cheaper to compute. If you want to go back to using the high-quality panning algorithm instead, you just need to change the panningModel:

    pannerNodeInstance.panningModel = 'HRTF';

    Incidentally, a PannerNode using panningModel = 'equalpower' results in the same algorithm that StereoPannerNode uses.

    Promise-based methods

    Another interesting feature that has been recently added to the Web Audio spec is Promise-based versions of certain methods. These are OfflineAudioContext.startRendering() and AudioContext.decodeAudioData().

    The below sections show how the method calls look with and without Promises.

    OfflineAudioContext.startRendering()

    Let’s suppose we want to generate a minute of audio at 44100 Hz. We’d first create the context:

    var offlineAudioContext = new OfflineAudioContext(2, 44100 * 60, 44100);

    Classic code

    offlineAudioContext.addEventListener('complete', function(e) {
        // rendering complete, results are at `e.renderedBuffer`
    });
    offlineAudioContext.startRendering();

    Promise-based code

    offlineAudioContext.startRendering().then(function(renderedBuffer) {
        // rendered results in `renderedBuffer`
    });

    AudioContext.decodeAudioData

    Likewise, when decoding an audio track we would create the context first:

    var audioContext = new AudioContext();

    Classic code

    audioContext.decodeAudioData(data, function onSuccess(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    Promise-based code

    audioContext.decodeAudioData(data).then(function(decodedBuffer) {
        // decoded data is decodedBuffer
    }, function onError(e) {
        // guess what... something didn't work out well!
    });

    In both cases the differences don’t seem major, but if you’re composing the results of promises sequentially or if you’re waiting on an event to complete before calling several other methods, promises are really helpful to avoid callback hell.
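
    For example, decoding a track and then rendering it offline chain together naturally with promises. This is only a sketch, reusing the two contexts created above and assuming data is an ArrayBuffer of encoded audio:

    audioContext.decodeAudioData(data)
      .then(function(decodedBuffer) {
        // feed the decoded audio into the offline graph...
        var source = offlineAudioContext.createBufferSource();
        source.buffer = decodedBuffer;
        source.connect(offlineAudioContext.destination);
        source.start();
        // ...and return the rendering promise to keep the chain flat
        return offlineAudioContext.startRendering();
      })
      .then(function(renderedBuffer) {
        // both steps are finished here, with no nested callbacks
      });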

    Detecting support for Promise-based methods

    Again, you don’t want to get the dreaded undefined is not a function error message if the browser you’re running your code on doesn’t support these new versions of the methods.

    A quick way to check for support: look at the returned type of these calls. If they return a Promise, we’re in luck. If they don’t, we have to keep using the old methods:

    if((new OfflineAudioContext(1, 1, 44100)).startRendering() != undefined) {
        // Promise with startRendering is supported
    }
     
    if((new AudioContext()).decodeAudioData(new ArrayBuffer(1)) != undefined) {
        // Promise with decodeAudioData is supported
    }

    Audio workers

    Although the spec has not been finalised and they are not implemented in any browser yet, it is also worth mentioning Audio Workers, which (you’ve guessed it) are a specialised type of web worker for use by Web Audio code.

    Audio Workers will replace the almost-obsolete ScriptProcessorNode. Originally, ScriptProcessorNodes were the way to run your own custom code inside the audio graph, but they actually run on the main thread, causing all sorts of problems, from audio glitches (if the main thread becomes stalled) to unresponsive UI code (if the ScriptProcessorNodes aren’t fast enough to process their data).

    The biggest feature of audio workers is that they run in their own separate thread, just like any other Worker. This ensures that audio processing is prioritised and we avoid sound glitches, which human ears are very sensitive to.

    There is an ongoing discussion on the w3c web audio list; if you are interested in this and other Web Audio developments, you should go check it out.

    Exciting times for audio on the Web!

  3. Embedding an HTTP Web Server in Firefox OS

    Near the end of last year, Mozilla employees gathered for a week of collaboration and planning. During that week, a group was formed to envision what the future of Firefox OS might look like around a more P2P-focused Web. In particular, we’ve been looking at harnessing technologies to collectively enable offline P2P connections such as Bluetooth, NFC and WiFi Direct.

    Since these technologies only provide a means to communicate between devices, it became immediately clear that we would also need a protocol for apps to send and receive data. I quickly realized that we already have a standard protocol for transmitting data in web apps that we could leverage – HTTP.

    By utilizing HTTP, we would already have everything we’d need for apps to send and receive data on the client side, but we would still need a web server running in the browser to enable offline P2P communications. While this type of HTTP server functionality might be best suited as part of a standardized WebAPI to be baked into Gecko, we actually already have everything we need in Firefox OS to implement this in JavaScript today!

    navigator.mozTCPSocket

    Packaged apps have access to both raw TCP and UDP network sockets, but since we’re dealing with HTTP, we only need to work with TCP sockets. Access to the TCPSocket API is exposed through navigator.mozTCPSocket which is currently only exposed to “privileged” packaged apps with the tcp-socket permission:

    "type": "privileged",
    "permissions": {
      "tcp-socket": {}
    },

    In order to respond to incoming HTTP requests, we need to create a new TCPSocket that listens on a known port such as 8080:

    var socket = navigator.mozTCPSocket.listen(8080);

    When an incoming HTTP request is received, the TCPSocket needs to handle the request through the onconnect handler. The onconnect handler will receive a TCPSocket object used to service the request. The TCPSocket you receive will then call its own ondata handler each time additional HTTP request data is received:

    socket.onconnect = function(connection) {
      connection.ondata = function(evt) {
        console.log(evt.data);
      };
    };

    Typically, an HTTP request will result in a single call to the ondata handler. However, in cases where the HTTP request payload is very large, such as for file uploads, the ondata handler will be triggered each time the buffer is filled, until the entire request payload is delivered.
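
    A rough sketch of accumulating those chunks follows; real header parsing and detecting when the full body has arrived (e.g. via Content-Length) are omitted:

    socket.onconnect = function(connection) {
      var requestData = '';
      connection.ondata = function(evt) {
        // each call delivers one buffer-full of the request
        requestData += evt.data;
        // once the complete payload has arrived, parse requestData and respond
      };
    };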

    In order to respond to the HTTP request, we must send data to the TCPSocket we received from the onconnect handler:

    connection.ondata = function(evt) {
      var response = 'HTTP/1.1 200 OK\r\n';
      var body = 'Hello World!';
     
      response += 'Content-Length: ' + body.length + '\r\n';
      response += '\r\n';
      response += body;
     
      connection.send(response);
      connection.close();
    };

    The above example sends a proper HTTP response with “Hello World!” in the body. Valid HTTP responses must contain a status line which consists of the HTTP version HTTP/1.1, the response code 200 and the response reason OK terminated by a CR+LF \r\n character sequence. Immediately following the status line are the HTTP headers, one per line, separated by a CR+LF character sequence. After the headers, an additional CR+LF character sequence is required to separate the headers from the body of the HTTP response.

    FxOS Web Server

    Now, it is likely that we will want to go beyond simple static “Hello World!” responses to do things like parsing the URL path and extracting parameters from the HTTP request in order to respond with dynamic content. It just so happens that I’ve already implemented a basic-featured HTTP server library that you can include in your own Firefox OS apps!

    FxOS Web Server can parse all parts of the HTTP request for various content types including application/x-www-form-urlencoded and multipart/form-data. It can also gracefully handle large HTTP requests for file uploads and can send large binary responses for serving up content such as images and videos. You can either download the source code for FxOS Web Server on GitHub to include in your projects manually or you can utilize Bower to fetch the latest version:

    bower install justindarc/fxos-web-server --save

    Once you have the source code downloaded, you’ll need to include dist/fxos-web-server.js in your app using a <script> tag or a module loader like RequireJS.

    Simple File Storage App

    Next, I’m going to show you how to use FxOS Web Server to build a simple Firefox OS app that lets you use your mobile device like a portable flash drive for storing and retrieving files. You can see the source code for the finished product on GitHub.

    Before we get into the code, let’s set up our app manifest to get permission to access DeviceStorage and TCPSocket:

    {
      "version": "1.0.0",
      "name": "WebDrive",
      "description": "A Firefox OS app for storing files from a web browser",
      "launch_path": "/index.html",
      "icons": {
        "128": "/icons/icon_128.png"
      },
      "type": "privileged",
      "permissions": {
        "device-storage:sdcard": { "access": "readwrite" },
        "tcp-socket": {}
      }
    }

    Our app won’t need much UI, just a listing of files in the “WebDrive” folder on the device, so our HTML will be pretty simple:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>WebDrive</title>
      <meta name="description" content="A Firefox OS app for storing files from a web browser">
      <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1">
      <script src="bower_components/fxos-web-server/dist/fxos-web-server.js"></script>
      <script src="js/storage.js"></script>
      <script src="js/app.js"></script>
    </head>
    <body>
      <h1>WebDrive</h1>
      <hr>
      <h3>Files</h3>
      <ul id="list"></ul>
    </body>
    </html>

    As you can see, I’ve included fxos-web-server.js in addition to app.js. I’ve also included a DeviceStorage helper module called storage.js since enumerating files can get somewhat complex. This will help keep the focus on our code specific to the task at hand.

    The first thing we’ll need to do is create new instances of the HTTPServer and Storage objects:

    var httpServer = new HTTPServer(8080);
    var storage = new Storage('sdcard');

    This will initialize a new HTTPServer on port 8080 and a new instance of our Storage helper pointing to the device’s SD card. In order for our HTTPServer instance to be useful, we must listen for and handle the “request” event. When an incoming HTTP request is received, the HTTPServer will emit a “request” event that passes the parsed HTTP request as an HTTPRequest object to the event handler.

    The HTTPRequest object contains various properties of an HTTP request including the HTTP method, path, headers, query parameters and form data. In addition to the request data, an HTTPResponse object is also passed to the “request” event handler. The HTTPResponse object allows us to send our response as a file or string and set the response headers:

    httpServer.addEventListener('request', function(evt) {
      var request  = evt.request;
      var response = evt.response;
     
      // Handle request here...
    });

    When a user requests the root URL of our web server, we’ll want to present them with a listing of files stored in the “WebDrive” folder on the device along with a file input for uploading new files. For convenience, we’ll create two helper functions to generate the HTML string to send in our HTTP response. One will just generate the listing of files which we’ll reuse to display the files on the device locally and the other will generate the entire HTML document to send in the HTTP response:

    function generateListing(callback) {
      storage.list('WebDrive', function(directory) {
        if (!directory || Object.keys(directory).length === 0) {
          callback('<li>No files found</li>');
          return;
        }
     
        var html = '';
        for (var file in directory) {
          html += `<li><a href="/${encodeURIComponent(file)}" target="_blank">${file}</a></li>`;
        }
     
        callback(html);
      });
    }
     
    function generateHTML(callback) {
      generateListing(function(listing) {
        var html =
    `<!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>WebDrive</title>
    </head>
    <body>
      <h1>WebDrive</h1>
      <form method="POST" enctype="multipart/form-data">
        <input type="file" name="file">
        <button type="submit">Upload</button>
      </form>
      <hr>
      <h3>Files</h3>
      <ul>${listing}</ul>
    </body>
    </html>`;
     
        callback(html);
      });
    }

    You’ll notice that we’re using ES6 Template Strings for generating our HTML. If you’re not familiar with Template Strings, they allow us to write multi-line strings that preserve whitespace and newlines, and they support basic string interpolation, automatically inserting values wherever the ${} syntax appears. This is especially useful for generating HTML because it allows us to span multiple lines, so our template markup remains highly readable when embedded within JavaScript code.

    Now that we have our helper functions, let’s send our HTML response in our “request” event handler:

    httpServer.addEventListener('request', function(evt) {
      var request  = evt.request;
      var response = evt.response;
     
      generateHTML(function(html) {
        response.send(html);
      });
    });

    As of right now, our “request” event handler will always respond with an HTML page listing all the files in the device’s “WebDrive” folder. However, we must first start the HTTPServer before we can receive any requests. We’ll do this once the DOM is ready and while we’re at it, let’s also render the file listing locally:

    window.addEventListener('DOMContentLoaded', function(evt) {
      generateListing(function(listing) {
        list.innerHTML = listing;
      });
     
      httpServer.start();
    });

    We should also be sure to stop the HTTPServer when the app is terminated, otherwise the network socket may never be freed:

    window.addEventListener('beforeunload', function(evt) {
      httpServer.stop();
    });

    At this point, our web server should be up and running! Go ahead and install the app on your device or simulator using WebIDE. Once installed, launch the app and point your desktop browser to your device’s IP address at port 8080 (e.g.: http://10.0.1.12:8080).

    You should see our index page load in your desktop browser, but the upload form still isn’t wired up, and if you have any files in your “WebDrive” folder on your device, they cannot be downloaded yet. Let’s wire up the file upload first, by creating another helper function to save files received in an HTTPRequest:

    function saveFile(file, callback) {
      var arrayBuffer = BinaryUtils.stringToArrayBuffer(file.value);
      var blob = new Blob([arrayBuffer]);
     
      storage.add(blob, 'WebDrive/' + file.metadata.filename, callback);
    }

    This function will first convert the file’s contents to an ArrayBuffer using the BinaryUtils utility that comes with fxos-web-server.js. We then create a Blob that we pass to our Storage helper to save it to the SD card in the “WebDrive” folder. Note that the filename can be extracted from the file’s metadata object since it gets passed to the server using ‘multipart/form-data’ encoding.

    Now that we have a helper for saving an uploaded file, let’s wire it up in our “request” event handler:

    httpServer.addEventListener('request', function(evt) {
      var request  = evt.request;
      var response = evt.response;
     
      if (request.method === 'POST' && request.body.file) {
        saveFile(request.body.file, function() {
          generateHTML(function(html) {
            response.send(html);
          });
     
          generateListing(function(html) {
            list.innerHTML = html;
          });
        });
     
        return;
      }
     
      generateHTML(function(html) {
        response.send(html);
      });
    });

    Now, anytime an HTTP POST request is received that contains a “file” parameter in the request body, we will save the file to the “WebDrive” folder on the SD card and respond with an updated file listing index page. At the same time, we’ll also update the file listing on the local device to display the newly-added file.

    The only remaining part of our app to wire up is the ability to download files. Once again, let’s update the “request” event handler to do this:

    httpServer.addEventListener('request', function(evt) {
      var request  = evt.request;
      var response = evt.response;
     
      if (request.method === 'POST' && request.body.file) {
        saveFile(request.body.file, function() {
          generateHTML(function(html) {
            response.send(html);
          });
     
          generateListing(function(html) {
            list.innerHTML = html;
          });
        });
     
        return;
      }
     
      var path = decodeURIComponent(request.path);
      if (path !== '/') {
        storage.get('WebDrive' + path, function(file) {
          if (!file) {
            response.send(null, 404);
            return;
          }
     
          response.headers['Content-Type'] = file.type;
          response.sendFile(file);
        });
     
        return;
      }
     
      generateHTML(function(html) {
        response.send(html);
      });
    });

    This time, our “request” event handler will check the requested path to see if a URL other than the root is being requested. If so, we assume that the user is requesting to download a file which we then proceed to get using our Storage helper. If the file cannot be found, we return an HTTP 404 error. Otherwise, we set the “Content-Type” in the response header to the file’s MIME type and send the file with the HTTPResponse object.

    You can now re-install the app to your device or simulator using WebIDE and once again point your desktop browser to your device’s IP address at port 8080. Now, you should be able to upload and download files from your device using your desktop browser!

    The possible use cases enabled by embedding a web server into Firefox OS apps are nearly limitless. Not only can you serve up web content from your device to a desktop browser, as we just did here, but you can also serve up content from one device to another. That also means that you can use HTTP to send and receive data between apps on the same device! Since its inception, FxOS Web Server has been used as a foundation for several exciting experiments at Mozilla:

    • wifi-columns

      Guillaume Marty has combined FxOS Web Server with his amazing jsSMS Master System/Game Gear emulator to enable multi-player gaming across two devices in conjunction with WiFi Direct.

    • sharing

      Several members of the Gaia team have used FxOS Web Server and dns-sd.js to create an app that allows users to discover and share apps with friends over WiFi.

    • firedrop

      I have personally used FxOS Web Server to build an app that lets you share files with nearby users without an Internet connection using WiFi Direct. You can see the app in action here:

    I look forward to seeing all the exciting things that are built next with FxOS Web Server!

  4. BroadcastChannel API in Firefox 38

    Recently the BroadcastChannel API landed in Firefox 38. This API can be used for simple messaging between browser contexts that have the same user agent and origin. The API is exposed to both windows and workers and allows communication between iframes, browser tabs, and worker threads.

    The intent of the BroadcastChannel API is to provide an API that facilitates communicating simple messages between browser contexts within a web app. For example, when a user logs into one page of an app it can update all the other contexts (e.g. tabs or separate windows) with the user’s information, or if a user uploads a photo to a browser window, the image can be passed around to other open pages within the app. This API is not intended to handle complex operations like shared resource locking or synchronizing multiple clients with a server. In those cases, a shared worker is more appropriate.

    API Overview

    A BroadcastChannel object can be created by using the following code:

    var myChannel = new BroadcastChannel("channelname");

    The channelname is case sensitive. The BroadcastChannel instance contains two methods, postMessage and close. The postMessage method is used for sending messages to the channel. The message contents can be any object, including complex objects such as images, arrays, and arrays of objects. When the message is posted, all browser contexts with the same origin, user agent, and a BroadcastChannel with the same channel name are notified of the message. For example, the following code posts a simple string:

    myChannel.postMessage("Simple String");

    The dispatched message can be handled by any page with the channel opened by using the onmessage event handler of the BroadcastChannel instance.

    myChannel.onmessage = function(event) {
        console.log(event.data);
    }

    Note that the data attribute of the event contains the data of the message. Other attributes of the event may be of interest as well. For example, you can check the origin of the web app using the event.origin attribute, and the channel name is available in the event.target.name attribute.
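
    For instance, the handler above could also log those attributes (a quick illustrative snippet):

    myChannel.onmessage = function(event) {
        console.log(event.data);          // the message payload
        console.log(event.origin);        // origin of the posting context
        console.log(event.target.name);   // the channel name, "channelname" here
    }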

    The close method is used to close the channel for the particular browser context and can be called by simply using:

    broadCastChannel.close();

    Blob Message Posting

    While posting strings and other primitive types is fairly simple, the BroadcastChannel API supports more complex objects as well. For example, to post an Image from one context to another the following code could be used.

    var broadcastExample = {};
    broadcastExample.setupChannel = function() {
        if ("BroadcastChannel" in window) {
            if (typeof broadcastExample.channel === 'undefined' || 
               !broadcastExample.channel) {
                //create channel
                broadcastExample.channel = new BroadcastChannel("foo");
            }
        }
    }
    broadcastExample.pMessage = function() {
        broadcastExample.setupChannel();
        //get image element
        var img = document.getElementById("ffimg");
        //create canvas with image 
        var canvas = document.createElement("canvas");
        canvas.width = img.width;
        canvas.height = img.height;
        var ctx = canvas.getContext("2d");
        ctx.drawImage(img, 0, 0);
        //get blob of canvas
        canvas.toBlob(function(blob) {
            //broadcast blob
            broadcastExample.channel.postMessage(blob);
        });
    }

    The above code contains two functions and represents the sender side of a BroadcastChannel client. The setupChannel method just opens the Broadcast Channel, and the pMessage method retrieves an image element from the page and converts it to a Blob. Lastly, the Blob is posted to the BroadcastChannel. The receiver page needs to already be listening to the BroadcastChannel in order to receive the message, and can be coded similarly to the following.

    var broadcastFrame = {};
     
    //setup broadcast channel
    broadcastFrame.setup = function() {
        if ("BroadcastChannel" in window) {
            if (typeof broadcastFrame.channel === 'undefined' || !broadcastFrame.channel) {
                broadcastFrame.channel = new BroadcastChannel("foo");
            }
            //function to process broadcast messages
            function func(e) {
                if (e.data instanceof Blob) {
                    //if message is a blob create a new image element and add to page
                    var blob = e.data;
                    var newImg = document.createElement("img"),
                        url = URL.createObjectURL(blob);
                    newImg.onload = function() {
                        // no longer need to read the blob so it's revoked
                        URL.revokeObjectURL(url);
                    };
                    newImg.src = url;
                    var content = document.getElementById("content");
                    content.innerHTML = "";
                    content.appendChild(newImg);
                }
            };
            //set broadcast channel message handler
            broadcastFrame.channel.onmessage = func;
        }
    }
    window.onload = broadcastFrame.setup;

    This code takes the blob from the broadcast message and creates an image element, which is added to the receiver page.

    Workers and BroadcastChannel messages

    The BroadcastChannel API also works with dedicated and shared workers. As a simple example, assume a worker is started and when a message is sent to the worker through the main script, the worker then broadcasts a message to the Broadcast Channel.

    var broadcastExample = {};
    broadcastExample.setupChannel = function() {
        if ("BroadcastChannel" in window) {
            if (typeof broadcastExample.channel === 'undefined' || 
               !broadcastExample.channel) {
                //create channel
                broadcastExample.channel = new BroadcastChannel("foo");
            }
        }
    }
     
    function callWorker() {
        broadcastExample.setupChannel();
        //verify compat
        if (!!window.Worker) {
            var myWorker = new Worker("worker.js");
            //ping worker to post to Broadcast Channel
            myWorker.postMessage("Broadcast from Worker");
        }
    }
     
     
    //Worker.js
    onmessage = function(e) {
        //Broadcast Channel Message with Worker
        if ("BroadcastChannel" in this) {
            var workerChannel = new BroadcastChannel("foo");
            workerChannel.postMessage("BroadcastChannel Message Sent from Worker");
            workerChannel.close();
        }
    }

    The message can be handled in the main page in the same way as described in previous sections of this post.
    Using a Worker to broadcast a message to an iframe

    More Information

    In addition to demonstrating how to use the API in an iframe, another example of using the code in this post is available on GitHub. More information about the BroadcastChannel API is available in the specification, bugzilla entry, and on MDN. The code and test pages for the API are located in the gecko source code.

  5. The P2P Web: Wi-Fi Direct in Firefox OS

    At Mozilla, we foresee that the future of the Web lies in its ability to connect people directly with multiple devices, without using the Internet. Many different technologies exist and are already implemented to allow peer-to-peer connections. Today is the first in a series of articles presenting these technologies. Let me introduce you to Wi-Fi Direct.

    What is it about?

    Wi-Fi Direct is a communication standard from the Wi-Fi Alliance that allows several devices to connect without using a local network. It has the advantage of using Wi-Fi, a fast and commonly available technology.

    This standard allows devices to discover peers and connect to them by enabling a portable Wi-Fi access point on your device that others can connect to. In Wi-Fi Direct parlance, the access point is called a group owner. The devices connected to a group owner are called clients, or peers.

    Wi-Fi Direct diagram

    Enable Wi-Fi Direct on a Flame

    In order to be usable, the device driver must support Wi-Fi Direct, and that’s the case on the Flame. Before starting, though, you need to enable support: run the code in this gist with a rooted device connected to your computer.

    The Wi-Fi Direct API in Firefox OS is only available to certified apps at this time. Don’t forget to request the wifi-manage permission in your manifest.webapp file. This also means that you cannot distribute such an app in the Firefox Marketplace, but you can still use it and play with the technology.

    The JavaScript code in this article uses promises and some of the ES6 structures supported in the latest release of Firefox OS. Make sure you are familiar with these!

    Let’s see who’s around

    The Wi-Fi Direct API is accessible via the navigator.mozWifiP2pManager object.
    The first thing to do: enable Wi-Fi Direct to get a list of the peers that are around. The methods to enable and get the peers are respectively setScanEnabled(true) and getPeerList() and both return promises.

    navigator.mozWifiP2pManager.setScanEnabled(true)
      .then(result => {
        if (!result) {
          throw(new Error('wifiP2pManager activation failed.'));
        }
     
        // Wi-Fi Direct is enabled! Now, let's check who's around.
        return navigator.mozWifiP2pManager.getPeerList();
      })
      .then(peers => {
        console.log(`Number of peers around: ${peers.length}!`);
      });

    Pass true to navigator.mozWifiP2pManager.setScanEnabled() to start Wi-Fi Direct; the promise will resolve to true if activation was successful. Use this to check whether your device supports Wi-Fi Direct or not.

    The promise returned by navigator.mozWifiP2pManager.getPeerList() will resolve with an array of objects representing the peers located around you with Wi-Fi Direct enabled:

    {
      name: "Guillaume's Flame",
      connectionStatus: "disconnected",
      address: "02:0a:f5:f7:38:44", // MAC address of the device
      isGroupOwner: false
    }

    The peer name can be set using navigator.mozWifiP2pManager.setDeviceName(`Guillaume's Flame`). It is a string that allows a human to identify a device.
    As you can guess, the connectionStatus property tells us whether the device is connected or not. It has 4 possible values:

    • ‘disconnected’
    • ‘connecting’
    • ‘connected’
    • ‘disconnecting’

    You can listen to the peerinfoupdate event fired by navigator.mozWifiP2pManager to know when any peer status changes (a new peer appears or disappears, or a peer’s connection status changes). You then need to call getPeerList() to get the update.
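
    A minimal sketch of such a listener, assuming mozWifiP2pManager behaves as a regular event target:

    navigator.mozWifiP2pManager.addEventListener('peerinfoupdate', () => {
      // Something changed: fetch the fresh list and update the UI.
      navigator.mozWifiP2pManager.getPeerList().then(peers => {
        // re-render your list of nearby devices here
      });
    });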

    Knock knock. Who’s there?

    Now that we know the devices located around you, let’s connect with them. We’re going to use navigator.mozWifiP2pManager.connect(address, wps, intent). The 3 parameters stand for:

    1. The MAC address of the device
    2. The WPS method (e.g. ‘pbc’)
    3. A value for intent from 0 to 15

    The MAC address is given in the peer object retrieved with getPeerList().
    The WPS method can be safely ignored for now but keep an eye on MDN for more on this.
    For initial connections, the value of intent will decide who will be the group owner of the Wi-Fi Direct network. The higher the value, the more likely it is that the device issuing the connection is the group owner.

    // Given `peer`, a peer object.
    navigator.mozWifiP2pManager.connect(peer.address, 'pbc', 1)
      .then(result => {
        if (!result) {
          throw(new Error(`Cannot request a connection with ${peer.name}.`));
        }
     
        // The connect request was successfully issued.
      });

    At this stage, you’re still not connected to the peer. The value returned will tell you whether the connection could be initiated or not.
    To know the status of your connection, you need to listen to an event.

    Breaking the ice

    The Wi-Fi Direct object emits different types of events. When another device gets connected, the statuschange event is fired.

    The details of the connection are available in navigator.mozWifiP2pManager.groupOwner. Use the information provided by the following properties:

    • isLocal: whether the device is a group owner or not.
    • ipAddress: the IP address of the group owner.

    More properties are also exposed, but the ones above are enough to build a basic application.

    statuschange is also triggered when the other peer is disconnected using navigator.mozWifiP2pManager.disconnect(address).
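
    Putting this together, a basic statuschange handler might look like the following sketch (assuming groupOwner is unset while no group is active):

    navigator.mozWifiP2pManager.addEventListener('statuschange', () => {
      let groupOwner = navigator.mozWifiP2pManager.groupOwner;
     
      if (!groupOwner) {
        // No group any more: the other peer disconnected.
        return;
      }
     
      if (groupOwner.isLocal) {
        // This device is the group owner and should act as the server.
      } else {
        // Connect to the group owner at groupOwner.ipAddress, e.g. over HTTP.
      }
    });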

    Say hello

    Wi-Fi Direct is merely a connection protocol; it’s now up to you to decide how to communicate. Once connected, you get the IP address of the group owner. This can be used with fxos-web-server, a web server running on your device, to request information. The group owner will then act as a server.

    You can also use p2p-helper.js, a convenient helper library developed by Justin D’Arcangelo to abstract away the complexity of the Wi-Fi Direct API and build an app with ease.

    I recommend you have a look at the following example apps:

    • Firedrop, a sample app to share media files between devices.
    • Wi-Fi Columns, a game working across multiple devices. See the demo below.

    Now, it’s your turn to create mind-blowing apps where multiple devices can interact together without cellular data or an Internet connection!

  6. ServiceWorkers and Firefox

    Since early 2013, Mozillians have been involved with the design of the Service Worker. Thanks to work by Google, Samsung, Mozilla, and others, this exciting new feature of the web platform has evolved to the point that it is being implemented in various web browser engines.

    What are Service Workers?

    At their simplest, Service Workers are scripts that act as client-side proxies for web pages. JavaScript code can intercept network requests, deliver manufactured responses, and perform granular caching based on the unique needs of the application, a capability that the web platform has lacked before now. Making this powerful capability available to web developers enables, among other things, the creation of fully-functioning offline experiences. Jake Archibald has summarized some of these features in his blog post.
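
    To give a flavour of the model, here is a minimal sketch (not tied to the Gecko implementation, and subject to change while the spec evolves): a page registers a worker script, and that script can then intercept the page’s fetches:

    // In the page: register the Service Worker script.
    navigator.serviceWorker.register('/sw.js').then(function(registration) {
      console.log('Service Worker registered with scope:', registration.scope);
    });
     
    // In sw.js: answer requests from a cache when possible, falling back
    // to the network on a cache miss.
    self.addEventListener('fetch', function(event) {
      event.respondWith(
        caches.match(event.request).then(function(cached) {
          return cached || fetch(event.request);
        })
      );
    });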

    Since Service Workers run in the “background”, they open up several possibilities for the Web that were previously only available on native platforms. Apart from the networking capabilities provided by the base specification, Service Workers are intended to be used by the Push API and the Background Sync API to deliver messages from the user-agent to web applications.

    Service Workers in Firefox

    A number of Mozillians have been hard at work implementing Service Workers in Gecko while Anne van Kesteren and Jonas Sicking help with the design and specification. Members of the Necko team and others have provided input from networking and related perspectives. Nikhil Marathe recently published a blog post about the status of Service Workers in Gecko.

    The Service Worker implementation in Gecko is landing in pieces as soon as they are finished and reviewed. For the time being, as the specification continues toward stability and other implementations — notably Blink’s — progress, all functionality in Gecko is behind the dom.serviceWorkers.enabled preference which is set to false by default but can be toggled in about:config.

    Our plan is that web developers will soon be able to exercise most Service Worker functionality in Firefox Nightly with the above preference flipped to true. The best plans can always be waylaid but we hope for this to happen by the end of September 2014 at the latest.

    Status of Service Worker implementations

    The inimitable Jake Archibald has written a tool to easily see the status of Service Worker implementations. You can follow along with the gecko implementation via the meta bug.

  7. Easy audio capture with the MediaRecorder API

    The MediaRecorder API is a simple construct, used together with navigator.getUserMedia(), which provides an easy way of recording media streams from the user’s input devices and instantly using the recordings in web apps. This article provides a basic guide on how to use MediaRecorder, which is supported in Firefox Desktop/Mobile 25 and Firefox OS 2.0.

    What other options are available?

    Capturing media isn’t quite as simple as you’d think on Firefox OS. Using getUserMedia() alone yields raw PCM data, which is fine for a stream, but then if you want to capture some of the audio or video you start having to perform manual encoding operations on the PCM data, which can get complex very quickly.

    Then you’ve got the Camera API on Firefox OS, which until recently was a certified API but has now been downgraded to privileged.

    Web activities are also available to allow you to grab media via other applications (such as Camera).

    The only trouble with these last two options is that they capture video with an audio track, so you would still have to separate the audio if you just wanted an audio track. MediaRecorder provides an easy way to capture just audio (with video support coming later; it is just audio for now).

    A sample application: Web Dictaphone

    An image of the Web dictaphone sample app - a sine wave sound visualization, then record and stop buttons, then an audio jukebox of recorded tracks that can be played back.

    To demonstrate basic usage of the MediaRecorder API, we have built a web-based dictaphone. It allows you to record snippets of audio and then play them back. It even gives you a visualization of your device’s sound input, using the Web Audio API. We’ll concentrate on the recording and playback functionality for this article.

    You can see this demo running live, or grab the source code on Github (direct zip file download.)

    CSS goodies

    The HTML is pretty simple in this app, so we won’t go through it here; there are a couple of slightly more interesting bits of CSS worth mentioning, however, so we’ll discuss them below. If you are not interested in CSS and want to get straight to the JavaScript, skip to the “Basic app setup” section.

    Keeping the interface constrained to the viewport, regardless of device height, with calc()

    The calc function is one of those useful little utility features that’s cropped up in CSS that doesn’t look like much initially, but soon starts to make you think “Wow, why didn’t we have this before? Why was CSS2 layout so awkward?” It allows you to do a calculation to determine the computed value of a CSS unit, mixing different units in the process.

    For example, in Web Dictaphone we have three main UI areas, stacked vertically. We wanted to give the first two (the header and the controls) fixed heights:

    header {
      height: 70px;
    }
     
    .main-controls {
      padding-bottom: 0.7rem;
      height: 170px;
    }

    However, we wanted to make the third area (which contains the recorded samples you can play back) take up whatever space is left, regardless of the device height. Flexbox could be the answer here, but it’s a bit overkill for such a simple layout. Instead, the problem was solved by making the third container’s height equal to 100% of the parent height, minus the heights and padding of the other two:

    .sound-clips {
      box-shadow: inset 0 3px 4px rgba(0,0,0,0.7);
      background-color: rgba(0,0,0,0.1);
      height: calc(100% - 240px - 0.7rem);
      overflow: scroll;
    }

    Note: calc() has good support across modern browsers too, even going back to Internet Explorer 9.

    Checkbox hack for showing/hiding

    This is fairly well documented already, but we thought we’d give a mention to the checkbox hack, which abuses the fact that you can click on the <label> of a checkbox to toggle it checked/unchecked. In Web Dictaphone this powers the Information screen, which is shown/hidden by clicking the question mark icon in the top right hand corner. First of all, we style the <label> how we want it, making sure that it has enough z-index to always sit above the other elements and therefore be focusable/clickable:

    label {
        font-family: 'NotoColorEmoji';
        font-size: 3rem;
        position: absolute;
        top: 2px;
        right: 3px;
        z-index: 5;
        cursor: pointer;
    }

    Then we hide the actual checkbox, because we don’t want it cluttering up our UI:

    input[type=checkbox] {
       position: absolute;
       top: -100px;
    }

    Next, we style the Information screen (wrapped in an <aside> element) how we want it, give it fixed position so that it doesn’t appear in the layout flow and affect the main UI, transform it to the position we want it to sit in by default, and give it a transition for smooth showing/hiding:

    aside {
       position: fixed;
       top: 0;
       left: 0;
       text-shadow: 1px 1px 1px black;
       width: 100%;
       height: 100%;
       transform: translateX(100%);
       transition: 0.6s all;
       background-color: #999;
        background-image: linear-gradient(to top right, rgba(0,0,0,0), rgba(0,0,0,0.5));
    }

    Last, we write a rule to say that when the checkbox is checked (when we click/focus the label), the adjacent <aside> element will have its horizontal translation value changed and transition smoothly into view:

    input[type=checkbox]:checked ~ aside {
      transform: translateX(0);
    }

    Basic app setup

    To grab the media stream we want to capture, we use getUserMedia() (gUM for short). We then use the MediaRecorder API to record the stream, and output each recorded snippet into the source of a generated <audio> element so it can be played back.

    First, we’ll add in a forking mechanism to make gUM work, regardless of browser prefixes, and so that getting the app working on other browsers once they start supporting MediaRecorder will be easier in the future.

    navigator.getUserMedia = ( navigator.getUserMedia ||
                           navigator.webkitGetUserMedia ||
                           navigator.mozGetUserMedia ||
                           navigator.msGetUserMedia);

    Then we’ll declare some variables for the record and stop buttons, and the <article> that will contain the generated audio players:

    var record = document.querySelector('.record');
    var stop = document.querySelector('.stop');
    var soundClips = document.querySelector('.sound-clips');

    Finally for this section, we set up the basic gUM structure:

    if (navigator.getUserMedia) {
       console.log('getUserMedia supported.');
       navigator.getUserMedia (
          // constraints - only audio needed for this app
          {
             audio: true
          },
     
          // Success callback
          function(stream) {
     
     
          },
     
          // Error callback
          function(err) {
             console.log('The following gUM error occurred: ' + err);
          }
       );
    } else {
       console.log('getUserMedia not supported on your browser!');
    }

    The whole thing is wrapped in a test that checks whether gUM is supported before running anything else. Next, we call getUserMedia() and inside it define:

    • The constraints: Only audio is to be captured; MediaRecorder only supports audio currently anyway.
    • The success callback: This code is run once the gUM call has been completed successfully.
    • The error/failure callback: The code is run if the gUM call fails for whatever reason.

    Note: All of the code below is placed inside the gUM success callback.

    Capturing the media stream

    Once gUM has grabbed a media stream successfully, you create a new Media Recorder instance with the MediaRecorder() constructor and pass it the stream directly. This is your entry point into using the MediaRecorder API — the stream is now ready to be captured straight into a Blob, in the default encoding format of your browser.

    var mediaRecorder = new MediaRecorder(stream);

    There are a series of methods available in the MediaRecorder interface that allow you to control recording of the media stream; in Web Dictaphone we just make use of two. First of all, MediaRecorder.start() is used to start recording the stream into a Blob once the record button is pressed:

    record.onclick = function() {
      mediaRecorder.start();
      console.log(mediaRecorder.state);
      console.log("recorder started");
      record.style.background = "red";
      record.style.color = "black";
    }

    When the MediaRecorder is recording, the MediaRecorder.state property will return a value of “recording”.

    Second, we use the MediaRecorder.stop() method to stop the recording when the stop button is pressed, and finalize the Blob ready for use somewhere else in our application.

    stop.onclick = function() {
      mediaRecorder.stop();
      console.log(mediaRecorder.state);
      console.log("recorder stopped");
      record.style.background = "";
      record.style.color = "";
    }

    When recording has been stopped, the state property returns a value of “inactive”.

    Note that there are other ways that a Blob can be finalized and ready for use:

    • If the media stream runs out (e.g. if you were grabbing a song track and the track ended), the Blob is finalized.
    • If the MediaRecorder.requestData() method is invoked, the Blob is finalized, but recording then continues in a new Blob.
    • If you include a timeslice argument when invoking the start() method (for example, start(10000)), then a new Blob will be finalized (and a new recording started) each time that number of milliseconds has passed, as sketched below.
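
    For illustration, here is a minimal sketch of the timeslice variant mentioned in the last point. It is not part of Web Dictaphone itself; the chunks array, the five-second interval, and the audio/ogg MIME type are assumptions made for this example:

    var chunks = [];

    mediaRecorder.ondataavailable = function(e) {
      // With a timeslice, this fires once per finalized chunk rather than once at the end.
      chunks.push(e.data);
    };

    mediaRecorder.onstop = function() {
      // Stitch the collected chunks back into a single Blob.
      var blob = new Blob(chunks, { type: 'audio/ogg; codecs=opus' });
      console.log('Recording finished, total size: ' + blob.size + ' bytes');
      chunks = [];
    };

    // Finalize a new Blob (and fire dataavailable) every 5000 milliseconds.
    mediaRecorder.start(5000);

    // At any point you can also force the current chunk out early:
    // mediaRecorder.requestData();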

    Grabbing and using the blob

    When the blob is finalized and ready for use as described above, a dataavailable event is fired, which can be handled using a mediaRecorder.ondataavailable handler:

    mediaRecorder.ondataavailable = function(e) {
      console.log("data available");
     
      var clipName = prompt('Enter a name for your sound clip');
     
      var clipContainer = document.createElement('article');
      var clipLabel = document.createElement('p');
      var audio = document.createElement('audio');
      var deleteButton = document.createElement('button');
     
      clipContainer.classList.add('clip');
      audio.setAttribute('controls', '');
      deleteButton.innerHTML = "Delete";
      clipLabel.innerHTML = clipName;
     
      clipContainer.appendChild(audio);
      clipContainer.appendChild(clipLabel);
      clipContainer.appendChild(deleteButton);
      soundClips.appendChild(clipContainer);
     
      var audioURL = window.URL.createObjectURL(e.data);
      audio.src = audioURL;
     
      deleteButton.onclick = function(e) {
        var evtTgt = e.target;
        evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
      }
    }

    Let’s go through the above code and look at what’s happening.

    First, we display a prompt asking the user to name their clip.

    Next, we create an HTML structure like the following, inserting it into our clip container, which is a <section> element.

    <article class="clip">
      <audio controls></audio>
      <p><em>your clip name</em></p>
      <button>Delete</button>
    </article>

    After that, we create an object URL pointing to the event’s data attribute, using window.URL.createObjectURL(e.data): this attribute contains the Blob of the recorded audio. We then set the value of the <audio> element’s src attribute to the object URL, so that when the play button is pressed on the audio player, it will play the Blob.

    Finally, we set an onclick handler on the delete button to be a function that deletes the whole clip HTML structure.
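
    As a small optional refinement that isn’t in the original Web Dictaphone code, you could also release the object URL when a clip is deleted, so the browser can free the memory backing the Blob; this sketch assumes the same generated markup as above:

    deleteButton.onclick = function(e) {
      var clip = e.target.parentNode;
      // Release the Blob URL held by the <audio> element before removing the clip.
      window.URL.revokeObjectURL(clip.querySelector('audio').src);
      clip.parentNode.removeChild(clip);
    }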

    Conclusion

    And there you have it; MediaRecorder should make it much easier to handle your app’s media recording needs. Have a play around with it and let us know what you think: we are looking forward to seeing what you’ll build!

  8. WebAPIs – Firefox OS for developers: the platform HTML5 deserves

    In the fifth video of our “Firefox OS – the platform HTML5 deserves” series (part one, part two, part three and part four have already been published) we talk about how Firefox OS extends the capabilities of the Web by adding new APIs, called WebAPIs, to the existing stack of technologies.

    Firefox OS - be the future

    Check out the video featuring Chris Heilmann (@codepo8) from Mozilla and Daniel Appelquist (@torgo) from Telefónica Digital/ W3C talking about the need for device APIs on the Web, how some of the existing APIs can be used and how the work on Firefox OS benefits the Web as a whole. You can watch the video here.

    The WebAPI work is important as it allows apps built with Web technologies to access the hardware. For the work on Firefox OS (which is fully built in HTML5 itself), we very much needed to know the status of the phone, how much battery is left, what the connectivity is like, the screen orientation and many more features. Thus we defined access to the various parts of the hardware as JavaScript APIs and sent these as proposals to the standard bodies.

    If you want to learn more about these new APIs the canonical place to go to is the WebAPI Wiki Page where you can find an up-to-date list of all the APIs, their implementation status in the different Firefox platforms, the standards bodies involved and where to file bugs for them. You can also click through to bugzilla to see demos of the APIs in action. We’ve blogged here about WebAPIs in detail before: Using WebAPIs to make the web layer more capable and you can see a lot of information and demos in that post.

    In general, all the APIs follow a simple model: you ask for access and you define a success and failure handler. You also get methods to ask for various properties in detail and some have Boolean values available to you. This makes it very easy to test for the support of a certain API before trying to access it.
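
    As a concrete illustration of that pattern, here is a rough sketch using the Geolocation API (one of the regular APIs mentioned below); the handlers and messages are generic examples rather than code from the video:

    // Test for support before trying to use the API.
    if ('geolocation' in navigator) {
      navigator.geolocation.getCurrentPosition(
        // Success handler
        function(position) {
          console.log('Latitude: ' + position.coords.latitude +
                      ', longitude: ' + position.coords.longitude);
        },
        // Failure handler
        function(error) {
          console.log('Could not get a position: ' + error.message);
        }
      );
    } else {
      console.log('Geolocation is not supported in this browser.');
    }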

    Not all APIs can be available on the open Web as we can not trust every server out there. That is why the APIs come in three flavours: regular, privileged and certified. Regular APIs can be used in any app, regardless of its location (you can self-host these apps). Examples for that are the geolocation or the battery API. Privileged and Certified APIs both need your app to have a content security policy and be hosted on Mozilla servers. That way we can give you access to the hardware but minimise the potential of abuse and malware at the same time.
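
    To make the distinction a bit more tangible, here is a rough sketch of how a privileged Firefox OS app declares its type and requested permissions in its manifest.webapp; all field values here are illustrative assumptions, not taken from a real app:

    {
      "name": "Example Privileged App",
      "description": "Illustrative manifest for a privileged app",
      "launch_path": "/index.html",
      "type": "privileged",
      "permissions": {
        "geolocation": {
          "description": "Needed to show results near the user"
        }
      }
    }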

    Take a look at the exhaustive list of blog posts here dealing with WebAPIs for more reading and we’ll be back with the next video covering WebActivities soon.

  9. The Proximity API

    Something that’s very nice with bringing the web to the mobile platform with Firefox OS and WebAPIs is the ability to connect more to the physical world. One part of that is the Proximity API, which is also a W3C Working Draft – Proximity Events.

    What it is

    The API is about detecting how close the device is to any other physical object, by accessing the proximity sensor. There are two ways of working with proximity:

    • Device proximity
    • User proximity

    From the spec:

    The DeviceProximityEvent interface provides web developers information about the distance between the hosting device and a nearby object.

    The UserProximityEvent interface provides web developers a user-agent- and platform-specific approximation that the hosting device has sensed a nearby object.

    Using it

    Working with proximity is as easy as adding a couple of event handlers, depending on what you want to do. The deviceproximity event works with the metric system, and you will get back the distance in centimeters (these examples take for granted that you have an existing proximityDisplay element to present the values in). You are also able to check the minimum and maximum distance the sensor in your device supports.

    window.ondeviceproximity = function (event) {
        // Check proximity, in centimeters
        var prox = "<strong>Proximity: </strong>" + event.value + " cm<br />";
            prox += "<strong>Min value supported: </strong>" + event.min + " cm<br />";
            prox += "<strong>Max value supported: </strong>" + event.max + " cm";
        proximityDisplay.innerHTML = prox;
    };

    It should be noted that the values current sensors return are quite small, ranging from 0 up to 5 or 10 centimeters. In general, I believe the main use case has been to detect whether the user is holding the device up to their ear or not, but my personal opinion is that this could prove to be even more exciting if it supported, say, up to a meter!

    With userproximity, the value you get back is a near property on the event object: a boolean indicating whether the user is near the sensor or not.

    window.onuserproximity = function (event) {
        // Check user proximity
        var userProx = "<strong>User proximity - near: </strong>" + event.near + "<br />";
        userProximityDisplay.innerHTML = userProx;
    };

    I’ve also added device and user proximity code and testing to the Firefox OS Boilerplate App, so feel free to get the latest code there and play around with it.

    Web browser support

    The Proximity API is currently supported in Firefox on Android and Firefox OS.

    Conclusion

    Working with proximity is yet another example of crossing the gap between software and real physical context, and showing just how powerful the web can be. We hope to see you creating many interesting things based on it!

  10. Ambient Light Events and JavaScript detection

    I think that one of the most interesting things with all the WebAPIs we’re working on is the ability to interact directly with the hardware through JavaScript, but also, as an extension to that, with the environment around us. Enter Ambient Light Events.

    The idea with an API for ambient light is to be able to detect the light level around the device – especially since there’s a vast difference between being outside in sunlight and sitting in a dim living room – and adapt the user experience based on that.

    One use case could be to change the CSS file/values for the page, offering a nicer reading experience under low light conditions, reducing the strong white of a background, and then something with more/better contrast for bright ambient light. Another could be to play certain music depending on the light available.

    Accessing device light

    Working with ambient light is quite simple. What you need to do is apply a listener for a devicelight event, and then read out the brightness value.

    The value is returned in the lux unit. Lux values range between low and high extremes, but a good point of reference is that dim conditions are under 30 lux, whereas really bright ones are 10,000 and over.

    window.addEventListener("devicelight", function (event) {
        // Read out the lux value
        var lux = event.value;
        console.log(lux);
    });
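
    As a sketch of the theme-switching use case mentioned earlier, the listener below toggles an assumed dim-theme CSS class at a threshold of 30 lux; both the class name and the threshold are illustrative choices, not part of the API:

    window.addEventListener("devicelight", function (event) {
        // Below roughly 30 lux, treat the surroundings as dim.
        if (event.value < 30) {
            document.body.classList.add("dim-theme");
        } else {
            document.body.classList.remove("dim-theme");
        }
    });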

    Web browser support

    Ambient Light Events are currently supported in Firefox on Android, meaning both mobile phones and tablets, and they are also supported in Firefox OS. On the Android devices I’ve tested, the sensor is located just to the right of the camera facing the user.

    It is also a W3C Working Draft, following the pattern of other similar events such as devicemotion, so we hope to see more implementations of this soon!

    Demo

    Dmitry Dragilev and Tim Wright recently wrote a blog post about the Ambient Light API, including a nice demo video.

    You can also access the demo example directly, and if you test in low light conditions, you’ll get a little music. Remember to try it out on a supported device/web browser.