Video Articles

  1. Firefox Development Highlights – H.264 & MP3 support on Windows, scoped stylesheets + more

    Time for this year’s first look at the latest developments in Firefox. This is part of our Bleeding Edge and Firefox Development Highlights series; most examples only work in Firefox Nightly (and could be subject to change).

    H.264 & MP3 support on Windows

    Firefox for Android and Firefox OS already support H.264 and MP3. We are also working on bringing these formats to Firefox Desktop. On Windows 7 and above, you can already test this by turning on the preference in about:config. Decoding is handled by the operating system (no decoder is included in the Firefox source code, unlike WebM or Ogg Theora). For Linux and Mac, work is in progress.

    The new Downloads panel has been enabled

    We have now enabled the new Downloads panel.

    Scoped style attribute

    It’s now possible to define scoped style elements. Usually, when we write a stylesheet, we use <style>...</style>, and the CSS code applies to the whole document. If the <style> tag is nested inside a node (say, a <div>) and includes the scoped attribute (<style scoped>), then the CSS code applies only to the subtree of the document rooted at the parent node of the <style> element. The root of the subtree can also be referenced via the :scope pseudo-class.
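Support for the attribute is easy to feature-detect from script. A minimal sketch (the `supportsScopedStyle` helper is illustrative, not part of any library):

```javascript
// Feature-detect <style scoped>: engines that support it reflect the
// "scoped" content attribute as a property on the style element.
function supportsScopedStyle(doc) {
  return "scoped" in doc.createElement("style");
}

// In a browser: supportsScopedStyle(document)
```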


    Scoped style demo on JS Bin.

    Our friends over at HTML5Rocks have also written about it in A New Experimental Feature: scoped stylesheets.

    @supports and CSS.supports

    In Firefox 17, we shipped the @supports CSS at-rule. This lets you define specific CSS code only if some features are supported. For example:

    @supports not (display: flex) {
      /* If flex box model is not supported, we use a different layout */
      #main {
          width: 90%;
      }
    }

    In Firefox 20, it’s now possible to do the same thing, but within JavaScript:

    if (CSS.supports("display", "flex")) {
      // do something relying on flexbox
    }
  2. Linklib – lets film lovers and filmmakers send time synced links from videos to phones – a WebFWD project

    Have you ever googled films and TV shows while you watch them? Do you think YouTube’s popup annotations in the middle of a video are distracting?

    Having shot a documentary about the Pirate Bay over the course of four years, I wanted to embed links directly into the film to give the audience more nuance in the complex and bizarre story of the Swedish file-sharing site. I constantly google the videos I watch.

    What problem is Linklib solving?

    Linklib lets filmmakers, film fans, journalists and bloggers send time-synced links from a full-screen video directly to their audiences’ phones. Instead of googling an actor, fact-checking an election video or feverishly trying to find the soundtrack to that TV series, just pick up the phone and the information is right there.

    (video available in the Linklib web site)

    When I stumbled upon popcorn.js in 2011, I realized the amazing potential of embedding time-synced links into films, but I was still looking for a way to hide the hyperlinks from the fullscreen video.

    So we built Linklib, a system that sends time-synced links from streamed video to phones. That way you can read up on that actress, fact-check that election video or follow that rapper on Twitter while you’re watching the video, without obtrusive annotations in the frame.

    Linklib is a free and open library of time-synced video commentary. Film directors, journalists and fan communities can add facts and comments to give films more depth and nuance. The system works just as well for feature films, documentaries, music videos, educational and commercial films. Linklib is an open source project that wants to tell better stories by using the open web.

    How it works

    The basic components of Linklib’s system are a remote, usually a smartphone or a tablet, and a video viewer, typically a computer. The remote shows synchronized links from the video viewer and can send and receive events such as play, pause, forward and rewind. To sync the phone with the video, we show a QR code that you can scan with your phone. At the moment our video viewer can show YouTube, Vimeo and HTML5 video streams.

    A lobby server handles the communication between the remote and the video viewer. The remote communicates with the lobby server and is written in JavaScript.

    For users to be able to add videos and create link feeds of their own, we’ve built a web-based authoring tool focused on simplicity. The tool is built using Twitter Bootstrap and jQuery. All links and account information are stored in a MySQL database on Amazon AWS.


    • Remote – shows synchronized links from the Video Viewer and can send and receive events such as play, pause, forward and rewind.
    • Video Viewer – shows a YouTube or Vimeo stream.
    • Lobby server – handles communication between Remote and Video Viewer
    • Authoring Tool – Web based system that allows users to add videos and create link flows for them
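The relay between these components can be sketched as a toy in-memory lobby (the `Lobby` object and its method names are illustrative, not Linklib’s actual API):

```javascript
// Toy in-memory lobby: pairs a video viewer with a remote and relays
// playback events (play, pause, seek) and time-synced links between them.
function Lobby() {
  this.sessions = {}; // session id (e.g. from the QR code) -> { viewer, remote }
}

// Register both ends under the id encoded in the QR code.
Lobby.prototype.pair = function (id, viewer, remote) {
  this.sessions[id] = { viewer: viewer, remote: remote };
};

// Relay a message from one end of the session to the other.
Lobby.prototype.send = function (id, from, message) {
  var s = this.sessions[id];
  if (!s) return;
  var target = from === "remote" ? s.viewer : s.remote;
  target(message);
};
```

In the real system each end would hold a socket connection to the lobby server rather than a plain callback, but the pairing-and-relay shape is the same.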

    APIs and Libraries

    • Remote is written in JavaScript and communicates with the Lobby server
    • Lobby server is based on node.js
    • Video viewer uses popcorn.js to play videos
    • All links and account information are stored in a MySQL database on Amazon AWS
    • Authoring Tool uses Twitter Bootstrap and jQuery

    Help us test out Linklib

    Are you a filmmaker looking for a way to build closer bonds with your audience? Or a Game of Thrones fan looking for a way to fill your favorite episode with geeky references? Maybe you want to throw in a bunch of research that you couldn’t explain in your last conference video? Or are you a fashion designer who wants to reveal the details of your catwalk? If you’re a film producer, you could fill your YouTube trailer with your characters’ social media and reviews, like the great documentary Indie Game: The Movie below.

    You can also add mashups and remixes to that banging new Outkast music video! And help activists worldwide spread information that doesn’t make the mainstream news! Linklib puts the web into videos without ruining the traditional viewing experience.

    We just launched a beta and we’d love your feedback about bugs and features!

  3. Firefox Development Highlights: video.playbackRate and download attribute

    Here are the latest developer-facing features in Firefox, as part of our Bleeding Edge series. Most examples only work in Firefox Nightly (and could be subject to change).

    <video>: support for custom playbackRate

    Setting video.playbackRate changes the “video speed”: 1.0 is regular speed, 2.0 is twice as fast. From the MDN documentation on HTMLMediaElement:

    The default playback rate for the media. 1.0 is “normal speed,” values lower than 1.0 make the media play slower than normal, higher values make it play faster.


    <video src="v.webm" id="v" controls autoplay></video>
    <button onclick="fastForward()">fast forward</button>
    <script>
      function fastForward() {
        v.playbackRate = 2;
      }
    </script>

    Interactive demo:

    video playbackRate demo

    <a> “download” attribute

    From the Downloading resources section of the HTML Living Standard:

    In some cases, resources are intended for later use rather than immediate viewing. To indicate that a resource is intended to be downloaded for use later, rather than immediately used, the download attribute can be specified on the a or area element that creates the hyperlink to that resource.

    This attribute is particularly useful with blobs. With blobs, you can create files in JavaScript; a binary blob can be, for example, an image built in a canvas element. By linking a blob to an <a> element (with a URL constructor) and marking that <a> element as downloadable with this new attribute, the user can save the blob as a file on their hard drive.

    Example from Tom Schuster’s blog post about his work on the HTML5 download attribute:

    var blob = new Blob(["Hello World"]);
    var a = document.createElement("a");
    a.href = window.URL.createObjectURL(blob);
    a.download = "hello-world.txt";
    a.textContent = "Download Hello World!";

    It has also been covered on HTML5Rocks in Downloading resources in HTML5.

  4. H.264 video in Firefox for Android

    Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video on Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders. The challenges posed by H.264 patents and royalties have been documented elsewhere.

    Supported devices

    Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release (Bug 787228).

    To test whether Firefox supports H.264 on your device, try playing this “Big Buck Bunny” video.
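You can also probe support from script with the standard `canPlayType` method. A sketch (the codec string is a common H.264 Baseline plus AAC pair; your video may use a different profile):

```javascript
// canPlayType returns "probably", "maybe", or "" (empty string = unsupported).
function canPlayH264(video) {
  return video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== "";
}

// In a browser: canPlayH264(document.createElement("video"))
```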

    Testing H.264

    If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.

    If Firefox does not recognize your hardware decoder, it will use a safer (but slower) software decoder. Daring users can manually enable hardware decoding. Enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems (Bug 797225). The most likely problems you will encounter are videos with green lines or crashes.

    Giving feedback/reporting bugs

    If you find any video bugs, please file a bug report here so we can fix them! Please include your device model, Android OS version, the URL of the video, and any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.

  5. Popcorn Maker 1.0 released – how it works

    This week Mozilla is in London at the Mozilla Festival 2012. At last year’s Festival, we released Popcorn.js 1.0, and with it a way for filmmakers, journalists, artists, and bloggers to integrate audio and video into web experiences. Popcorn has since become one of the most popular ways to build time-based media experiences for the web. It has proven to be uniquely powerful for bespoke web demos, films, visualizations, etc. This year, we’ve come to the Mozilla Festival with an even bigger 1.0 release: Popcorn Maker 1.0.

    Popcorn.js and Popcorn Maker

    With Popcorn.js, and its plugin ecosystem, we created a tool for developers. Popcorn lets developers work dynamically using timing information from web media, using a simple API and plugin system. With Popcorn Maker we wanted to go further and give this same power to bloggers, educators, remixers, and artists — web makers.

    At its core, Popcorn Maker is an HTML5 web app for combining web media with images, text, maps, and other dynamic web content. Its appearance is unique, but not unfamiliar, providing a timeline-based video editing experience for the web. Once created, Popcorn Maker hosts remixes as simple HTML pages in the cloud, which can be shared or embedded in blogs or other sites. Furthermore, every remix provides a “Remix” button, allowing anyone watching to become a creator themselves by using the current remix as a base project for their own creation. This “view source” experience for web media is a key aspect of Popcorn Maker’s goals as part of the larger Mozilla Webmaker project.

    On a technical level, Popcorn Maker is the combination of a number of separate projects, each heavily influenced by the “12 factor” philosophy: a node.js-based web service called Cornfield; a JavaScript framework for creating highly-interactive web apps; and Popcorn.js, with a set of custom Popcorn.js plugins.

    Coming from Butter

    When we first started we thought we’d create a simple library named Butter, and that Popcorn Maker would be injectable — something you would add to existing web pages, like a toolbar. Many of the original ideas for Popcorn Maker were taken from an early prototype created by Rick Waldron, Boaz Sender, and Al MacDonald of Bocoup. Over time these ideas evolved, Seneca College and CDOT got involved — bringing their extensive experience with web media and Popcorn.js — and more prototypes ensued. Soon it became clear that what was really needed was a full web app: an infrastructure including client, server, and a community.

    This realization occurred as the Popcorn Maker team continued to grow. More developers joined our team, some of whom brought much-needed design experience with them. Ocupop was consulted to enhance the design of the project, and gave us the same expertise that was used to produce the branding for HTML5.

    Using LESS for CSS

    Thanks to talented designer-hacker hybrids like Kate Hudson, Popcorn Maker’s design became modern and responsive. We worked with her to integrate LESS — a dynamic stylesheet language which compiles to CSS. LESS made challenges like cross-browser compatibility — something key to Popcorn Maker’s goals — less painful. It also allowed much more flexibility for a rapid prototype approach to our UI. Consequently, a significant amount of our CSS is constructed in functions and stored in variables, both of which are included in files throughout our style implementation. We can make changes that let us try new approaches very quickly, touching and creating as little boilerplate CSS as possible.
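A small illustration of the pattern (the variable and mixin names here are made up for illustration, not Popcorn Maker’s actual styles):

```less
// A variable and a mixin reused across files; LESS compiles this to plain CSS,
// emitting the vendor-prefixed properties needed for cross-browser support.
@accent: #e44;

.rounded(@radius: 4px) {
  -moz-border-radius: @radius;
  -webkit-border-radius: @radius;
  border-radius: @radius;
}

.timeline-track {
  border: 1px solid @accent;
  .rounded(2px);
}
```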

    Popcorn Media Wrappers

    Of course, in order to complete the features we desired, we needed to extend Popcorn.js with new features. One such feature is Popcorn Media Wrappers, which wrap non-HTML5 media with an interface and set of behaviours that allow them to be used exactly like HTML5 video and audio elements. By building wrappers for YouTube, Vimeo, and SoundCloud, we were able to standardize all our media code on the HTML5 media interface. More wrappers are in production, allowing more types of media to be included in future releases, and the current wrappers will ship as part of Popcorn.js 1.4 later this year.
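The wrapper idea can be sketched as follows (the `MediaWrapper` shape is illustrative; Popcorn’s real wrappers cover far more of the HTMLMediaElement interface):

```javascript
// Wrap a non-HTML5 player (here, one with YouTube-style playVideo/pauseVideo/
// seekTo/getCurrentTime methods) behind a subset of the HTMLMediaElement
// interface: play/pause, currentTime, and event listeners.
function MediaWrapper(player) {
  this._player = player;
  this._listeners = {};
  this.paused = true;
}

MediaWrapper.prototype.addEventListener = function (type, fn) {
  (this._listeners[type] = this._listeners[type] || []).push(fn);
};

MediaWrapper.prototype._emit = function (type) {
  (this._listeners[type] || []).forEach(function (fn) { fn(); });
};

MediaWrapper.prototype.play = function () {
  this.paused = false;
  this._player.playVideo();
  this._emit("play");
};

MediaWrapper.prototype.pause = function () {
  this.paused = true;
  this._player.pauseVideo();
  this._emit("pause");
};

Object.defineProperty(MediaWrapper.prototype, "currentTime", {
  get: function () { return this._player.getCurrentTime(); },
  set: function (t) { this._player.seekTo(t); this._emit("seeked"); }
});
```

Because the wrapper exposes the same surface as a media element, code written against HTML5 video — Popcorn.js included — can drive a YouTube, Vimeo, or SoundCloud player without knowing the difference.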

    Evolving into a modern tool

    Eventually, Butter and its constituents allowed us to present a full-featured, responsive, end-user tool using nothing more than HTML, CSS, and JavaScript. Currently, Butter uses a modular development environment provided by require.js, and provides a UI toolkit with widgets that were used to build the timeline, scrubber, editors, dialogs and more. In cooperation with require.js’ text plugin, Document Fragments allowed us to build all of our UI in static HTML, and to load and render it dynamically at runtime. This approach lets us treat our HTML as it was meant to be treated (i.e., as layout), but also include it in our source tree to be referenced and linted like anything else.

    While building Butter was a large part of the execution of our app, it comprises only a portion of Popcorn Maker. Cornfield, Popcorn Maker’s server, allows users to save and publish their content, completing the Popcorn universe. It uses node.js, Express, Jade templates, and Amazon S3 to provide a RESTful API and hosting environment for Butter. To avoid the thorny issue of user authentication and credential management, Cornfield leverages Mozilla Persona. During development, our use of Persona exposed the need for the express-persona node module, which we built to make Persona easier to work with in Express.

    Based on community input

    The Popcorn Maker team owes a great deal of gratitude to its community and users. Their feedback heavily influenced the direction of Popcorn Maker. The most successful and compelling Popcorn.js demos we saw used the video as a canvas, on which to place pieces of the web dynamically. Further, we found that our users wanted to embed their projects — not just host them. These insights were instrumental in moving our focus to creating embeddable content, rather than full-page content.

    Gathering feedback from our users also influenced our development. In Popcorn Maker we’ve experimented with user feedback collection and JavaScript crash reporting, both built directly into the web app. The app’s Feedback button lets users submit general feedback, which is then stored as JSON in S3. Through such feedback we’ve been able to get valuable information about issues users have encountered and ideas for new features.

    JavaScript crash reports

    Even more significant has been our experiment with JavaScript crash reports. This technique is used to great effect in Mozilla’s native development, for products like Firefox and its crash stats. However, it is not often applied to JavaScript web apps. We use window.onerror to detect top-level exceptions being thrown in our code, and present the user with a dialog that allows them to submit an anonymous report, optionally with details about what they encountered. We log various types of edge cases (e.g., null node access in the DOM, by wrapping DOM methods) and send this data with the report. This has been an incredible source of data for our team. Since our 0.9 release for the TED talks, we’ve fixed more than a dozen crash bugs, almost all of which our developers were never able to hit locally.
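A sketch of the approach (the report shape and the `installCrashReporter`/`submitReport` names are illustrative, not Popcorn Maker’s actual payload):

```javascript
// Catch top-level exceptions via window.onerror and build an anonymous
// crash report to hand off to a submission callback.
function installCrashReporter(global, submitReport) {
  global.onerror = function (message, url, line) {
    submitReport({
      message: message,
      url: url,
      line: line,
      userAgent: global.navigator ? global.navigator.userAgent : "unknown"
    });
    return false; // let the browser's default error handling run too
  };
}

// In a browser: installCrashReporter(window, function (report) { /* POST it */ });
```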

    Influenced by other Mozilla projects

    The development of Popcorn Maker was also heavily influenced by processes used on other Mozilla projects, in particular an emphasis on automation, tooling, and open source best practices. We accelerated our development by linting all of our code, from JavaScript (jshint), to CSS (csslint), to HTML, for which we created a client-server solution based on the HTML5 parser. We use many automation systems, from botio, to TravisCI, to Jenkins, to run our tests and help us trust the rapid pace of code changes. Most importantly, we also employed a two-stage code review process, similar to Firefox’s Peer-Review and Super-Review. By having every patch reviewed twice, we not only reduced regressions, but also helped grow our developer team by ensuring that multiple people understood every change.

    Popcorn Maker has already touched a large community and is the result of the hard work and collective influence of a growing team of dedicated developers, thinkers, and increasingly active community members. We’re incredibly proud to be releasing a year’s worth of work today with the 1.0 launch. The only question left is what you’ll make with it!

    More information on the code and development process

    For more info about the code and development team/process:

    • Bug Tracker

  6. Report from San Francisco Gigabit Hack Days with US Ignite

    This past weekend, the Internet Archive played host to a crew of futurist hackers for the San Francisco Gigabit Hack Days.

    The two-day event, organized by Mozilla and the City of San Francisco, was a space for hackers and civic innovators to do some experiments around the potential of community fiber and gigabit networks.


    The event kicked off Saturday morning with words from Ben Moskowitz and Will Barkis of Mozilla, followed by the Archive’s founder Brewster Kahle. Brewster talked a bit about how the Archive was imagined and built as a “temple to the Internet.”

    San Francisco’s Chief Innovation Officer, Jay Nath, talked about the growing practice of hackdays in the city and the untapped potential of the City’s community broadband network. The SF community broadband network is a 1Gbps network that provides Internet access for a wide range of community sites within San Francisco—including public housing sites, public libraries, city buildings and more.

    These partners are eager to engage with developers to use the network as a testbed for high bandwidth applications, so we quickly broke off to brainstorm possible hacks.

    Among the proposals: an app to aggregate and analyze campaign ads from battleground states; apps to distribute crisis response; local community archives; 3D teleconferencing for education and medicine; “macro-visualization” of video, and fast repositories of 3D content. Read on for more details.


    Ralf Muehlen, network engineer, community broadband instigator, and all-around handyman at the Archive prepared for the event in a few very cool ways—unspooling many meters of gigabit ethernet cable for hackers, and provisioning a special “10Gbps desktop.”

    The 10Gbps desktop in action


    The 10Gbps desktop was a server rack with an industrial-strength network card, connected to raw fiber and running Ubuntu. While not a very sensible test machine, the 10Gbps desktop was an awesome way to stress the limits of networks, hardware, and software clients. Video hackers Kate Hudson, Michael Dale, and Jan Gerber created a video wall experiment to simultaneously load 100 videos from the Internet Archive, weighing in at about 5Mbps each. One hundred streams at 5Mbps is only about 500Mbps, well within a 10Gbps link, so on this machine, unsurprisingly, the main bottleneck was the graphics card. Casual testing revealed that Firefox does a pretty good job of caching and loading tons of media where other browsers choked or crashed, though its codec support is not as broad, which makes these kinds of experiments difficult.


    Here are some of the results of the event:

    Macro-visualization of video

    Kate Hudson, Michael Dale, and Jan Gerber created an app that queues the most popular stories on Google News and generates a video wall.

    The wall is created by searching the Archive’s video collection by captions and finding matches. Imagined as a way of analyzing different types of coverage around the same issue, the app has a nice bonus feature: real-time webcam chat, in browser, using WebRTC. If two users are hovered over the same video, they’re dropped into an instant video chat room, ChatRoulette-style.

    The demo uses some special capabilities of the Archive and can’t be tested at home just yet, but we’re looking to get the code online as soon as possible.

    Scalable 3D content delivery

    As Jeff Terrace writes in his post-event blog: “3D models can be quite big. Games usually ship a big DVD full of content or make you download several gigabytes worth of content before you can start playing… [by] contrast, putting 3D applications on the web demands low-latency start times.”

    Jeff and Henrik Bennetsen, who work on federated 3D repositories, wanted to showcase the types of applications that can be built with online 3D repositories and fast connections. So they hacked an “import” button into ThreeFab, the three.js scene editor.

    Using Jeff’s hack, users can load models asynchronously in the background directly from repositories like Open3DHub (CORS headers are required for security reasons). The models are seamlessly loaded from across the web and added into the current scene.
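In browser terms, “CORS headers are required” means the repository’s response must carry an Access-Control-Allow-Origin header matching the requesting page, or the browser blocks the cross-origin read. A toy version of that check (illustrative only; the browser performs it internally):

```javascript
// Decide whether a cross-origin response is readable by a page at `origin`,
// based on the Access-Control-Allow-Origin response header.
function corsAllows(headers, origin) {
  var allow = headers["access-control-allow-origin"];
  return allow === "*" || allow === origin;
}
```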

    This made for an awesome and thought-provoking line of inquiry—what kind of apps and economies can we imagine using 3D modeling, manipulation, and printing across fast networks? Can 3D applications be as distributed as typical web applications tend to be?

    Bonus: as a result of the weekend, the Internet Archive is working on enabling CORS headers for its own content, so hopefully we will be able to load 3D/WebGL content directly from the Archive soon.

    3D videoconferencing using point cloud streams

    XB PointStream loads data from Radiohead's House of Cards music video

    Andor Salga, author of an excellent JS library called XB PointStream, wanted to see if fast networks could enable 3D videoconferencing.

    Point clouds are 3D objects represented through volumetric points rather than mesh polygons. They’re interesting to graphics professionals for a number of reasons — for one, they can have very high resolution and appear very lifelike.

    Interestingly, sensor arrays like the low-cost Microsoft Kinect can be used to generate point cloud meshes on the cheap, by taking stereoscopic “depth images” along with infrared. (It may sound far out, but it’s the basis for a whole new wave of motion-controlled videogames).

    Using Kinect sensors and WebGL, it should be possible to create a 3D videoconferencing system on the cheap. Users on either end would be able to pan around a 3D model of a person they’re connected to, almost like a hologram.

    This type of 3D video conferencing would be able to communicate depth information in a way that traditional video calls can’t. Additionally, these kinds of meetings could be recorded, then played back with camera interaction, allowing users to get different perspectives of the meetings. Just imagine the applications in the health and education sectors.

    For this hack, Andor and a few others wanted to prototype a virtual classroom that would — for instance — enable scientists at San Francisco’s Exploratorium to teach kids at community sites connected to San Francisco’s community broadband network.

    After looking at a few different ways of connecting the Kinect to the browser, it appeared that startup Zigfu offers the best available option: a browser plugin that provides an API to Kinect hardware. San Francisco native Amir Hirsch, founder of Zigfu, caught word of the event and came by to help. The plan was to use WebSockets to sync the data between two users of this theoretical system. The team didn’t get a chance to complete the prototype by the end of the weekend, but will keep hacking.

    Point clouds are typically very large data sets. Especially if they are dynamic, a huge amount of data must transfer from one system to another very quickly. Without very fast networks, this kind of application would not be possible.

    Other hacks

    Overall, this was a fantastic event for enabling the Internet Archive to become a neighborhood cloud for San Francisco, experimenting on the sharp edges of the internet, and building community in SF. A real highlight was to see 16-year-old Kevin Gil from BAVC’s Open Source track lead a group of teenaged hackers in creating an all-new campaign ad uploader interface for the Archive—quite impressive for any weekend hack, let alone one by a team of young webmakers.

    From left: Ralf Muehlen, Tim Pozar, and Mike McCarthy, the minds behind San Francisco's community broadband network

    Thanks to everyone for spending time with us on a beautiful SF weekend, and see you next time!

    Get Involved

    If you’re interested in the future of web applications, fast networks, and the Internet generally, check out Mozilla Ignite.

    Now through the end of summer, you can submit ideas for “apps from the future” that use edge web technologies and fast networks. The best ideas will earn awards from a $15k prize pool.

    Starting in September, you can apply to the Mozilla Ignite apps challenge, with $485,000 in prizes for apps that demonstrate the potential of fast networks.

    Check out the site, follow us @mozillaignite, and let us know where you see the web going in the next 10 years!

  7. FoxIE – a video training series on web standards with Microsoft

    About a month ago Chris Heilmann of Mozilla went to Miami, Florida to shoot a series of training videos with Rey Bango of Microsoft.

    Now these videos are available on Microsoft’s Channel9 as the FoxIE training series.

    In a very off-the-cuff discussion format between Rey and Chris, we covered a number of topics.

    If you want to present the Mozilla talks yourself, we uploaded the slides in various formats and the code examples, with lots of explanations on how to present them, to GitHub.

    All the videos are available in HTML5 format and to download as MP4 and WMV for offline viewing.

  8. getUserMedia is ready to roll!

    We blogged about some of our WebRTC efforts back in April. Today we have an exciting update for you on that front: getUserMedia has landed on mozilla-central! This means you will be able to use the API on the latest Nightly versions of Firefox, and it will eventually make its way to a release build.

    getUserMedia is a DOM API that allows web pages to obtain video and audio input, for instance, from a webcam or microphone. We hope this will open the possibility of building a whole new class of web pages and applications. This DOM API is one component of the WebRTC project, which also includes APIs for peer-to-peer communication channels that will enable exchange of video streams, audio streams and arbitrary data.

    We’re still working on the PeerConnection API, but getUserMedia is a great first step in the progression towards full WebRTC support in Firefox! We’ve certainly come a long way since the first image from a webcam appeared on a web page via a DOM API. (Not to mention audio recording support in Jetpack before that.)

    We’ve implemented a prefixed version of the “Media Capture and Streams” standard being developed at the W3C. Not all portions of the specification have been implemented yet; most notably, we do not support the Constraints API (which allows the caller to request certain types of audio and video based on various parameters).

    We have also implemented a Mozilla-specific extension to the API: the first argument to mozGetUserMedia is a dictionary that will also accept the property {picture: true} in addition to {video: true} or {audio: true}. The picture API is an experiment to see if there is interest in a dedicated mechanism to obtain a single picture from the user’s camera, without having to set up a video stream. This could be useful in a profile picture upload page, or a photo sharing application, for example.

    Without further ado, let’s start with a simple example! Make sure to create a pref named “media.navigator.enabled” and set it to true via about:config first. We’ve put the pref in place because we haven’t implemented a permissions model or any UI for prompting the user to authorize access to the camera or microphone. This release of the API is aimed at developers, and we’ll enable the pref by default after we have a permission model and UI that we’re happy with.
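A minimal sketch of such a page, assuming the prefixed Nightly API described above (the exact attachment mechanism, here a `mozSrcObject` assignment, is experimental and may change):

```javascript
// Request a camera stream with the prefixed API and attach it to a <video>.
// {audio: true} or the Mozilla-specific {picture: true} work the same way.
function startCamera(nav, videoElement, onError) {
  if (!nav.mozGetUserMedia) {
    onError(new Error("getUserMedia is not available"));
    return;
  }
  nav.mozGetUserMedia({ video: true }, function (stream) {
    videoElement.mozSrcObject = stream; // prefixed stream attachment
    videoElement.play();
  }, onError);
}

// In a browser: startCamera(navigator, document.querySelector("video"),
//                           function (err) { console.log(err); });
```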


    There’s also a demo page where you can test the audio, video and picture capabilities of the API. Give it a whirl, and let us know what you think! We’re especially interested in feedback from the web developer community about the API and whether it will meet your use cases. You can leave comments on this post, or on the dev-media mailing list or newsgroup.

    We encourage you to get involved with the project – there’s a lot of information about our ongoing efforts on the project wiki page. Posting on the mailing list with your questions, comments and suggestions is a great way to get started. We also hang out in the #media IRC channel; feel free to drop in for an informal chat.

    Happy hacking!

  9. WebRTC efforts underway at Mozilla!

    Last week, a small team from Mozilla attended IETF 83 in Paris, and we showed an early demo of a simple video call between two BrowserID-authenticated parties in a special build of Firefox with WebRTC support. It is still very early days for WebRTC integration in Firefox, but we’re really excited to show you something that works!

    At Mozilla Labs, we’ve been experimenting with integrating social features in the browser, and it seemed like a cool idea to combine this with WebRTC to establish a video call between two users who are signed in using BrowserID (now called Persona). The SocialAPI add-on, once installed, provides a sidebar where web content from the social service provider is rendered. In our demo social service, we show a “buddy list” of people who are currently signed in using Persona.

    The video chat page that is served when the user initiates a video chat uses a custom API intended to simulate the getUserMedia and PeerConnection APIs currently being standardized at the W3C. A <canvas> is used to render both the remote and local videos, though it is also possible to render them in a <video>. We’re working very quickly to implement the standard APIs, and you can follow our progress on the tracker bug.

    A lot of folks burned the midnight oil to get this demo ready before the IETF event, and special thanks are due to Eric Rescorla, Michael Hanson, Suhas Nandakumar, Enda Mannion, Ethan Hugg, the folks behind Spacegoo, and Randell Jesup, in addition to the whole media team here at Mozilla.

    Current development is being done on a branch of mozilla-central called alder. It is going to be an exciting few months ahead as we work towards bringing WebRTC to Firefox. There is a lot of work to do, and if you are interested in contributing, please reach out! Maire Reavy, our product person and project lead for WebRTC, would be happy to help you find ways to contribute. Many of us are also usually available on IRC in #media, and we have a mailing list.

    Transcript of screencast:

    Hi, I’m Anant from Mozilla Labs and I’m here at IETF where we are demonstrating a simple video call between two BrowserID-authenticated parties, using the new WebRTC APIs that we are working on.

    This is a special build of Firefox with WebRTC support, which also has the experimental SocialAPI add-on from Mozilla Labs installed. On the right-hand side you can see web content served by our demo social service, to which I will sign in with BrowserID. Once I’m signed in, I can see all my online friends in the sidebar. I see my friend Enda is currently online, so I’m going to click the video chat button to initiate a call.

    Here, I see a very early prototype of a video call window served up by our demo social service. Now, I can click the Start Call button to let Enda know that I want to speak with him. Once he accepts the call, a video stream is established between the two parties as you can see. So, that was a video call built entirely using JavaScript and HTML!

    You can check out the source code for this demo, as well as learn how to contribute to the ongoing WebRTC efforts at Mozilla in this blog post. Thanks for watching!