Video Articles

  1. Firefox OS for developers – the platform HTML5 deserves

    Over the next few weeks we will publish a series of short videos here that explain what Firefox OS means for developers and how you can be part of the revolution it brings to the world.

    In various conversations we’ve repeatedly heard from developers that they view Firefox OS as simply a third player in the mobile space next to Android and iOS. This is not exactly what we are trying to achieve. Firefox OS is unique in its approach as it doesn’t target the high-end markets already saturated by other platforms. Instead, its goal is to replace feature phones in markets that have no access to high-end smartphones or aren’t even considered by closed application mechanisms like marketplaces. Furthermore, we wanted to create the first truly open and web-standards-based ecosystem and mobile operating system – nothing in Firefox OS is closed or alien to the Web.

    This is why we recorded this video series: to explain in a short and simple fashion where you can go to get started as a developer for Firefox OS, and how this means writing HTML5 without the limitations other platforms place on web-standards-based applications.

    Today we start with a short video featuring Chris Heilmann (@codepo8) from Mozilla and Daniel Appelquist (@torgo) from Telefónica Digital/ W3C talking about the goals of Firefox OS. You can watch the video here.

    Resources mentioned in the video are:

    We hope you enjoy this and that it answers a few of the questions you had about Firefox OS. Watch this space for more videos in this series.

  2. open video codecs and quality

    This is a re-post (with permission) of a post that Greg Maxwell wrote in response to a comment by Chris DiBona from Google on a whatwg mailing list. The codecs being discussed are the same ones we’ll be including in Firefox 3.5 and are also the same codecs that Mozilla, Wikipedia and others have been investing in.

    Recent developer nightlies of Google Chrome support these codecs and a future version of Opera will also support them. Theora and Vorbis also work in Safari if you install the Xiph Qt component. We’re quickly reaching the point where all modern browsers support these open codecs with full support for the video tag.

    You’ll note that Greg’s post doesn’t have the tone of a marketing document – it’s not meant to. Nor is this a comparison against HD-sized, high-bitrate video. Instead it’s an attempt to give an honest comparison of how the open codecs fare against commonly-used formats and sizes used on the world’s largest video site. I think you’ll agree with Greg’s conclusions at the bottom of the document, especially with audio where Vorbis really shines.

    Greg’s post follows.

    Purpose

    On Jun 13th 2009 Chris DiBona of Google made a remarkable claim on the WhatWG mailing list:

    “If [youtube] were to switch to theora and maintain even a semblance of the current youtube quality it would take up most available bandwidth across the Internet.”

    Unfortunately, open video formats have been subjected to FUD so frequently that people are willing to believe bold claims like these without demanding substantiation.

    In this comparison I will demonstrate that this claim was unfair and unreasonable. Using a simple test case, I show that Theora is competitive with, and even superior to, some of the files that Google is distributing today on YouTube.

    Theora isn’t the most efficient video codec available right now. But it is by no means bad, and it is substantially better than many other widely used options. By conventional criteria Theora is competitive. It also has the substantial advantage of being unencumbered, reasonable in computational complexity, and entirely open source. People are often confused by the correct observation that Theora doesn’t provide the state of the art in bitrate vs quality, and take that to mean that Theora does poorly when in reality it does quite well. Also, the Theora encoder has improved a lot lately so some older problems no longer apply.

    While different files may produce different results, the allegation made on WhatWG was so expansive that I believe a simple comparison can reliably demonstrate its falsehood.

    I do not believe Chris intended to deceive anyone, only that he is a victim of the same outdated and/or simply inaccurate information that has fooled many others. Automotive enthusiasts may make a big deal about a 5 horsepower difference between two cars, but these kinds of raw performance differences are not relevant to most car buyers nor are they even the most important criteria to people who race. Likewise, videophiles nitpick the quality of compression formats and this nitpicking is important for the advancement of the art. But I believe that people are mistaking these kinds of small differences for something which is relevant to their own codec selection.

    Results

    A 499kbit/sec H.264+AAC output and a 327kbit/sec H.263(Sorensen Spark)+MP3 output were available via the download service. The YouTube-encoded files are available on the YouTube site. Because the files on YouTube may change and the web player does not disclose the underlying bitrate, I have made the two encoded files available.

    ~499kbit/sec comparison

    • YouTube: Download (H.264+AAC; 17MiB)
    • Ogg/Theora+Vorbis: Download / Watch (17MiB)

    ~327kbit/sec comparison

    • YouTube: Download (H.263+MP3; 12MiB)
    • Ogg/Theora+Vorbis: Download / Watch (12MiB)

    • A slightly lower bitrate was used for the Theora+Vorbis test cases to avoid any question of quality improvement resulting from larger outputs.
    • For a fair comparison you must compare the audio as well. Even without audio differences, still image comparisons are a poor proxy for video quality.
    • I provided the random frame still image comparison only because I expect that people will not bother watching the examples without evidence that the results are interesting.

    Methodology

    In order to avoid any possible bias in the selection of H.264 encoders and encoding options, and to maximize the relevance for this particular issue, I’ve used YouTube itself as the H.264 encoder. This is less than ideal because YouTube does not accept lossless input, but it does accept arbitrarily high bitrate inputs.

    I utilized the Blender Foundation’s Big Buck Bunny as my test case because of its clear licensing status, because it’s a real world test case, and because I have it available in a lossless format. I am not aware of any reason why this particular clip would favor either Theora or H.264.

    I chose to use a test case with a soundtrack because most real usage has sound. No one implements HTML5 video without audio, and no one is implementing either of Theora or Vorbis without the other. Vorbis’s state-of-the-art performance is a contributor to the overall Ogg/Theora+Vorbis solution.

    • Obtain the lossless 640×360 Big Buck Bunny source PNGs and FLACs from media.xiph.org.
    • Resample the images to 480×270 using ImageMagick’s convert utility.
    • Using gstreamer’s jpegenc, produce a quality=100 MJPEG + PCM audio stream. The result is around 1.5Gbytes with a bitrate of around 20Mbit/sec.
    • Truncate the file to fit under the YouTube 1Gbyte limit, resulting in input_mjpeg.avi (706MiB).
    • Upload the file to YouTube and wait for it to transcode.
    • Download the FLV and H.264 files produced by YouTube using one of the many web downloading services. (I used keepvid.)
    • Using libtheora 1.1a2 and Vorbis aoTuV 5.7, produce a file of comparable bitrate to the YouTube 499kbit/sec file from the same file uploaded to YouTube (input_mjpeg.avi).
    • Resample the file uploaded to YouTube to 400×226.
    • Using libtheora 1.1a2 and Vorbis aoTuV 5.7, produce a file of comparable bitrate to the YouTube 327kbit/sec file from the 400×226 downsampled copy of input_mjpeg.avi.

    I later discovered that YouTube sometimes offers additional sizes. I tried the youtube-dl utility and it indicated that these other sizes were not available for my file. Otherwise I would have also included them in this comparison.

    A keyframe interval of 250 frames was used for the Theora encoding. The Theora 1.1a2 encoder used is available from theora.org. The Vorbis encoder used is available from the aoTuV website. No software modifications were performed.

    My conclusions

    It can be difficult to compare video at low bitrates, and even YouTube’s higher bitrate option is not high enough to achieve good quality. The primary challenge is that all files at these rates will have problems, so the reviewer is often forced to decide which of two entirely distinct flaws is worse. Sometimes people come to different conclusions.

    That said, I believe that the Theora+Vorbis results are substantially better than the YouTube 327kbit/sec. Several other people have expressed the same view to me, and I expect you’ll also reach the same conclusion. This is unsurprising since we’ve been telling people that Theora is better than H.263, especially at lower bitrates, for some time now and YouTube only uses a subset of H.263.

    The low bitrate case is also helped by Vorbis’ considerable superiority over MP3. For example, the crickets at the beginning are inaudible in the low rate YouTube clip but sound fine in the Ogg/Theora+Vorbis version.

    In the case of the 499kbit/sec H.264 I believe that under careful comparison many people would prefer the H.264 video. However, the difference is not especially great. I expect that most casual users would be unlikely to express a preference or complain about quality if one was substituted for another and I’ve had several people perform a casual comparison of the files and express indifference. Since Theora+Vorbis is providing such comparable results, I think I can confidently state that reports of the internet’s impending demise are greatly exaggerated.

    Of course, YouTube may be using an inferior processing chain, or encoding options which trade off quality for some other desirable characteristic (like better seeking granularity, encoding speed, or a specific rate control pattern). But even if they are, we can conclude that adopting an open, unencumbered format in addition to or instead of their current offerings would not cause problems on the basis of quality or bitrate.

    But please – see and hear for yourself.

  3. H.264 video in Firefox for Android


    Firefox for Android has expanded its HTML5 video capabilities to include H.264 video playback. Web developers have been using Adobe Flash to play H.264 video on Firefox for Android, but Adobe no longer supports Flash for Android. Mozilla needed a new solution, so Firefox now uses Android’s “Stagefright” library to access hardware video decoders. The challenges posed by H.264 patents and royalties have been documented elsewhere.

    Supported devices

    Firefox currently supports H.264 playback on any device running Android 4.1 (Jelly Bean) and any Samsung device running Android 4.0 (Ice Cream Sandwich). We have temporarily blocked non-Samsung devices running Ice Cream Sandwich until we can fix or work around some bugs. Support for Gingerbread and Honeycomb devices is planned for a later release (Bug 787228).

    To test whether Firefox supports H.264 on your device, try playing this “Big Buck Bunny” video.
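
    You can also feature-detect H.264 support from script using the standard canPlayType API. A minimal sketch (the codecs string here is just a common H.264 baseline example, not something specific to Firefox for Android):

    var v = document.createElement("video");
    // canPlayType returns "", "maybe" or "probably"
    if (v.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
      console.log("H.264 playback may be supported on this device");
    }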

    Testing H.264

    If your device is not supported yet, you can manually enable H.264 for testing. Enter about:config in Firefox for Android’s address bar, then search for “stagefright”. Toggle the “stagefright.force-enabled” preference to true. H.264 should work on most Ice Cream Sandwich devices, but Gingerbread and Honeycomb devices will probably crash.

    If Firefox does not recognize your hardware decoder, it will use a safer (but slower) software decoder. Daring users can manually enable hardware decoding. Enter about:config as described above and search for “stagefright”. To force hardware video decoding, change the “media.stagefright.omxcodec.flags” preference to 16. The default value is 0, which will try the hardware decoder and fall back to the software decoder if there are problems (Bug 797225). The most likely problems you will encounter are videos with green lines or crashes.
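
    For quick reference, the two about:config preferences discussed above are:

    stagefright.force-enabled = true          (default: false; enables H.264 on devices not yet whitelisted)
    media.stagefright.omxcodec.flags = 16     (default: 0 – try hardware, fall back to software; 16 forces hardware decoding)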

    Giving feedback/reporting bugs

    If you find any video bugs, please file a bug report here so we can fix them! Please include your device model, Android OS version, the URL of the video, and any about:config preferences you have changed. Log files collected from aLogcat or adb logcat are also very helpful.

  4. the tristan washing machine

    This is another demo from Paul Rouget. It’s a very simple demonstration of what you can do when you combine video, SVG and some fun transformations.

    View the Demo in Firefox 3.5
    tristan

    This puts an HTML5-based <video> element into an SVG document like so:

    <svg:svg xmlns:svg="http://www.w3.org/2000/svg"
             xmlns:html="http://www.w3.org/1999/xhtml"
             width="194" height="194">
      <svg:foreignObject x="1" y="1" width="192" height="192">
        <html:video src="tristan3.ogv" ontimeupdate="rotateMePlease()"/>
      </svg:foreignObject>
    </svg:svg>
    Lots of code and many attributes removed for clarity. Please view the source for more details.

    If you look at the rest of the source code, you will see that the player itself is also defined and controlled from SVG, using some of the CSS and SVG capabilities that are new in Firefox 3.5. For example, if you look at the CSS you will see:

    #video {
        filter: url(#filter);
        clip-path: url(#circle);
    }

    The clip-path property lets you define a clip path for any element – html, svg or otherwise. This can be extremely powerful and can be any arbitrary path defined in SVG. Much like a cookie cutter this cuts out the center of the otherwise square video and uses that for display. Here’s what our clip path looks like:

    <svg:clipPath id="circle" clipPathUnits="objectBoundingBox">
       <svg:circle cx="0.5" cy="0.5" r="0.5"/>
    </svg:clipPath>

    The filter property lets you define an SVG filter to use in the same way as a clip path. In our case we use a feColorMatrix:

    <svg:filter id="filter">
      <svg:feColorMatrix values="0.3 0.3 0.3 0 0 0.3 0.3 0.3 0 0 0.3 0.3 0.3 0 0 0 0 0 1 0"/>
    </svg:filter>

    Although it is not used in this demo, note that you can also use SVG to define gradient masks for any element in Firefox 3.5.

    When the video runs it uses the transformation property to rotate the video while it’s playing:

    // "video" is the <video> element (the CSS above targets it as #video);
    // "i" holds the current rotation angle
    var video = document.getElementById("video");
    var i = 0;

    function rotateMePlease() {
        // Sure
        if (!video) return;
        video.style.MozTransform = 'rotate(' + i + 'deg)';
        i += 3;
    }

    You will notice that Paul built this into an XHTML document, carefully using namespaces and XML syntax. But most people would probably want to use SVG in plain old HTML. In Firefox 3.5 there is a way to do this using SVG External Document References. This means that instead of defining your SVG directly in the HTML file, you can define it in its own file and refer to it through CSS like this:

    .target { clip-path: url(resources.svg#c1); }

    What this does is load the SVG from the resources.svg file and then use the object with the ID "c1" as the clip path for the target. This is a very easy way to add SVG to your document without making the full switch to XHTML.
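
    For illustration, here is a minimal sketch of what such a resources.svg file could contain – the file name and the "c1" ID simply follow the CSS rule above, and the circle reuses the clip path shown earlier:

    <svg xmlns="http://www.w3.org/2000/svg">
      <clipPath id="c1" clipPathUnits="objectBoundingBox">
        <circle cx="0.5" cy="0.5" r="0.5"/>
      </clipPath>
    </svg>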

    I hope that this gives you a taste of what’s possible when mixing SVG, HTML, video and CSS all together. Enjoy!

  5. HTML5 video 'buffered' property available in Firefox 4

    This is a repost from Chris Pearce’s blog.

    Recently I landed support for the HTML5 video ‘buffered’ property in Firefox. This is cool because we can now accurately determine which time segments of a video we can play and seek into without needing to pause playback to download more data. Previously you could only get the byte position the download had reached, which often maps poorly to the playable time ranges, especially in a variable bit rate video, and which can’t tell you whether there are chunks we skipped downloading before the downloaded byte position. Once the video controls UI is updated, users will be able to see exactly which segments of their video are downloaded and playable, and can be seeked into without pausing playback to download more data.
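
    For instance, here is a minimal sketch of reading the property from script (assuming a page with a video element with the ID "myVideo"):

    var video = document.getElementById("myVideo");
    var ranges = video.buffered;  // a TimeRanges object
    for (var i = 0; i < ranges.length; i++) {
      // start(i) and end(i) are in seconds
      console.log("buffered: " + ranges.start(i) + "s to " + ranges.end(i) + "s");
    }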

    To see this in action, download a current Firefox nightly build, and point your browser at my video ‘buffered’ property demo. You’ll see something like the screenshot below, including an extra progress bar (implemented using canvas) showing the time ranges which are buffered.

    I’ve implemented the ‘buffered’ property for the Ogg and WAV backends. Support for the ‘buffered’ property for WebM is being worked on by Matthew Gregan, and is well underway. At the moment we return empty ranges for the ‘buffered’ property on video elements playing WebM and raw video.

    My checkin just missed the cutoff for Firefox 4 Beta 3, so the first beta release that the video ‘buffered’ property will appear in is Firefox 4 Beta 4.

  6. App basics for Firefox OS – a screencast series to get you started

    Over the next few days we’ll release a series of screencasts explaining how to start your first Open Web App and develop for Firefox OS.

    Firefox OS - Intro and hello

    Each of the screencasts is short enough to watch in a quick break, and the whole series should not take more than an hour of your time. The series features Jan Jongboom (@janjongboom), Sergi Mansilla (@sergimansilla) of Telenor Digital and Chris Heilmann (@codepo8) of Mozilla, and was shot in three days in Oslo, Norway at the offices of Telenor Digital in February 2014.

    Here are the three of us telling you about the series and what to expect:

    Firefox OS is an operating system that brings the web to mobile devices. Instead of being a new OS with new technologies and development environments, it builds on standardised web technologies that have been in use for years now. If you are a web developer and you want to build a mobile app, Firefox OS gives you the tools to do so, without having to change your workflow or learn a totally new development environment. In this series of short videos, developers from Mozilla and Telenor met in Oslo, Norway to explain in a few steps how you can get started building applications for Firefox OS. You’ll learn:

    • how to build your first application for Firefox OS
    • how to debug and test your application both on the desktop and the real device
    • how to get it listed in the marketplace
    • how to use the APIs and special interfaces Firefox OS offers a JavaScript developer to take advantage of the hardware available in smartphones.

    In addition to the screencasts, you can download the accompanying code samples from GitHub. If you want to try the code examples out for yourself, you will need to set up a very simple development environment. All you need is:

    • A current version of Firefox (which comes out of the box with the developer tools you need) – we recommend getting Firefox Aurora or Nightly if you really want to play with the state-of-the-art technology.
    • A text editor – in the screencasts we used Sublime Text, but any will do. If you want to be really web native, you can try Adobe Brackets.
    • A local server or a server to push your demo files to. A few of the demo apps need HTTP connections instead of local ones.

    sergi and chris recording

    Over the next few days we’ll cover the following topics:

    In addition to the videos, you can also go to the Wiki page of the series to get extra information and links on the subjects covered.

    Come back here to see the links appear day by day or follow us on Twitter at @mozhacks to get information when the next video is out.

    jan recording his video

    Once the series is out, there’ll be a Wiki resource to get them all in one place. Telenor are also working on getting these videos dubbed in different languages. For now, stay tuned.

    Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor for making all of this possible.

  7. video – more than just a tag

    This article is written by Paul Rouget, Mozilla contributor and purveyor of extraordinary Open Web demos.

    Starting with Firefox 3.5, you can embed a video in a web page like an image. This means video is now a part of the document, and finally, a first class citizen of the Open Web. Like all other elements, you can use it with CSS and JavaScript. Let’s see what this all means …

    The Basics

    First, you need a video to play. Firefox supports the Theora codec (see here for all the media formats supported by the audio and video elements).

    Add the video to your document:

    <video id="myVideo" src="myFile.ogv"/>

    You might need to add some “fallback” code if the browser doesn’t support the video tag. Just include some HTML (which could be a warning, or even some Flash) inside the video tag.

    <video id="myVideo" src="myFile.ogv">
    <strong>Your browser is not awesome enough!</strong>
    </video>

    Here’s some more information about the fallback mechanism.

    HTML Attributes

    You can find all the available attributes here.

    Some important attributes:

    • autoplay: The video will be played just after the page loads.
    • autobuffer: By default (without this attribute), the video file is not downloaded unless you click on the play button. Adding this attribute starts downloading the video just after the page loads.
    • controls: by default (without this attribute), the video doesn’t include any controls (play/pause button, volume, etc.). Use this attribute if you want the default controls.
    • height/width: The size of the video

    Example:

    <video id="myVideo" src="myFile.ogv"
       autobuffer="true" controls="true"/>

    You don’t have to give these boolean attributes a “true” value in HTML5, but it’s neater to do so in an XML document. If you’re not in an XML document, you can simply write:

    <video id="myVideo" src="myFile.ogv" autobuffer controls/>

    JavaScript API

    Like any other HTML element, you have access to the video element via the Document Object Model (DOM):

    var myVideo = document.getElementById("myVideo");

    Once you obtain a handle to the video element, you can use the JavaScript API for video.

    Here is a short list of some useful methods and properties (and see here for more of the DOM API for audio and video elements):

    • play() / pause(): Play and pause your video.
    • currentTime: The current playback time, in seconds. You can change this to seek.
    • duration: The duration of the video.
    • muted: Is the sound muted?
    • ended: Has the video ended?
    • paused: Is the video paused?
    • volume: To read the volume, and to change it.

    Example:

    <button onclick="myVideo.play()">Play</button>
    <button onclick="myVideo.volume = 0.5">Set Volume</button>
    <button onclick="alert(myVideo.volume)">Volume?</button>

    Events

    You know how to control a video (play/pause, seek, change the volume, etc.). You have almost everything you need to create your own controls. But you need some feedback from the video, and for that, let’s see the different events you can listen to:

    • canplay: The video is ready to play
    • canplaythrough: The video is ready to play without interruption (if the download rate doesn’t change)
    • load: The video is ready to play without interruption (the video has been downloaded entirely)
    • ended: The video just ended
    • play: The video just started playing
    • pause: The video has been paused
    • seeking: The video is seeking (it can take some seconds)
    • seeked: The seeking process just finished
    • timeupdate: While the video is playing, the currentTime is updated. Every time the currentTime is updated, timeupdate is fired.

    Here’s a full list of events.

    For example, you can follow the percentage of the video that has just been played:

    function init()
    {
      var video = document.getElementById("myVideo");
      var textbox = document.getElementById("sometext");
      video.addEventListener("timeupdate", function() {
        textbox.value = Math.round(100 * (video.currentTime / video.duration)) + "%";
      });
    }
    <video id="myVideo" src="myFile.ogv"
                autoplay="true" onplay="init()"/>
    <input id="sometext"/>

    Showing all this in action, here’s a nice open video player using the Video API.

    Now that you’re familiar with some of the broad concepts behind the Video API, let’s really delve into the video as a part of the Open Web, introducing video to CSS, SVG, and Canvas.

    CSS and SVG

    A video element is an HTML element. That means you can use CSS to style it.

    A simple example: using the CSS Image Border rule (a new CSS 3 feature introduced in Firefox 3.5). You can view how it works on the Mozilla Developer Wiki.

    And obviously, you can use it with the video tag:

     
    <video id="myVideo" src="myFile.ogv"
    style="-moz-border-image:
               url(tv-border.jpg) 25 31 37 31 stretch stretch;
               border-width: 20px;"/>

    One of my demos uses this very trick.

    Since Firefox 3.5 provides some snazzy new CSS features, you can do some really fantastic things. Take a look at the infamous washing machine demo, in which I subject an esteemed colleague to some rotation.

    It uses some CSS rules and some SVG – the same clip-path and filter techniques shown in “the tristan washing machine” article above.

    Because the video element is like any other HTML element, you can add some HTML content over the video itself, like I do in this demo. As you can see, there is a <div> element on top of the video (position: absolute;).
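
    A minimal sketch of that overlay technique (illustrative markup, not the demo’s exact code):

    <div style="position: relative;">
      <video src="myFile.ogv" autoplay="true"></video>
      <div style="position: absolute; top: 20px; left: 20px;">
        Any HTML can sit on top of the video
      </div>
    </div>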

    Time for a Break

    Well, we’ve just seen how far we can go with the video element, both how to control it and how to style it. That’s great, and it’s powerful. I strongly encourage you to read about the new web features available in Firefox 3.5, and to think about what you can do with such features and the video element.

    You can do so much with the power of the Open Web. You can compute the pixel values of the video. You can, for example, try to find some shapes in the video, follow the shapes, and draw something attached to these shapes. That’s what I do here! Let’s see how it actually works.

    Canvas & Video

    Another HTML5 element is canvas. With this element, you can draw bitmap data (see the canvas reference, and I strongly suggest this canvas overview). But something you might not know is that you can copy the content of an <img/> element, a <canvas/> element or a <video/> element into a canvas.

    That’s a really important point for the video element. It gives you a way to play with the values of the pixels of the video frames.

    You can take a “screenshot” of the current frame of the video in a canvas:

    function screenshot() {
     var video = document.getElementById("myVideo");
     var canvas = document.getElementById("myCanvas");
     var ctx = canvas.getContext("2d");
     
     ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }
    <video id="myVideo" src="myFile.ogv" autoplay="true" with="600" height="400"/>
    <canvas id="myCanvas" with="600" height="400"/>
    <button onclick="screenshot()">Copy current frame to canvas</button>

    You can first apply a transformation to your canvas (see the documentation). You can also copy a thumbnail of the video.
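
    A thumbnail is just a draw into a smaller canvas; drawImage scales the frame to the destination size. A quick sketch (the “myThumb” canvas is hypothetical):

    function thumbnail() {
      var video = document.getElementById("myVideo");
      var canvas = document.getElementById("myThumb"); // e.g. a 160×90 canvas
      var ctx = canvas.getContext("2d");
      // drawImage scales the current frame to the canvas size
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }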

    If you draw every frame in a canvas, your canvas will look like a video element. And you can draw what you want in this canvas, after drawing the frame. That’s what I do in this demo.

    Once you have a video frame in your canvas, you can compute the values of the pixels.
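
    Reading the pixels back is done with getImageData. Here is a minimal sketch that computes the average brightness of the frame copied into the canvas (the function name is mine):

    function averageBrightness(canvas) {
      var ctx = canvas.getContext("2d");
      var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      var total = 0;
      // data is a flat RGBA array: four values per pixel
      for (var i = 0; i < data.length; i += 4) {
        total += (data[i] + data[i + 1] + data[i + 2]) / 3;
      }
      return total / (data.length / 4);
    }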

    Some things you should know if you want to compute the pixel values of a frame:

    • you can’t use this mechanism with a video from another domain.
    • you can’t use this mechanism with a video from a file:/// URL (which would be useful during the development of your web application). You can change this behavior for testing: in about:config, set “security.fileuri.strict_origin_policy” to “false”. But be very careful: editing about:config is an expert feature!
    • There are two ways to display the result of your application on the top of the video:
      • use your canvas as a video (if you draw the frame every time), and then draw directly into the canvas
      • use a transparent canvas on the top of the video
    • the canvas element can be “display: none”
    • the video element can be “display: none”

    About JavaScript

    For the image processing, you will need to do a lot of computation. Here are some tricks:

    • copy your frame in a small canvas. If the canvas is three times smaller than the video, it means nine times fewer pixels to compute.
    • avoid recursion. With recursion, the script engine doesn’t use the JIT optimization.
    • if you want to compute the distance between colors, use the L*a*b* color space.
    • if you want to find the center of an object, compute its centroid. See the “computeFrame” function that I use in this JavaScript snippet for my demo.
    • if the algorithm is really heavy, you can use a Worker thread, but take into account that you will need to send the content of the canvas to the thread. It’s a big array, and objects are automatically JSONified before being sent. It can take a while.

    Conclusion

    As you can see, you can do powerful things with the video element, the canvas element, CSS3, SVG and the new JavaScript engine. You have everything in your hands to create a completely new way to use Video on the web. It’s up to you now — upgrade the web!

  8. Building Interactive HTML5 Videos

    The HTML5 <video> element makes embedding videos into your site as easy as embedding images. And since all major browsers have supported <video> since 2011, it’s also the most reliable way to get your moving pictures seen by people.

    A more recent addition to the HTML5 family is the <track> element. It’s a sub-element of <video>, intended to make the video timeline more accessible. Its main use case is adding closed captions. These captions are loaded from a separate text file (a WebVTT file) and printed over the bottom of the video display. Ian Devlin has written an excellent article on the subject.

    Beyond captions though, the <track> element can be used for any kind of interaction with the video timeline. This article explores 3 examples: chapter markers, preview thumbnails, and a timeline search. By the end, you will have sufficient understanding of the <track> element and its scripting API to build your own interactive video experiences.

    Chapter Markers

    Let’s start with an example made popular by DVDs: chapter markers. These allow viewers to quickly jump to a specific section. It’s especially useful for longer movies like Sintel:

    The chapter markers in this example reside in an external VTT file and are loaded on the page through a <track> element with a kind of “chapters”. The track is set to load by default:

    <video width="480" height="204" poster="assets/sintel.jpg" controls>
      <source src="assets/sintel.mp4" type="video/mp4">
      <track src="assets/chapters.vtt" kind="chapters" default>
    </video>
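
    For reference, a chapters file is plain text in WebVTT format: a header followed by timed cues whose text is the chapter title. A minimal sketch (timings and titles invented, not Sintel’s actual chapters):

    WEBVTT

    00:00:00.000 --> 00:02:30.000
    Chapter 1: The Beginning

    00:02:30.000 --> 00:07:40.000
    Chapter 2: The Chase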

    Next, we use JavaScript to load the cues of the text track, format them, and print them in a controlbar below the video. Note we have to wait until the external VTT file is loaded:

    track.addEventListener('load',function() {
        var c = video.textTracks[0].cues;
        for (var i=0; i<c.length; i++) {
          var s = document.createElement("span");
          s.innerHTML = c[i].text;
          s.setAttribute('data-start',c[i].startTime);
          s.addEventListener("click",seek);
          controlbar.appendChild(s);
        }
    });

    In the above code block, we’re adding two properties to the list entries to hook up interactivity. First, we set a data attribute to store the start position of the chapter, and second we add a click handler for an external seek function. This function will jump the video to the start position. If the video is not (yet) playing, we’ll make that so:

    function seek() {
      video.currentTime = this.getAttribute('data-start');
      if(video.paused){ video.play(); }
    };

    That’s it! You now have a visual chapter menu for your video, powered by a VTT track. Note the actual live Chapter Markers example has a little bit more logic than described, e.g. to toggle playback of the video on click, to update the controlbar with the video position, and to add some CSS styling.

    Preview Thumbnails

    This second example shows a cool feature made popular by Hulu and Netflix: preview thumbnails. When mousing over the controlbar (or dragging on mobile), a small preview of the position you’re about to seek to is displayed:

    This example is also powered by an external VTT file, loaded in a metadata track. Instead of texts, the cues in this VTT file contain links to a separate JPG image. Each cue could link to a separate image, but in this case we opted to use a single JPG sprite – to keep latency low and management easy. The cues link to the correct section of the sprite by using Media Fragment URIs. Example:

    http://example.com/assets/thumbs.jpg#xywh=0,0,160,90
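
    Inside the VTT file, each cue’s text is such a URL. A minimal sketch of a couple of cues (timings and paths invented):

    WEBVTT

    00:00:00.000 --> 00:00:10.000
    assets/thumbs.jpg#xywh=0,0,160,90

    00:00:10.000 --> 00:00:20.000
    assets/thumbs.jpg#xywh=160,0,160,90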

    Next, all important logic to get the right thumbnail and display it lives in a mousemove listener for the controlbar:

    controlbar.addEventListener('mousemove',function(e) {
      // first we convert from mouse to time position ..
      var p = (e.pageX - controlbar.offsetLeft) * video.duration / 480;
     
      // ..then we find the matching cue..
      var c = video.textTracks[0].cues;
      for (var i=0; i<c.length; i++) {
          if(c[i].startTime <= p && c[i].endTime > p) {
              break;
          };
      }
     
      // ..next we unravel the JPG url and fragment query..
      var url = c[i].text.split('#')[0];
      var xywh = c[i].text.substr(c[i].text.indexOf("=")+1).split(',');
     
      // ..and last we style the thumbnail overlay
      thumbnail.style.backgroundImage = 'url('+url+')';
      thumbnail.style.backgroundPosition = '-'+xywh[0]+'px -'+xywh[1]+'px';
      thumbnail.style.left = e.pageX - xywh[2]/2+'px';
      thumbnail.style.top = controlbar.offsetTop - xywh[3]+8+'px';
      thumbnail.style.width = xywh[2]+'px';
      thumbnail.style.height = xywh[3]+'px';
    });

    All done! Again, the actual live Preview Thumbnails example contains some additional code. It includes the same logic for toggling playback and seeking, as well as logic to show/hide the thumbnail when mousing in/out of the controlbar.

    Timeline Search

    Our last example offers yet another way to unlock your content, this time through in-video search:

    This example re-uses an existing captions VTT file, which is loaded into a captions track. Below the video and controlbar, we print a basic search form:

    <form>
        <input type="search" />
        <button type="submit">Search</button>
    </form>

    Like with the thumbnails example, all key logic resides in a single function. This time, it’s the event handler for submitting the form:

    form.addEventListener('submit',function(e) {
      // First we’ll prevent page reload and grab the cues/query..
      e.preventDefault();
      var c = video.textTracks[0].cues;
      var q = document.querySelector("input").value.toLowerCase();
     
      // ..then we find all matching cues..
      var a = [];
      for(var j=0; j<c.length; j++) {
        if(c[j].text.toLowerCase().indexOf(q) > -1) {
          a.push(c[j]);
        }
      }
     
      // ..and last we highlight matching cues on the controlbar.
      for (var i=0; i<a.length; i++) {
        var s = document.createElement("span");
        s.style.left = (a[i].startTime/video.duration*480-2)+"px";
        controlbar.appendChild(s);
      }
    });

    Third time’s a charm! Like with the other ones, the actual live Timeline Search example contains additional code for toggling playback and seeking, as well as a snippet to update the controlbar help text.

    Wrapping Up

    The above examples should provide you with enough knowledge to build your own interactive videos. For some more inspiration, see our experiments around clickable hot spots, interactive transcripts, or timeline interaction.

    Overall, the HTML5 <track> element provides an easy-to-use, cross-platform way to add interactivity to your videos. And while it definitely takes time to author VTT files and build similar experiences, you will see higher accessibility of, and engagement with, your videos. Good luck!

  9. WebRTC efforts underway at Mozilla!

    Last week, a small team from Mozilla attended IETF 83 in Paris, and we showed an early demo of a simple video call between two BrowserID-authenticated parties in a special build of Firefox with WebRTC support. It is still very early days for WebRTC integration in Firefox, but we’re really excited to show you something that works!

    At Mozilla Labs, we’ve been experimenting with integrating social features in the browser, and it seemed like a cool idea to combine this with WebRTC to establish a video call between two users who are signed in using BrowserID (now called Persona). The SocialAPI add-on, once installed, provides a sidebar where web content from the social service provider is rendered. In our demo social service, we show a “buddy list” of people who are currently signed in using Persona.

    The video chat page that is served when the user initiates a video chat uses a custom API intended to simulate the getUserMedia and PeerConnection APIs currently being standardized at the W3C. A <canvas> is used to render both the remote and local videos, though it is also possible to render them in a <video>. We’re working very quickly to implement the standard APIs, and you can follow our progress on the tracker bug.
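
    For context, here is a minimal sketch of the getUserMedia pattern that was being standardized at the time (early Firefox builds exposed it with a moz prefix; this is illustrative, not the demo’s custom API):

    // Ask for a camera/microphone stream and attach it to a <video> element
    navigator.mozGetUserMedia({ video: true, audio: true },
      function (stream) {
        var video = document.querySelector("video");
        video.mozSrcObject = stream; // early, prefixed way to attach a stream
        video.play();
      },
      function (err) {
        console.log("getUserMedia error: " + err);
      });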

    A lot of folks burned the midnight oil to get this demo ready before the IETF event, and special thanks are due to Eric Rescorla, Michael Hanson, Suhas Nandakumar, Enda Mannion, Ethan Hugg, the folks behind Spacegoo, and Randell Jesup, in addition to the whole media team here at Mozilla.

    Current development is being done on a branch of mozilla-central called alder. It is going to be an exciting few months ahead as we work towards bringing WebRTC to Firefox. There is a lot of work to do, and if you are interested in contributing, please reach out! Maire Reavy, our product person and project lead for WebRTC, would be happy to help you find ways to contribute. Many of us are also usually available in IRC at #media, and we have a mailing list.

    Transcript of screencast:

    Hi, I’m Anant from Mozilla Labs and I’m here at IETF where we are demonstrating a simple video call between two BrowserID-authenticated parties, using the new WebRTC APIs that we are working on.

    This is a special build of Firefox with WebRTC support, and it also has the experimental SocialAPI add-on from Mozilla Labs installed. On the right-hand side you can see web content served by demosocialservice.org, to which I will sign in with BrowserID. Once I’m signed in, I can see all my online friends in the sidebar. I see my friend Enda is currently online, so I’m going to click the video chat button to initiate a call.

    Here, I see a very early prototype of a video call window served up by our demo social service. Now, I can click the Start Call button to let Enda know that I want to speak with him. Once he accepts the call, a video stream is established between the two parties as you can see. So, that was a video call built entirely using JavaScript and HTML!

    You can check out the source code for this demo, as well as learn how to contribute to the ongoing WebRTC efforts at Mozilla in this blog post. Thanks for watching!