Audio Articles

  1. Defending Opus

    On January 18th, France Telecom filed an IPR disclosure against Opus citing a single patent under non-royalty-free terms. This raises a key question – what impact does this have on Opus? A close evaluation indicates that it has no impact on the Opus specification in any way.

    Summary:

    A careful reading of the FT patent reveals that:

    1. The FT patent does not cover the Opus reference implementation because critical limitations of the claim are absent;
    2. The patent is directed to encoders; therefore, it cannot affect the Opus specification, which only includes conformance tests for the decoder; and
    3. With a simple change, we can make non-infringement even more obvious.

    Let’s expand on those points a bit. If you don’t want to hear about patent claims, you should stop reading this article now.

    Details:

    IETF IPR disclosures are a safe course of action for patent holders: they prevent unclean hands arguments or implied license grants. However, because the IETF requires specific patent numbers in these disclosures, we can analyze the claims. The patent in question is EP0743634B1, and the corresponding U.S. and other related foreign patents: “Method of adapting the noise masking level in an analysis-by-synthesis speech coder employing a short-term perceptual weighting filter”. It has a single independent claim, Claim 1. All of the other claims are “dependent claims” built on top of Claim 1. If Opus does not infringe Claim 1, it cannot infringe any other claim.

    The FT patent doesn’t cover Opus

    To establish infringement, all of the elements of a claim must be present in an implementation. Key elements of Claim 1 are not present in the Opus reference implementation, including, among others:

    • The way the bandwidth expansion coefficients are used. In Claim 1, two parameters γ1 and γ2 are used to shape the quantization noise added by the lossy compression by “minimizing the energy of an error signal resulting from the filtering of the difference between the speech signal and the synthetic signal.” Opus doesn’t do this. Instead, the Opus encoder uses a single parameter BWExp2 to shape the noise, and uses a different parameter BWExp1 to shape the input signal, and also applies an additional gain to the filtered input to match the volume of the original.
    • The optimization criterion. Opus doesn’t compute the “difference between the speech signal and the synthetic signal”. We want to code a signal that differs from the original speech, so we don’t compare what we code to the original speech. This is actually one of the main innovations in Opus: it’s the reason the SILK layer doesn’t need a post-filter like many other codecs do.

    Thus Opus doesn’t perform the steps of the claim and cannot infringe the FT patent by definition. Of course this is not a legal opinion, but it doesn’t take a lawyer to figure this out. While we don’t know why FT disclosed this patent, we welcome the opportunity to evaluate such disclosures and remove any real or perceived encumbrances. This is one of the benefits of the IETF process.

    The FT patent cannot threaten the specification

    The FT patent covers perceptual noise weighting, which is specific to an encoder. The claim is about the “difference between the speech signal and the synthetic signal”, when a decoder — by definition — doesn’t have access to the input speech signal.

    The Opus specification only demands specific behavior from decoders, leaving the encoder largely unspecified. Even if France Telecom were to continue to assert its patent against Opus, there’s no limit to what we could change in the encoder to avoid whatever theory they have. No deployed systems break. There’s no threat to the Opus standard. We can safely say that the FT patent doesn’t encumber Opus for this reason alone.

    We can always make things even safer if needed

    While we don’t believe that the Opus encoder ever infringed on this patent, we quickly realized there is a simple way to make non-infringement obvious even without analyzing complex DSP filters.

    This can be done with a simple change (patch file) to the code in silk/float/noise_shape_analysis_FLP.c (an equivalent change can be made to the fixed-point version).

    Original code:

    strength = FIND_PITCH_WHITE_NOISE_FRACTION * psEncCtrl->predGain;
    BWExp1 = BWExp2 = BANDWIDTH_EXPANSION / ( 1.0f + strength * strength );
    delta  = LOW_RATE_BANDWIDTH_EXPANSION_DELTA
           * ( 1.0f - 0.75f * psEncCtrl->coding_quality );
    BWExp1 -= delta;
    BWExp2 += delta;

    New code:

    BWExp1 = BWExp2 = BANDWIDTH_EXPANSION;
    delta  = LOW_RATE_BANDWIDTH_EXPANSION_DELTA
           * ( 1.0f - 0.75f * psEncCtrl->coding_quality );
    BWExp1 -= delta;
    BWExp2 += delta;

    Yup, that’s all of two lines changed. This makes the filter parameters depend only on the encoder’s bit-rate, which is clearly not “spectral parameters obtained in the linear prediction analysis step,” as required by Claim 1. Below is the quality comparison between the original encoder and the modified encoder (using PESQ). As you can see, the difference is so small that it’s not worth worrying about.

    [Figure: PESQ quality comparison of the original and modified encoders]

  2. It’s Opus, it rocks and now it’s an audio codec standard!

    In a great victory for open standards, the Internet Engineering Task Force (IETF) has just standardized Opus as RFC 6716.

    Opus is the first state-of-the-art, free audio codec to be standardized. We think this will help us achieve wider adoption than prior royalty-free codecs like Speex and Vorbis. This spells the beginning of the end for proprietary formats, and we are now working on doing the same thing for video.

    There was both skepticism and outright opposition to this work when it was first proposed in the IETF over 3 years ago. However, the results have shown that we can create a better codec through collaboration, rather than competition between patented technologies. Open standards benefit both open source organizations and proprietary companies, and we have been successful working together to create one. Opus is the result of a collaboration between many organizations, including the IETF, Mozilla, Microsoft (through Skype), Xiph.Org, Octasic, Broadcom, and Google.

    A highly flexible codec

    Unlike previous audio codecs, which have typically focused on a narrow set of applications (either voice or music, in a narrow range of bitrates, for either real-time or storage applications), Opus is highly flexible. It can adaptively switch among:

    • Bitrates from 6 kb/s to 512 kb/s
    • Voice and music
    • Mono and stereo
    • Narrowband (8 kHz) to Fullband (48 kHz)
    • Frame sizes from 2.5 ms to 60 ms

    Most importantly, it can adapt seamlessly within these operating points. Doing all of this with proprietary codecs would require at least six different codecs. Opus replaces all of them, with better quality.

    [Figure: illustration of the quality of different codecs]

    The specification is available in RFC 6716, which includes the reference implementation. Up-to-date software releases are also available.

    Some audio standards define a normative encoder, which cannot be improved after it is standardized. Others allow for flexibility in the encoder, but release an intentionally hobbled reference implementation to force you to license their proprietary encoders. For Opus, we chose to allow flexibility for future encoders, but we also made the best one we knew how and released that as the reference implementation, so everyone could use it. We will continue to improve it, and keep releasing those improvements as open source.

    Use cases

    Opus is primarily designed for use in interactive applications on the Internet, including voice over IP (VoIP), teleconferencing, in-game chatting, and even live, distributed music performances. The IETF recently decided with “strong consensus” to adopt Opus as a mandatory-to-implement (MTI) codec for WebRTC, an upcoming standard for real-time communication on the web. Despite the focus on low latency, Opus also excels at streaming and storage applications, beating existing high-delay codecs like Vorbis and HE-AAC. It’s great for internet radio, adaptive streaming, game sound effects, and much more.

    Although Opus is just out, it is already supported in many applications, such as Firefox, GStreamer, FFmpeg, foobar2000, K-Lite Codec Pack, and LAV Filters, with upcoming support in VLC, Rockbox, and Mumble.

    For more information, visit the Opus website.

  3. Opus Support for WebRTC

    As we announced during the beta cycle, Firefox now supports the new Opus audio format. We expect Opus to be published as RFC 6716 any day now, and we’re starting to see Opus support pop up in more and more places. Momentum is really building.

    What does this mean for the web?

    Keeping the Internet an open platform is part of Mozilla’s mission. When the technology the Web needs doesn’t exist, we will invest the resources to create it, and release it royalty-free, just as we ask of others. Opus is one of these technologies.

    Mozilla employs two of the key authors and developers, and has invested significant legal resources into avoiding known patent thickets. Opus uses processes and methods that have long been known in the field and are considered patent-free. As a result, Opus is available on a royalty-free basis and can be deployed by anyone, including other open-source projects. Everyone knows this is an incredibly challenging legal environment to operate in, but we think we’ve succeeded.

    Why is Opus important?

    The Opus support in the <audio> tag we’re shipping today is great. We think it’s as good or better than all the other codecs people use there, particularly in the voice modes, which people have been asking for for a long time. But our goals extend far beyond building a great codec for the <audio> tag.

    Mozilla is heavily involved in the new WebRTC standards to bring real-time communication to the Web. This is the real reason we made Opus, and why its low-delay features are so important. At the recent IETF meeting in Vancouver we achieved “strong consensus” to make Opus Mandatory To Implement (MTI) in WebRTC. Interoperability is even more important here than in the <audio> tag. If two browsers ship without any codecs in common, a website still has the option of encoding their content twice to be compatible with both. But that option isn’t available when the browsers are trying to talk to each other directly. So our success here is a big step in bringing interoperable real-time communication to the Web, using native Web technologies, without plug-ins.

    [Figure: illustration of the quality of different codecs]

    Opus’s flexibility to scale to both very low bitrates and very high quality, and to do all of it with very low delay, was instrumental in achieving this consensus. It would take at least six other codecs to satisfy all the use cases Opus does. So try out Opus today for your podcasts, music broadcasts, games, and more. And look out for Opus in WebRTC, coming soon.

  4. Firefox Beta 15 supports the new Opus audio format

    Firefox 15 (now in the Beta channel) supports the Opus audio format, via the Opus reference implementation.

    What is it?

    Opus is a completely free audio format that was recently approved for publication as a standards-track RFC by the IETF. Opus files can play in Firefox Beta today.

    Opus offers these benefits:

    • Better compression than MP3, Ogg Vorbis, or AAC formats
    • Good for both music and speech
    • Dynamically adjustable bitrate, audio bandwidth, and coding delay
    • Support for both interactive and pre-recorded applications

    Why should I care?

    First, Opus is free software, free for everyone, for any purpose. It’s also an IETF standard. Both the encoder and decoder are free, including the fixed-point implementation (for mobile devices). These aren’t toy demos. They’re the best we could make, ready for serious use.

    We think Opus is an incredible new format for web audio. We’re working hard to convince other browsers to adopt it, to break the logjam over a common <audio> format.

    The codec is a collaboration between members of the IETF Internet Wideband Audio Codec working group, including Mozilla, Microsoft, Xiph.Org, Broadcom, Octasic, and others.

    We designed it for high-quality, interactive audio (VoIP, teleconference) and will use it in the upcoming WebRTC standard. Opus is also best-in-class for live streaming and static file playback. In fact, it is the first audio codec to be well-suited for both interactive and non-interactive applications.

    Opus is as good or better than basically all existing lossy audio codecs, when competing against them in their sweet spots, including:

    General audio codecs (high latency, high quality)
    • MP3
    • AAC (all flavors)
    • Vorbis
    Speech codecs (low latency, low quality)
    • G.729
    • AMR-NB
    • AMR-WB (G.722.2)
    • Speex
    • iSAC
    • iLBC
    • G.722.1 (all variants)
    • G.719

    And none of those codecs have the versatility to support all the use cases that Opus does.

    Listening tests show that:

    [Figure: listening test results comparing Opus with other codecs]

    That’s a lot of bandwidth saved. It’s also much more flexible.

    Opus can stream:

    • narrowband speech at bitrates as low as 6 kbps
    • fullband music at rates of 256 kbps per channel

    At the higher of those rates, it is perceptually lossless. It also scales between these two extremes dynamically, depending on the network bandwidth available.

    Opus compresses speech especially well. Those same test results (slide 19) show that for fullband mono speech, Opus is almost transparent at 32 kbps. For audio books and podcasts, it’s a real win.

    Opus is also great for short files (like game sound effects) and startup latency, because unlike Vorbis, it doesn’t require several kilobytes of codebooks at the start of each file. This makes streaming easier, too, since the server doesn’t have to keep extra data around to send to clients who join mid-stream. Instead, it can send them a tiny, generic header constructed on the fly.

    How do I use it in a web page?

    Opus works with the <audio> element just like any other audio format.

    For example:

     <audio src="ehren-paper_lights-64.opus" controls></audio>

    This code in a web page displays an embedded player like this:

    [Embedded player: “Paper Lights” by Ehren Starks, Creative Commons licensed]

    (Requires Firefox 15 or later)
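
    If you want to degrade gracefully in browsers without Opus support, you can feature-detect it first. Here is a minimal sketch (the MIME string assumes Opus in an Ogg container, which is how Firefox ships it; the MP3 fallback file name is made up for illustration):

    // canPlayType() returns "probably", "maybe", or "" depending on
    // how confident the browser is that it can play the given type.
    var probe = document.createElement('audio');
    var canOpus = probe.canPlayType &&
                  probe.canPlayType('audio/ogg; codecs=opus') !== '';
    // Pick the .opus file when supported, otherwise fall back to
    // another format (the MP3 file name here is hypothetical).
    var player = new Audio(canOpus ? 'ehren-paper_lights-64.opus'
                                   : 'ehren-paper_lights-64.mp3');
    player.controls = true;
    document.body.appendChild(player);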

    Encoding files

    For now, the best way to create Opus files is to use the opusenc tool. You can get source code, along with Mac and Windows binaries, from:

    http://www.opus-codec.org/downloads/

    While Firefox 15 is the first browser with native Opus support, playback is coming to gstreamer, libavcodec, foobar2000, and other media players.

    Streaming

    Live streaming applications benefit greatly from Opus’s flexibility. You don’t have to decide up front whether you want low bandwidth or high quality, to optimize for voice or music, etc. Streaming servers can adapt the encoding as conditions change—without breaking the stream to the player.

    Pre-encoded files can stream from a normal web server. The popular Icecast streaming media server can relay a single, live Opus stream, generated on the fly, to thousands of connected listeners. Opus is supported by the current development version of Icecast.

    More Information

    To learn more visit opus-codec.org, or join us in #opus on irc.freenode.net.

  5. Interview: Jay Salvat, Audio Dev Derby winner

    Jay Salvat won the Audio Dev Derby with his Buzz demo, a wonderful children’s game powered by the open web. Using a JavaScript library that he wrote himself, Jay demonstrated that web audio can be not only useful, but also practical and even engaging.

    Recently, I had the opportunity to learn more about Jay: his work, his history, and his thoughts on the future of web development. In our chat, Jay shared insight and advice that should be useful to all web developers, newcomers and veterans alike.

    How did you become interested in web development?

    I am totally self-taught. I come from sales and marketing schools. I quickly realized that I was not cut out for that life. I tried some things, first working for free as a designer and then as a layout artist for print press and magazines. At the time, the internet barely existed.

    With the 1997/98 internet big bang, I naturally moved from print design to web design, working at one of the first local web agencies. The agency was sold to a big international company, and I then worked on ergonomics and interface designs for key accounts and managed a team of developers on these interfaces.

    Watching them work gave me a taste for development, so I started developing some personal projects. My skills as a marketing guy, designer, and developer allowed me to get some interesting results by myself.

    Tell us about developing your Buzz demo. Was anything especially exciting, challenging, or rewarding?

    The idea behind the Buzz library was to allow developers to creatively manage sounds on their websites. My fear was to see Buzz used to add sounds to button clicks or some unbearable background music loops – everything I hate as a user.

    I wanted to be clear and create a demo to show my vision of how sounds should be used on the web in 2012. This educational HTML5 game is inspired by the games my 5-year-old daughter plays on the iPad.

    What makes the web an exciting platform for you?

    What is interesting is being able to quickly test ideas, share them with the world, and see them used, improved, distributed, and discussed by others. It’s invaluable to get hundreds of comments from around the world. It has taught me a lot.

    What up-and-coming web technologies are you most excited about?

    HTML5/CSS3/JavaScript are really exciting and now make everything possible in a browser. I’m also really interested in node.js, which allows full-JavaScript client/server applications.

    If you could change one thing about the web, what would it be?

    Clearly, cross-browser compatibility (I’m looking at you, Internet Explorer). It is very frustrating to work for a few weeks on an idea, finally get the desired result, and then move to the testing phase on different browsers only to see that everything is skewed or unusable. This is what happened to me with markitup! 2.0, which I have never found the energy and time to fix.

    I dream of not having to worry about vendor prefixes, hacks, and ridiculous compatibility barriers.

    What advice would you give to aspiring web developers?

    Be curious, and be a sharer. Whenever possible, do not hesitate to publish your work as open source projects. It is a great challenge to make your code public and have it judged by your peers. It’s exciting and rewarding.

  6. getUserMedia is ready to roll!

    We blogged about some of our WebRTC efforts back in April. Today we have an exciting update for you on that front: getUserMedia has landed on mozilla-central! This means you will be able to use the API on the latest Nightly versions of Firefox, and it will eventually make its way to a release build.

    getUserMedia is a DOM API that allows web pages to obtain video and audio input, for instance, from a webcam or microphone. We hope this will open the possibility of building a whole new class of web pages and applications. This DOM API is one component of the WebRTC project, which also includes APIs for peer-to-peer communication channels that will enable exchange of video streams, audio streams and arbitrary data.

    We’re still working on the PeerConnection API, but getUserMedia is a great first step in the progression towards full WebRTC support in Firefox! We’ve certainly come a long way since the first image from a webcam appeared on a web page via a DOM API. (Not to mention audio recording support in Jetpack before that.)

    We’ve implemented a prefixed version of the “Media Capture and Streams” standard being developed at the W3C. Not all portions of the specification have been implemented yet; most notably, we do not support the Constraints API (which allows the caller to request certain types of audio and video based on various parameters).

    We have also implemented a Mozilla specific extension to the API: the first argument to mozGetUserMedia is a dictionary that will also accept the property {picture: true} in addition to {video: true} or {audio: true}. The picture API is an experiment to see if there is interest in a dedicated mechanism to obtain a single picture from the user’s camera, without having to set up a video stream. This could be useful in a profile picture upload page, or a photo sharing application, for example.

    Without further ado, let’s start with a simple example! Make sure to create a pref named “media.navigator.enabled” and set it to true via about:config first. We’ve put the pref in place because we haven’t implemented a permissions model or any UI for prompting the user to authorize access to the camera or microphone. This release of the API is aimed at developers, and we’ll enable the pref by default after we have a permission model and UI that we’re happy with.
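
    A minimal sketch of a call looks something like this (the element id and messages are made up; the callback-style signature and the mozSrcObject property reflect the prefixed API as it works in Nightly today):

    // Ask for a video stream; {audio: true} or the experimental
    // {picture: true} can be requested the same way.
    navigator.mozGetUserMedia(
      { video: true },
      function (stream) {
        // Success: attach the stream to a <video> element and play it.
        // Assumes <video id="preview"></video> exists in the page.
        var video = document.getElementById('preview');
        video.mozSrcObject = stream;
        video.play();
      },
      function (err) {
        // Failure: the pref is off, no camera is available, etc.
        console.log('mozGetUserMedia failed: ' + err);
      }
    );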

    There’s also a demo page where you can test the audio, video and picture capabilities of the API. Give it a whirl, and let us know what you think! We’re especially interested in feedback from the web developer community about the API and whether it will meet your use cases. You can leave comments on this post, or on the dev-media mailing list or newsgroup.

    We encourage you to get involved with the project – there’s a lot of information about our ongoing efforts on the project wiki page. Posting on the mailing list with your questions, comments and suggestions is a great way to get started. We also hang out on the #media IRC channel; feel free to drop in for an informal chat.

    Happy hacking!

  7. HTML5 audio and audio sprites – this should be simple

    As we’re having an HTML5 Audio developer derby this month, I thought it would be fun to play with audio again. Sadly enough, I found it pretty frustrating.

    One thing I have proposed in a lot of talks is taking the idea of CSS sprites and applying it to HTML5 audio. You get the same benefits – loading one file in one HTTP request instead of many, avoiding failures when individual files don’t load, and so on.

    To test this out I wrote the following small demo using the awesome Music Non Stop by Kraftwerk.

    Clicking the different buttons should play just that part of the music file and nothing more. This works fine in Firefox, Chrome and Opera on my computer here. Safari, however, fails to preload the audio, and the setting of the current time is off. The code is simple enough that this should work:

    <div id="buttons"></div>
    <audio preload="auto" controls>
      <source src="boing-boomchack-peng.mp3" type="audio/mpeg">
      <source src="boing-boomchack-peng.ogg" type="audio/ogg">
    </audio>
    // get the audio element and the buttons container
    // define a sprite object with the names and the start and end times 
    // of the different sounds.
    var a = document.querySelector('audio'),
        buttoncontainer = document.querySelector('#buttons'),
        audiosprite = {
          'all': [ 0, 5 ],
          'boing': [ 0, 1.3 ],
          'boomtchack': [ 2, 2.5 ],
          'peng': [ 4, 5 ]
        },
        end = 0;
     
    // when the audio data is loaded, create the buttons 
    // this way non-HTML5 browsers don't get any buttons 
    a.addEventListener('loadeddata', function(ev) {
      for (var i in audiosprite) {
        buttoncontainer.innerHTML += '<button onclick="play(\'' +
                                      i + '\')">' + i + '</button>';
      }
    }, false);
     
    // If the time of the file playing is updated, compare it 
    // to the current end time and stop playing when this one 
    // is reached
    a.addEventListener('timeupdate', function(ev) {
      if (a.currentTime > end) {
        a.pause();
      }
    },false);
     
    // Play the current audio sprite by setting the currentTime
    function play(sound) {
      if ( audiosprite[sound] ) {
        a.currentTime = audiosprite[sound][0];
        end = audiosprite[sound][1];
        a.play();
      }
    }

    Now, this is nothing new – Remy Sharp wrote about audio sprites in 2010 and especially lamented the buggy support in iOS (audio won’t load at all until you activate it with a touch – something that sounds horribly like the “click to activate” Flash had in IE).

    Other issues are looping and latency in HTML5 audio. As reported by Robert O’Callahan, there is a work-around: cloning the audio element before playing it (with an incredibly annoying test). This fix has been used in the Gladius HTML5 game engine.
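
    The cloning idea itself is simple. Roughly (a sketch, not the Gladius code – the element id is made up, and a preloaded audio element is assumed):

    // Play a fresh clone of a preloaded <audio> element each time, so
    // rapid successive triggers don't wait for the original to rewind.
    var effect = document.querySelector('#boing');
    function triggerEffect() {
      var clone = effect.cloneNode(true);
      clone.play();
    }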

    All in all, it seems HTML5 audio still needs a lot of work, which is why a lot of games released lately under the banner of HTML5 use Flash audio or no audio at all. This is sad and needs fixing.

    Interestingly enough, there are some great projects you could be part of. Are we playing yet? by SoundCloud and others, for example, is a test suite for audio support in browsers. You can write your own tests on GitHub and report results to the browser makers.

    The jPlayer team has a great HTML5 Media Event Inspector showing just how many of the HTML5 media events are supported in your current browser.

    If you want to be safe, you can use SoundManager 2 by Scott Schiller to have an API that uses HTML5 when possible and falls back to Flash when the browser doesn’t have any support. It also fixes a few issues for you.

    Speaking of Scott Schiller, he continually gives good insight on the state of audio. There is a 51-minute video of his 24 ways article “Probably, Maybe, No: The State of HTML5 Audio”.

    A shorter and more recent talk on the same subject is also available:

    All in all it would be interesting to hear what you think of the state of HTML5 audio:

    • Did the companies that heralded HTML5 as the end of plugins drop the ball?
    • Is it really sensible to have an API that returns “probably”, “maybe” or an empty string when you ask it if the browser can play a certain type of media?
    • What could be done to work around these issues?

    Let’s re-ignite the discussion on HTML5 audio – after all, we need it for the future of messaging in the browser and telephony, too.

    Oh, and another thing. Of course there are the Audio Data API in Firefox and the Web Audio proposal from WebKit, but getting those running on mobile devices will be a much bigger challenge. If you want to know more about them and about libraries that work around their differences, there is a great overview post available on Happyworm.

  8. Making the Dino roar – syncing audio and CSS transitions

    It started with Brian King setting up our Google+ page using this round MDN logo by John Slater. I thought it looked cool and reminded me of the famous MGM intro, so I wondered if I could turn it into an intro for our video tutorials (not sure if we will do that, though). Some Photoshop and sound work later, with a sprinkle of HTML5 audio and CSS transitions, here we are (source on GitHub):

    I started with the sound. If you need Creative Commons licensed sounds, Freesound is a good resource. So I took Chinese Fanfare by Nick-Nack and Roar by CGEffex and put them together in Audacity.

    Saving them as OGG and MP3 gave me an audio element that I could tie into. All I needed was to listen to the timeupdate event and compare the currentTime to trigger the animations. The animations (rotation of the dino, and opening and closing of the jaw) are CSS transitions triggered by classes on the parent element, as shown in the sketch below. The main trick was to store both the dino and the jaw inside a div and transition them separately. The jaw animation also needed a change in transform origin, as we don’t rotate the image around its center.
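
    In outline, the syncing code looks something like this (a simplified sketch – the class name, cue time and element ids are made up, not the demo’s actual values):

    // Trigger a CSS transition at a fixed point in the audio timeline.
    // The 'roar' class is assumed to carry the transition rules in CSS
    // (e.g. rotating the jaw image, with transform-origin at the hinge).
    var audio = document.querySelector('audio'),
        stage = document.querySelector('#stage'); // div wrapping dino + jaw
    audio.addEventListener('timeupdate', function () {
      if (audio.currentTime > 2.5 && !stage.classList.contains('roar')) {
        stage.classList.add('roar');
      }
    }, false);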

    If you have seven minutes to spare, here is a blow-by-blow screencast explaining what is going on:

  9. speak.js: Text-to-Speech on the Web

    Text-to-Speech (TTS) can make content more accessible, but there is so far no simple and universal way to do that on the web. One possible approach is shown in this demo, which is powered by speak.js, a new 100% pure JavaScript/HTML5 TTS implementation. speak.js is a port of eSpeak, an open source speech synthesizer, from C++ to JavaScript using Emscripten.

    Compiling an existing speech synthesis engine to JavaScript is a good way to avoid writing a complicated project like eSpeak from scratch. Once compiled, the eSpeak code in speak.js doesn’t know it’s running on the web: speak.js uses the Emscripten emulated filesystem to ‘fake’ the normal file reading and writing calls that the eSpeak C++ code has (fopen, fread, etc.). This allows the normal eSpeak datafiles to be used (either through an xhr, or by converting them to JSON and bundling them with the script file). The result of running the compiled eSpeak code is that it ‘writes’ a .wav file with the generated audio to the emulated filesystem. speak.js then takes that data, encodes it using base64, and creates a data URL. That URL is then loaded in an HTML5 audio element, letting the browser handle playback. (Note that while that is a very simple way to do things, it isn’t the most efficient. speak.js has not yet focused on speed, but with some additional work it could be much faster, if that turns out to be an issue.)
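
    Stripped of the eSpeak specifics, the final playback step follows a general pattern you can reuse (a sketch; wavBinaryString is a stand-in for the generated .wav bytes as a binary string):

    // Turn generated WAV data into a data URL and let the browser play it.
    function playWav(wavBinaryString) {
      // btoa() base64-encodes the binary string; the result becomes the
      // payload of an audio/x-wav data URL, handed to an audio element.
      var url = 'data:audio/x-wav;base64,' + btoa(wavBinaryString);
      new Audio(url).play();
    }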

    Why would you want TTS in JavaScript? Well, with speak.js you can bundle a single .js file in your website, and then generating speech is about as simple as writing

    speak("hello world")

    (see the speak.js website for instructions). The generated speech will be exactly the same on all platforms, unlike if your users each did TTS in their own way (using an OS capability, or a separate program). speak.js can also be used to build browser addons in a straightforward way, since it’s pure JavaScript – no need for platform dependent binaries, and the addon will work the same on all OSes.
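
    For instance, wiring speech to a button takes only a few lines (assuming the speak.js script is already included in the page as its website describes, and a hypothetical button with id “say” exists):

    // speak() is the global defined by speak.js once its script is loaded.
    document.querySelector('#say').addEventListener('click', function () {
      speak('hello world');
    }, false);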

    A few more comments:

    • JavaScript is getting more and more capable all the time. The development versions of the top JavaScript engines today can run code compiled from C++ only 3-5X slower than a fast C++ compiler, and getting even better. As a consequence, expanding the capabilities of the web platform can in many cases be done in JavaScript or by compiling to JavaScript, instead of adding new code to the browsers themselves, which inevitably takes longer – especially if you wait for all browsers to implement a particular feature.
    • While speak.js uses only standards-based APIs, due to browser limitations it can’t work everywhere yet. It won’t work in IE, Safari or Opera since they don’t support typed arrays, nor in Chrome since it doesn’t support WAV data URLs. So currently speak.js only works properly in Firefox. However, the missing features just mentioned are not huge and hopefully those browser makers will implement them soon. It is also possible to implement workarounds in speak.js for these issues (see next comment).
    • Help with improving speak.js is very welcome! One important thing we need is to implement workarounds for the issues that prevent speak.js from running on the browsers it currently can’t run on. Another goal is to build browser addons using speak.js. Please get in touch on github if you want to help out.
    • eSpeak supports multiple languages so speak.js can too. You do need to include the additional language files though. Here is an experimental build where you can switch between English and French support (note that it is an unoptimized build, so it will run slower).