Using mozjpeg to Create Efficient JPEGs

The mozjpeg project recently released version 2.1. It aims to improve the efficiency of JPEG encoding over existing encoders while maintaining compatibility with the vast majority of existing decoders.

I’d like to explain how you can use this software to reduce the size of your JPEGs. Specifically, I’m going to go over usage of mozjpeg’s cjpeg command line utility.

Building mozjpeg

There are no official mozjpeg binaries available, so you’ll have to build the project from source. Do this by downloading the source code for the latest release and following the instructions in the file BUILDING.txt.

Building on Linux and OS X is quite easy. Things are a little more complicated on Windows, but still manageable. The mozjpeg project is considering providing binary releases in the future.
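As a rough sketch, a typical Linux or OS X build looks something like the following. The exact steps are defined in BUILDING.txt, so treat this as an outline rather than the definitive procedure; the repository URL and the autotools workflow shown here are assumptions.

```shell
# Sketch of a source build, assuming the autotools workflow from BUILDING.txt
git clone https://github.com/mozilla/mozjpeg.git
cd mozjpeg
autoreconf -fiv   # generate the configure script
./configure       # set up the build for this machine
make              # produces the cjpeg and djpeg binaries in the build tree
```

If any step differs in your release, BUILDING.txt in the source tree is the authoritative reference.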

When you build mozjpeg you’ll produce a command line tool called cjpeg. This is the encoding program that comes with mozjpeg. The decoder is called djpeg.

Input Image Formats

The cjpeg tool is capable of handling the following input file types: Targa, BMP, PPM, and JPEG.

The ability to take JPEG files as input is a new feature in mozjpeg that doesn’t yet exist in the cjpeg tools provided by other projects. It was added to make re-compression workflows easier: instead of converting JPEGs to BMP or some other format and then re-encoding them, you can pass the JPEG directly to the cjpeg encoder.
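A re-compression pass is then a single command (the filenames here are just placeholders):

```shell
# Re-encode an existing JPEG at a lower quality, no intermediate BMP needed
cjpeg -quality 75 original.jpg > recompressed.jpg
```

Note that this re-encodes the already-lossy JPEG data, so some generational quality loss is inherent to any re-compression workflow.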

If you want to create a JPEG from a file type not supported by cjpeg, take a look at the ImageMagick tools. They include a command-line utility called convert that can convert back and forth from many types of images.
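For instance, assuming ImageMagick is installed, a PNG (a format cjpeg does not read) can be converted to BMP first and then encoded; the filenames are illustrative:

```shell
# Convert a PNG to BMP with ImageMagick, then encode the BMP with cjpeg
convert photo.png photo.bmp
cjpeg -quality 80 photo.bmp > photo.jpg
```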

Basic Usage of the cjpeg Command Line Tool

Most people who use mozjpeg’s cjpeg use it in its simplest form:

$ cjpeg -quality 80 foo.bmp > bar.jpg

This will create a JPEG called “bar.jpg” from the “foo.bmp” input file at quality level 80. cjpeg writes the resulting JPEG data to standard output, so if you want to save it to disk, which you probably do, you’ll need to redirect the output to a file with ‘>’.

Selecting a Quality Level

Quality can range from 0 at the low end to 100 at the high end. Most people will want to pick a quality value somewhere between 60 and 90. You’ll want to use the lowest value that still produces an image at a quality you’re happy with, because lower values produce images with smaller file sizes.

The following image shows an original image as well as the same image encoded at five different quality levels with mozjpeg’s cjpeg.

[Figure: JPEG quality comparison — the original alongside five quality levels]

(Image courtesy of Soulmatesphotography, via Wikimedia, Creative Commons Attribution-Share Alike 3.0 Unported License)

Do some experimenting here. Lots of people don’t, and they miss out on significant reductions in file size. Their thinking is often something along the lines of “80 seems like a good compromise, and I hear that’s what most people do, so I’ll do that.” If you can’t tell the difference between an image at quality 77 vs 80, and you’re using 80, you’re missing out on significant file size savings at no cost to you in terms of quality.
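A quick way to do that experimenting is to encode the same source at several qualities and compare the resulting files side by side; the quality list and filenames below are just examples:

```shell
# Encode one source image at a range of qualities and list the resulting sizes
for q in 60 70 77 80 90; do
  cjpeg -quality "$q" foo.bmp > "foo-q$q.jpg"
done
ls -l foo-q*.jpg
```

Open the outputs next to each other and pick the smallest file whose quality you still find acceptable.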

Progressive vs. Baseline JPEGs

The mozjpeg version of cjpeg produces progressive JPEGs by default because their file sizes tend to be about 4% smaller than baseline JPEGs. Progressive JPEGs can appear at full size but blurry at first, and then progress to full resolution as the image downloads. Baseline JPEGs simply load from top to bottom.

[Figure: comparison of progressive vs. baseline JPEG loading]

(Image courtesy of Soulmatesphotography, via Wikimedia, Creative Commons Attribution-Share Alike 3.0 Unported License)

If you want to produce a baseline JPEG, you can do so like this:

$ cjpeg -baseline -quality 80 foo.bmp > bar.jpg

Targeting Metrics

A cool feature of mozjpeg’s cjpeg is that you can optimize for any of four specific quality metrics: PSNR, PSNR-HVS-M, SSIM, and MS-SSIM. These metrics are algorithms that score the quality of a compressed image relative to the original. More formally, this is referred to as measuring distortion.

These algorithms differ in how they define quality, so optimizing for one metric can hurt performance on another. See the July 2014 lossy compressed images study from Mozilla for many examples of what this looks like.

mozjpeg tunes for PSNR-HVS-M by default, because tuning for this metric tends to perform well across all of them.

If you or your organization have decided that you trust a particular metric more than the others, you can tell mozjpeg’s cjpeg tool about your preference and it will tune for that metric.
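Each metric has a corresponding tuning flag in mozjpeg’s cjpeg; check cjpeg’s help output in your build to confirm the exact spellings, as the flag names below are taken from mozjpeg’s documentation rather than the article itself. Tuning for MS-SSIM, for example, looks like:

```shell
# Tune the encoder for the MS-SSIM metric instead of the default PSNR-HVS-M
cjpeg -tune-ms-ssim -quality 80 foo.bmp > bar.jpg

# The other metrics have analogous flags:
#   -tune-psnr, -tune-hvs-psnr, -tune-ssim
```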


Hopefully at this point you know enough to confidently start making use of mozjpeg to optimize your JPEGs. If you run into any issues, please report them on mozjpeg’s GitHub issue tracker.

About Robert Nyman [Editor emeritus]

Technical Evangelist & Editor of Mozilla Hacks. Gives talks & blogs about HTML5, JavaScript & the Open Web. Robert is a strong believer in HTML5 and the Open Web and has been working since 1999 with Front End development for the web - in Sweden and in New York City. He also blogs regularly and loves to travel and meet people.

More articles by Robert Nyman [Editor emeritus]…



  1. Eric Johnson

    What is the file size savings you’re seeing so far?

    August 6th, 2014 at 08:53

    1. Josh Aas

      5% is the approximate average improvement over all metrics. You can see significantly more or less based on the image you’re compressing and the metric you care about. See the following study:

      August 6th, 2014 at 11:54

  2. Omega

    “The mozjpeg project is considering providing binary releases in the future.”

    Compiling binaries should be child’s play for an organization like Mozilla. Not sure why the song and dance.

    August 6th, 2014 at 08:55

    1. Adam Tajti

      Agreed, though developers should have the skills to quickly compile and use the tool.

      August 6th, 2014 at 09:23

    2. Robert Nyman [Editor]

      I assume song and dance is referring to source code and building? In that case, I like the term. :-)
      We hope to be providing binaries later on, also depending on request level, but it’s not something in place just right now.

      August 6th, 2014 at 13:37

    3. Adam

      There is a binary release of mozjpeg to download from

      August 7th, 2014 at 12:30

  3. Robson

    Even with quality 90 it isn’t that good. The noise created by the compression is too visible.

    It looks like still do a better job.

    Let’s see what the future of the project can bring.

    August 6th, 2014 at 09:10

    1. Robert Nyman [Editor]

      Try a number of different images, see what you think. And yes, please follow along with the future progress as well.

      August 6th, 2014 at 13:38

  4. Signez

    I think you made a typo.

    cjpeg -quality 80 foo.jpg > bar.jpg

    should be

    cjpeg -quality 80 foo.bmp > bar.jpg

    if I understand correctly.

    August 6th, 2014 at 09:28

    1. Robert Nyman [Editor]

      We discussed a bit whether BMP or JPG would be the most common format people have, and missed matching the command and the text in that place. Updated, thanks!

      August 6th, 2014 at 11:24

  5. André

    Is there a way to run mozjpeg on Windows Server?

    August 6th, 2014 at 09:31

    1. Josh Aas

      You can compile mozjpeg on Windows, from there it’s up to you to figure out how to make use of the library or cjpeg in your toolchain.

      August 6th, 2014 at 11:56

      1. André

        I can’t locate the “ALL_BUILD.vcproj” in the current mozjpeg v2.1 Release. Where to find it?

        August 6th, 2014 at 23:47

      2. André

        August 6th, 2014 at 23:49

        1. Robert Nyman [Editor]

          To my knowledge, the “ALL_BUILD.vcproj” file should be generated when you run the cmake command.

          August 7th, 2014 at 03:39

  6. Steve Souders

    I’m excited to start using mozjpeg. This article is really helpful to get me going. You suggest doing some experimenting with the quality level. I’m hoping to integrate this into a system that will compress millions of images. It would be time consuming to experiment over this large number of images, but if I experiment over a more feasible subset the exact quality level is likely to vary depending on the image. Do you have any suggestions for choosing a quality level in this scenario? Thanks!

    August 6th, 2014 at 09:35

    1. Josh Aas

      What qualities to test depends on what you’re interested in. I’d suggest testing a few single images at various qualities to get an idea of what seems best before you do a run with millions of images.

      You should also take a look at this study which we just released:

      This has quite a bit of information about quality in it, and could inform your decision. The data was collected with this software:

      It has parallel processing for images. For the study I linked to I was using a 32-core RHEL AWS instance and it chewed through images pretty quickly.
      Once you collect data with this software you can easily graph the results.

      August 6th, 2014 at 14:42

  7. Phil

    I like the ability to pass JPEGs directly to the cjpeg encoder. Great article, thanks.

    August 6th, 2014 at 10:05

    1. Robert Nyman [Editor]

      Good to hear, and thanks!

      August 6th, 2014 at 13:39

  8. Josiah Carlson

    The command-line invocations of cjpeg listed in the article are incorrect.

    The commands should read:
    cjpeg -quality 80 foo.bmp > foo.jpg
    cjpeg -baseline -quality 80 foo.bmp > foo.jpg

    August 6th, 2014 at 10:29

    1. Robert Nyman [Editor]

      We talked about using BMP or JPG as examples, and missed matching commands and text there. Thanks, updated the article!

      August 6th, 2014 at 11:26

  9. Mark Ransom

    Since you appear to be measuring the quality metric already, would it be feasible to specify the desired metric value rather than the quality level? This would make the results much less dependent on image content and get the highest practical compression in every case.

    August 6th, 2014 at 12:11

    1. Josh Aas

      We aren’t really measuring per the metric, we’re just using configuration options that are known to do well for the currently configured metric. Specifying quality via a metric value is an interesting idea, but it’s complicated and I doubt it would be used very often.

      August 6th, 2014 at 14:35

  10. Volker E.

    A great development! I’d also like to see development not only on the library itself, but also, as you write, on binaries and on integrations into task runners, to target a broad developer audience.
    The current integration into grunt-contrib-imagemin, for example, failed pitifully.

    Btw, there’s still an error in the description: ‘This will create a JPEG called “foo.jpg”’ -> should become ‘This will create a JPEG called “bar.jpg”‘.

    August 7th, 2014 at 03:49

    1. Robert Nyman [Editor]

      Thanks for the input! We’re constantly evaluating what the next steps will be. Also, description fixed – appreciated.

      August 7th, 2014 at 07:53

  11. Stephan B.

    I would have a use case, but it involves more than 100,000 small images. Any plans for batch processing?

    August 7th, 2014 at 11:09

  12. Frédéric Kayser

    You can fine-tune the chroma quality by setting a second quality value. For instance, -quality 80,60 will apply quality 80 to the luma (Y) component and 60 to the chroma components (Cr & Cb). To enforce 4:2:0 chroma subsampling, add a -sample 2x2 option (WebP always does this). In my experience the basic settings keep too much chroma signal, and lowering its quality lets you save even more bytes.

    August 7th, 2014 at 11:53

Comments are closed for this article.