Clever JPEG Optimization Techniques


When people talk about image optimization, they usually consider only the limited parameters offered by popular image editors: the “Quality” slider, the number of colors in the palette, dithering and so on. A few utilities, such as OptiPNG [1] and jpegtran [2], also manage to squeeze extra bytes out of image files. All of these are well-known tools that give web developers and designers straightforward image-optimization techniques.

In this article, we’ll show you a different approach to image optimization, based on how image data is stored in different formats. Let’s start with the JPEG format and a simple technique called the eight-pixel grid.

Eight-Pixel Grid

As you already know, a JPEG image consists of a series of 8×8 pixel blocks, which can be especially visible if you set the JPEG “Quality” parameter too low. How does this help us with image optimization? Consider the following example:


32×32 pixels, Quality: 10 (in Photoshop), 396 bytes.

Both white squares are the same size: 8×8 pixels. Although the Quality is set low, the lower-right corner looks fuzzy (as you might expect) and the upper-left corner looks nice and clean. How did that happen? To answer this, we need to look at this image under a grid:

As you can see, the upper-left square is aligned into an eight-pixel grid, which ensures that the square looks sharp.

When saved, the image is divided into blocks of 8×8 pixels, and each block is compressed independently. Because the lower-right square does not match the grid cells, the encoder has to approximate the black-to-white transition inside the affected blocks (in JPEG, each 8×8 block is encoded as a weighted sum of cosine waves via the discrete cosine transform). This explains the fuzz. Many advanced JPEG optimization utilities exploit this block structure with a feature called selective optimization, which applies coefficients of different quality to different image regions and saves more bytes.
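To see why a block-aligned edge compresses so cleanly, we can compute the discrete cosine transform of an 8×8 block ourselves. The sketch below is a minimal pure-Python illustration, not Photoshop's actual encoder: a block that is uniformly one color needs only its DC coefficient, while a block that an edge crosses needs several AC coefficients, and quantizing those is what produces the fuzz.

```python
import math

def dct2(block):
    """Naive 8x8 two-dimensional DCT-II, the transform JPEG applies per block."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def nonzero_ac(coeffs, eps=1e-6):
    """Count AC (non-DC) coefficients the encoder would have to keep."""
    return sum(1 for u in range(8) for v in range(8)
               if (u, v) != (0, 0) and abs(coeffs[u][v]) > eps)

aligned = [[255] * 8 for _ in range(8)]             # grid-aligned: a solid white block
crossed = [[255 if y < 4 else 0 for y in range(8)]  # black/white edge crossing mid-block
           for _ in range(8)]

print(nonzero_ac(dct2(aligned)))  # 0: only the DC term is needed
print(nonzero_ac(dct2(crossed)))  # several AC terms are needed to draw the edge
```

This is exactly why the grid-aligned square survives even a Quality of 10: its blocks are trivial to encode.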

This technique is especially useful for saving rectangular objects. Let’s see how it works with a more practical image:


13.51 KB.


12.65 KB.

In the first example, the microwave oven is randomly positioned. Before saving the second file, we align the image with the eight-pixel grid. Quality settings are the same for both: 55. Let’s take a closer look (the red lines mark the grid):

As you can see, we’ve saved 1 KB of image data simply by positioning the image correctly. Also, we made the image a little “cleaner,” too.

Color Optimization

This technique is rather complicated and works only for certain kinds of images. But we’ll go over it anyway, if only to learn the theory.

First, we need to know which color model the JPEG format uses. Most image formats store pixels in the RGB color model, but JPEG typically stores them in YCbCr, a model widely used for television.

YCbCr is similar to the HSV model (which should be familiar to most designers) in that both separate lightness, to which the human visual system is very sensitive, from chroma. HSV has three components: hue, saturation and value. The most important one for our purposes is value, also known as lightness: optimizers tend to compress the color channels aggressively but preserve the lightness channel as faithfully as possible, because we are most sensitive to it. Photoshop has a Lab color mode, which helps us better prepare the image for compression by a JPEG optimizer.
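This separation is easy to see in the standard RGB-to-YCbCr conversion that JFIF-style JPEG applies before compressing (the ITU-R BT.601 coefficients shown here are the commonly documented ones): one channel carries lightness, the other two carry only chroma.

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF RGB -> YCbCr conversion using ITU-R BT.601 coefficients."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b  # lightness
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr

# Black and white differ only in Y; their chroma channels are both neutral (128).
# That is why an optimizer can squeeze Cb/Cr hard without touching perceived detail.
print(rgb_to_ycbcr(255, 255, 255))  # approximately (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # approximately (0, 128, 128)
```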

Let’s stick with the microwave oven as our example. There is a fine net over the door, which is a perfect sample for our color optimization. After a simple compression, at a Quality of 55, the file weighs 64.39 KB.

990×405 pixels, Quality: 55 (in Photoshop), 64.39 KB. [3]
Larger version. [4]

Open the larger version of the image in Photoshop, and turn on Lab Color mode: Image >> Mode >> Lab Color.

Lab mode is almost, but not quite, the same as HSV and YCbCr. The Lightness channel contains the image’s lightness information, the A channel encodes how much red or green there is, and the B channel handles blue and yellow. Despite these differences, this mode allows us to get rid of redundant color information.

Switch to the Channels palette and look at the A and B channels. We can clearly see the texture of the net, and there seem to be three blocks of differing lightness intensity.

We are going to be making some color changes, so keeping an original copy open in a separate window will probably help. Our goal is to smooth the grainy texture in these sections in both color channels. This will give the optimizer much simpler data to work with. You may be wondering why we are optimizing this particular area of the image (the oven door window). Simple: this area is made up of alternating black and orange pixels. Black is zero lightness, and this information is stored in the lightness channel. So, if we make this area completely orange in the color channels, we won’t lose anything because the lightness channel will determine which pixels should be dark, and the difference between fully black and dark orange will not be noticeable on such a texture.

Switch to the A channel, select each block separately and apply a Median filter (Filter >> Noise >> Median). The radius should be as small as possible, just enough to make the texture disappear, so as not to distort the glare too much. Use 4 for the first block, 2 for the second and 4 for the third. At this point, the door will look like this:
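What the Median filter does to a chroma channel can be sketched in a few lines. The `median_filter` below is a plain sliding-window median with edges clamped, an illustrative stand-in rather than Photoshop's exact implementation, but the effect is the same: isolated chroma grain is replaced by the color of its surroundings, while the lightness channel stays untouched.

```python
import statistics

def median_filter(channel, radius):
    """Replace each sample with the median of its (2*radius+1)^2 neighborhood."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)  # clamp at the image edges
                    nx = min(max(x + dx, 0), w - 1)
                    vals.append(channel[ny][nx])
            out[y][x] = statistics.median(vals)
    return out

# A flat orange chroma field with one grainy dark sample, like the net texture:
channel = [[200] * 6 for _ in range(6)]
channel[2][2] = 0
smoothed = median_filter(channel, 1)
print(smoothed[2][2])  # 200: the grain is gone, the field is flat and cheap to encode
```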

Larger version. [6]

The saturation is low, so we’ll need to fix this. To see all color changes instantly, duplicate the currently active window: Window >> Arrange >> New Window. In the new window, click on the Lab channel to see the image. As a result, your working space should look like this:


The original is on the right, the duplicate on the left and the workplace at the bottom.

Select all three blocks in the A channel in the workplace, and call up the Levels window (Ctrl+L or Image >> Adjustments >> Levels). Move the middle slider to the left so that the color of the oven’s inside in the duplicate copy matches that of the original (I got a value of 1.08 for the middle slider). Do the same with the B channel and see how it looks:

990×405 pixels, Quality: 55 (in Photoshop), 59.29 KB. [7]
Larger version. [8]

As you can see, we removed 5 KB from the image (originally 64.39 KB). Although the description of this technique looks long and scary, it takes only about a minute to perform: switch to the Lab color model, select different regions of the color channels and run the Median filter on them, then do some saturation correction. As mentioned, this technique is mostly useful for the theory behind it, but I use it to fine-tune large advertising images that have to look clean and sharp.

Common JPEG Optimization Tips

We’ll finish here by offering some useful optimization tips.

Whatever compression quality you select, be deliberate in your choice of the program you use for optimization. The JPEG standard is strict: it only defines how image data is transformed when the file size is reduced. Exactly what the optimizer does is up to its developer.

For example, some marketers promote their software as offering the best optimization, allowing you to save files at a small size with high Quality settings, while portraying Photoshop as making files twice as heavy. Don’t be taken in. Each program has its own Quality scale, and particular values may trigger additional optimization algorithms.

The only criterion for comparing optimization performance is the quality-to-size ratio. An image saved at a Quality of 55 to 60 in Photoshop will look much the same, and weigh about the same, as a file produced by other software at, say, a Quality of 80.

Never save images at a Quality of 100. This is not the highest possible quality, but merely a mathematical optimization limit, and you will end up with an unreasonably heavy file. Saving an image at a Quality of 95 is more than enough to prevent visible loss.

Keep in mind that when you set the Quality to 50 or below in Photoshop, it runs an additional optimization algorithm called chroma down-sampling, which averages out the color information of neighboring pixels while leaving the lightness data untouched:


48×48 pixels, Quality: 50 (in Photoshop), 530 bytes.


48×48 pixels, Quality: 51 (in Photoshop), 484 bytes.

So, if the image has small, high-contrast details, setting the Quality to at least 51 in Photoshop is better.
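The down-sampling step itself is simple to sketch. The snippet below assumes standard 4:2:0-style chroma subsampling (the article describes Photoshop's low-Quality behavior only loosely, so treat this as the textbook version): every lightness sample is kept, but each 2×2 group of chroma samples is averaged into one, which is exactly what blurs small, high-contrast color details.

```python
def subsample_chroma(channel):
    """4:2:0-style subsampling: average each 2x2 block of a chroma channel."""
    h, w = len(channel), len(channel[0])
    return [[(channel[y][x] + channel[y][x + 1]
              + channel[y + 1][x] + channel[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A tiny high-contrast checkerboard in a chroma channel loses its detail entirely:
cb = [[0, 255, 0, 255],
      [255, 0, 255, 0],
      [0, 255, 0, 255],
      [255, 0, 255, 0]]
print(subsample_chroma(cb))  # [[127.5, 127.5], [127.5, 127.5]]
```

Every pixel-level color alternation collapses to a single averaged value, which is why fine colored details smear below that Quality threshold.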

(al)

Footnotes

  1. http://optipng.sourceforge.net/
  2. http://sylvana.net/jpegcrop/jpegtran/
  3. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/co-original.jpg
  4. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/co-original.png
  5. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/channel-a-blured.jpg
  6. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/channel-a-blured.jpg
  7. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/optimized.jpg
  8. http://www.smashingmagazine.com/wp-content/uploads/images/jpg-optimization-techniques/optimized.jpg


Sergey Chikuyonok is a Russian front-end web developer and writer with a big passion for optimization: from images and JavaScript effects to workflow and time savings in coding.

  1.

    Is it worth it for 5kb?

  2.

    Sergey Chikuyonok

    July 2, 2009 1:25 am

    Which programs do you recommend for JPEG optimization? How do they compare to Photoshop when it comes to JPEG optimization?

    I recommend the xat.com image optimizer. It has a nice selective optimization engine, but it’s Windows-only.

    It is also possible to do selective optimization in Photoshop, but it’s really buggy.

    What about an article on Adobe(R) programs: Photoshop, Fireworks etc. Which one is best used for what application/image?

    I prefer to use Photoshop for my daily work, because I do a lot more with images when exporting them for the web (and you’ll see it in my next PNG article). I was looking for a Photoshop alternative for a few years, but none of the software I tried contained even 20% of the tools I need.

  4.

    I have a question, possibly related to this article (thank you for it, by the way; a lot of useful information). What happens to images dominated by red, pink and similar colors when they are uploaded somewhere that enforces a restricted file size or weight, such as photo albums on MySpace or Facebook, which compress images heavily to their own standards? Nothing bad happens to images (JPEGs as a rule) in any other range of colors, but when red dominates, something awful happens to the image; I am sure you know how it looks. Is there any method to avoid this catastrophe? I recently made a poster (for the web, not for print, which is why it is a JPEG in RGB); to be honest, it looks very good in a viewer, in Photoshop and everywhere else at its original quality and weight, but I uploaded it to a Facebook album (which of course applies some compression) and the result shows the strongest pixelation and of course looks ugly. This is a real puzzle and headache for me. If somebody knows what to do with red images to avoid such trouble, please share your secrets. Thank you in advance.

  5.

    Really interesting… some very good tips.

  6.

    Great article! Also check out the ImageOptim tool, for Mac users.

  7.

    Nice and useful article.

    However, there is quite a serious typo at the beginning of the Color Optimization section (it just sounds odd):
    “RGB is similar to the HSV model”
    should be:
    “YCbCr is similar to the HSV model
    in the sense that YCbCr and HSV both separate
    lightness, to which the human visual system is very sensitive, from chroma.”

  8.

    Thanks, it’s a brain-refreshing article.

  9.

    Really really reaaaaaly nice post.

    :)

  10.

    It got too technical for me there, I’d rather read an article about best export settings for different types of images, common situations etc!

  11.

    That was such an interesting post – well done. I had no idea about these techniques, I’ll be using them first thing tomorrow!

  12.

    Jaypegg is a silly format for design where shapes and shit need to stay in one colour. Save it for the office ladies who e-mail attachments of their desktops to their co-workers.

  13.

    @Jubal: A 10% reduction in filesize, and a 10% improvement in a suggestion algorithm are two completely different things, that can’t even be compared.

  14.

    great article! Looking forward to the PNG article :)

  15.

    wow. This is a level of insight I haven´t found anywhere else so far. And yet all very relevant to practical usage.
    Truly excellent.

  16.

    Thanks so much for this article.
    I agree, this is a bit techie in order to save a few bytes. And the people who would do this amount of work are probably already conscientious of their file sizes. However, it does help in the understanding of the compression techniques. I try to be thoughtful of file sizes, considering my personal website has limited storage space.
    Although I know the generalities of compression (the JPEG blocks, GIF compression) a good comparison of JPEG, GIF and PNG would be very helpful. I *thought* PNG was developed to be the replacement for GIF back when there were patent issues with GIF, but it seems like PNG files are always larger than GIF and JPEG files. And what about JPEG2000? What ever happened to that format?

  17.

    Jaypegg2000 got SMASHED by the Y2K-bug! :O

  18.

    This article is a tremendous help! I can’t wait to read the PNG article!

    thank you!

  19.

    I agree that saving a few bytes here and there adds up, and for that reason I’ll back this up. There’s only one problem, though: what if you have thousands of images? I don’t fancy sitting there doing all that work. It might save on bandwidth, but it doesn’t save my time. And if I’m paying someone to do it, well, I don’t need to know this. Just my opinion.

  20.

    Really cool article! It is really useful for web designers like me.

  21.

    Sergey Chikuyonok

    July 2, 2009 5:58 am

    I *thought* PNG was developed to be the replacement for GIF back when there were patent issues with GIF, but it seems like PNG files are always larger than GIF and JPEG files.

    JPEG is good for photographic images, while PNG is the best format for storing line art (logos, vector graphics, gradients, etc.). JPEG almost always introduces some image-quality degradation, while PNG saves image data as is, without changing it.

    And PNG is better than GIF in every respect. There are some cases where GIF is smaller than PNG (because of some overhead used to store image data in PNG), but only for very small images.

    And what about JPEG2000? What ever happened to that format?

    JPEG2000 doesn’t have enough support in web browsers, so the format is rather useless for web graphics.

  22.

    Nice approach to the nitty-gritty details Sergey. This is especially helpful on stylized controlled images and backgrounds. Keep them coming. I host a lot of sites on my server, and this sort of optimization can help keep them under storage and bandwidth limits.

  23.

    As far as application goes, I see a marginal number of people making use of this tutorial more than occasionally – despite the value it can provide. From the theory side of things, however, this was fascinating and worth the read. For those of us without art or graphic design backgrounds, every little piece of this helps us better understand digital processes and overall design technique.

  24.

    @greg: I think the issue here is workflow and considering the merits of spending a minute per image on a process that has a relatively limited payoff.

    Being a professional includes making sure that one is working to schedule and using allocated hours wisely. It’s to do with simple economics (not ethics).

    BTW, “lossless” algorithms such as those used in the PNG-24 format result in no loss of data but usually produce larger file sizes, which kind of contradicts your argument.

  25.

    Dr. Girlfriend

    July 2, 2009 8:23 am

    Well done, Sergey! I learned these techniques back in school — from one of my professors who worked at NASA Ames doing this:
    http://128.102.216.35/factsheets/view.php?id=56

    Thanks for the refresher course. You have an enviable way of distilling complex subject matter into easy to understand language. I’m looking forward to your follow-up article.

  26.

    Anthony Proulx

    July 2, 2009 8:33 am

    Interesting. I have been optimizing in Photoshop for quite some time; it’s good to know in more depth how it all works, and to pick up some tips to do it better.

  27.

    I’d use PNGs exclusively if it wasn’t for goddamn IE6.

  28.

    “Not worth saving a few bytes”…

    This is the same argument people present for not living greener – that they alone make no difference. In mass quantities, little changes make a huge difference in the overall quality of life – and in this case, user experience.

    Value is a subjective metric of satisfaction. What makes one person very happy may be completely irrelevant to another.

    Come on, guys, the tech-web culture is supposed to be smarter than this. Don’t say something is worthless just because it doesn’t appease your personal interests.

  29.

    “I’d use PNGs exclusively if it wasn’t for goddamn IE6.”

    LOL… absolutely!

    I’ve started using an Apache redirect for IE6 clients – it takes them to the mobile version of the respective site. That’s about the only thing IE6 can digest!

  30.

    Can anyone clarify the difference between normal JPEG in Photoshop versus the Save For Web JPEG algorithm? Save for Web Optimized are often fractions of the size– is there a more complicated order of operations that JPEG considers other than color-down sampling at 50 percent?

  31.

    For JPEG optimization, you can try using punypng.

    It’s a service I wrote that is kinda similar to Smush.it. However, unlike Smush.it, it’s a little smarter when dealing with JPEGs, as it will try to use lossless PNG compression (which might be beneficial if the JPEG has a lot of solid colors). If that doesn’t produce savings, punypng will use jpegtran to strip out metadata. Either way, it’ll help you produce the smallest JPEGs possible.

    Give punypng a try.

  32.

    Despite all the naysayers of JPEG, this article is very interesting and informative. Personally, yes, I use PNG most of the time, but the insight into image data you’ve provided is worthwhile.

  33.

    great post and very informative… great reading through the comments here as well.

    not sure, but i wonder if i’m the only one thinking that’s a pretty slick looking microwave…

  34.

    Sergey Chikuyonok

    July 2, 2009 11:32 am

    Can anyone clarify the difference between normal JPEG in Photoshop versus the Save For Web JPEG algorithm?

    In both cases Photoshop uses the same algorithm, but when you save a JPEG normally, PS also generates and saves a preview image inside the original one. Thus, you get two images inside a single file.

    The same happens when saving PNG, GIF or any other format. So, if you create images for the web, Save For Web is your best friend.

  35.

    For images that really count where I’m trying to squeeze out the best quality with the smallest file size I’ve used the Photoshop alpha channel quality control. Essentially you create a new alpha channel and mask the most important sections of your image. Using Save for Web, click the icon next to the Quality slider and choose the channel. Now adjust your min and max quality and watch the file size difference. Sure it’s slow if you’re doing batches and not ideal for highly detailed images across the whole rectangle, but for images with solid fields or low detail (sky, walls, bokeh, etc.) this will allow you to selectively control image compression. You can achieve 5-10% file size reduction quite easily. If a quick write-up would help, let me know and I’ll get something posted.

  36.

    Daniel Laskowski

    July 2, 2009 3:20 pm

    Additional technique that has not been mentioned here (also from the “old days”) is to use Photoshop alpha channel to control quality settings. Think of “masking” certain areas of the image and compressing them more than other areas. This can be used to leave one key object sharp at high settings and use low settings for background.

  37.

    The color manipulation trick is cool but a bit adhoc. Insightful article though–Thanks!

  38.

    Tanya,
    I too noticed this weird thing happening to orangey and red little images, not just compressed by other applications but by browsers as well. Not always though. It’s puzzling to say the least. I’d be interested to find out as well.

  39.

    You can tell there are a lot of people here that only work small. It’s not just about lowering the amount of bytes for the visitor, it’s also for reducing the load on the server.

    Huge sites, that obviously most people here never work with, do care about reducing both code and images in file size. This obviously isn’t that important when you do your wordpress websites for someone that has 10 visitors in a month…

    Great idea for an article series.

  40.

    Interesting. Thanks!

  41.

    Nice, in-depth analysis!

    I’ve heard that the eight-pixel-grid tip can be used when optimizing JPEGs for print as well… It’s all about “scaling” the quality so it matches the printer’s output DPI/LPI… Or something like that…

    :)

  42.

    I’d love to see some real-life web case studies, with maybe the top 5 or 10 image optimization workflows professionals are using.
    For each format, .jpg and .png. That would really help me out.
    (Not asking for much, hey? lol)

  43.

    Wow! brilliant post! thanks a lot Smashing Team!
    – GearYourFaith.com

  44.

    Anders Bakfeldt

    July 3, 2009 3:46 am

    .. feel like a complete rookie..

  45.

    Good article, I was unaware of the 8px grid. I have to think some of the uninformed comments here are from those that don’t do much professional web work. No – you probably won’t go to great lengths to optimize a jpg if you’re only uploading it to Facebook or a blog, but if it’s a site for a client, then as a professional you should always strive for the highest quality within a given file size.

    Regarding PNG, those commenting that PNG is only lossless – don’t forget about 8-bit PNG, which is tantamount to GIF (256 colors; alpha channel).

    Sergey, I’d love to see more; maybe covering embedded color profiles in both jpg and png?

  46.

    Very simple and helpful, thank you for your article ;)

  47.

    The optimization is a bit pointless. The size doesn’t matter. A cleaner picture is fine, but now one corner of the microwave screen is cleaner. What’s with the other corner? Unless everything is designed on a 8 pixel grid, this optimization is useless.

  48.

    One of the problems today is that a lot of people have entered the field of web design at a time when bandwidth is cheap and plentiful; there is a reliance on the IDE to do it all for you, and it is assumed that whatever is output is fine as it is. Those of us who grew into web design/development from the CD-ROM industry had to deal with the problems of 56K dial-up and extremely limited storage and traffic allowances (yes, there was a time when web hosts didn’t offer “unlimited everything”). At that time it was imperative that all images were optimized to the full, and saving a couple of KB here and there was cause for celebration.

    That mindset has carried over to this ‘new internet’ for many, and even though the bandwidth and storage are plentiful, the savings that can be achieved through selective optimization are still definitely valid.

    And, additionally, for those of us who care about such things – there’s a sense of achievement in making those savings in file size. It’s the same feeling of accomplishment us programmers get from optimizing code, chopping a few lines out of a program or improving the performance of an algorithm. Most people would never notice, but we know we did it and our day is a little bit brighter for it. :)

  49.

    Thank you for this post.
    Some posts have said that saving a few bytes makes no difference. However, because I work on mobile interfaces, it is critical to save even a few bytes; people want to see the content as quickly as possible.

  50.

    Thanks for sharing, really nice article.

