
Front-End Performance Checklist 2017 (PDF, Apple Pages)

Are you using progressive booting already? What about tree-shaking and code-splitting in React and Angular? Have you set up Brotli or Zopfli compression, OCSP stapling and HPACK compression? Also, how about resource hints, client hints and CSS containment — not to mention IPv6, HTTP/2 and service workers?

Back in the day, performance was often a mere afterthought. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.

Performance isn’t just a technical concern: It matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).

So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2017 — an overview of the issues you might need to consider to ensure that your response times are fast and your website smooth.

(You can also just download the checklist PDF1 (0.129 MB) or download the checklist in Apple Pages2 (0.236 MB). Happy optimizing!)

Front-End Performance Checklist 2017

Micro-optimizations are great for keeping performance on track, but it’s critical to have clearly defined targets in mind — measurable goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on.

Getting Ready And Setting Goals

  1. Be 20% faster than your fastest competitor.
    According to psychological research3, if you want users to feel that your website is faster than any other website, you need to be at least 20% faster. Full-page loading time isn’t as relevant as metrics such as start rendering time, the first meaningful paint4 (i.e. the time required for a page to display its primary content) and the time to interactive5 (the time at which a page — and primarily a single-page application — appears to be ready enough that a user can interact with it).

    Measure start rendering (with WebPagetest) and first meaningful paint times (with Lighthouse) on a Moto G, a mid-range Samsung device and a good middle-of-the-road device like a Nexus 4, preferably in an open device lab8 — on regular 3G, 4G and Wi-Fi connections.

    Google's Lighthouse tool9
    Lighthouse, a new performance auditing tool by Google.

    Look at your analytics to see what your users are on. You can then mimic the 90th percentile’s experience for testing. Collect data, set up a spreadsheet10, shave off 20%, and set up your goals (i.e. performance budgets11) this way. Now you have something measurable to test against. If you’re keeping the budget in mind and trying to ship down just the minimal script to get a quick time-to-interactive value, then you’re on a reasonable path.

    Performance budget builder by Brad Frost12
    Performance budget builder13 by Brad Frost.

    Share the checklist with your colleagues. Make sure that the checklist is familiar to every member of your team to avoid misunderstandings down the line. Every decision has performance implications, and the project would hugely benefit from front-end developers being actively involved when the concept, UX and visual design are decided on. Map design decisions against performance budget and the priorities defined in the checklist.
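
The “20% faster” rule boils down to simple arithmetic: take your fastest competitor’s measured values and shave off 20%. A sketch — the competitor numbers below are purely hypothetical:

```javascript
// Hypothetical competitor metrics, in milliseconds (e.g. from WebPagetest).
const competitor = {
  startRender: 1600,
  firstMeaningfulPaint: 2200,
  timeToInteractive: 4800
};

// Being "20% faster" means each of your budgets is 80% of the competitor's value.
const budget = Object.fromEntries(
  Object.entries(competitor).map(([metric, ms]) => [metric, Math.round(ms * 0.8)])
);

console.log(budget);
```

These budget numbers are then what you test against in your continuous monitoring setup.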

  2. 100-millisecond response time, 60 frames per second.
    The RAIL performance model14 gives you healthy targets: Do your best to provide feedback in less than 100 milliseconds after initial input. To allow for a response in under 100 milliseconds, the page must yield control back to the main thread at most every 50 milliseconds. For high-pressure points like animation, it’s best to do nothing else where you can and the absolute minimum where you can’t.

    Also, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 frames per second (1 second ÷ 60 = 16.6 milliseconds) — preferably under 10 milliseconds. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6-millisecond mark. Be optimistic15 and use the idle time wisely. Obviously, these targets apply to runtime performance, rather than loading performance.
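
One common way to honor those 50-millisecond slices is to chunk long-running work and yield back to the main thread between chunks. A minimal sketch — the function and parameter names are illustrative:

```javascript
// Sketch: process a work queue in slices that respect RAIL's ~50 ms budget,
// yielding to the main thread between slices so input stays responsive.
function processInChunks(items, handleItem, budgetMs = 50, done = () => {}) {
  const queue = items.slice();
  (function work() {
    const sliceStart = Date.now();
    // Do as much work as fits in this slice's time budget.
    while (queue.length && Date.now() - sliceStart < budgetMs) {
      handleItem(queue.shift());
    }
    if (queue.length) {
      setTimeout(work, 0); // yield: let input handling and rendering run
    } else {
      done();
    }
  })();
}
```

In a browser you could also hand the remaining work to `requestIdleCallback` where it’s supported.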

  3. First meaningful paint under 1.25 seconds, SpeedIndex under 1000.
    Although it might be very difficult to achieve, your ultimate goal should be a start rendering time under 1 second and a SpeedIndex16 value under 1000 (on a fast connection). For the first meaningful paint, count on 1250 milliseconds at most. For mobile, a start rendering time under 3 seconds for 3G on a mobile device is acceptable17. Being slightly above that is fine, but push to get these values as low as possible.

Defining the Environment

  1. Choose and set up your build tools.
    Don’t pay much attention to what’s supposedly cool these days. Stick to your environment for building, be it Grunt, Gulp, Webpack, PostCSS or a combination of tools. As long as you are getting results fast and you have no issues maintaining your build process, you’re doing just fine.
  2. Progressive enhancement.
    Keeping progressive enhancement18 as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient19 experiences. If your website runs fast on a slow machine with a poor screen in a poor browser on a suboptimal network, then it will only run faster on a fast machine with a good browser on a decent network.
  3. Angular, React, Ember and co.
    Favor a framework that enables server-side rendering. Be sure to measure boot times in server- and client-rendered modes on mobile devices before settling on a framework (because changing that afterwards, due to performance issues, can be extremely hard). If you do use a JavaScript framework, make sure your choice is informed20 and well considered21. Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you’ll be relying on. When building a web app, look into the PRPL pattern22 and application shell architecture23.
    PRPL Pattern in the application shell architecture24
    PRPL stands for Push critical resources, Render initial route, Pre-cache remaining routes and Lazy-load remaining routes on demand.
    Application shell architecture25
    An application shell26 is the minimal HTML, CSS, and JavaScript powering a user interface.
  4. AMP or Instant Articles?
    Depending on the priorities and strategy of your organization, you might want to consider using Google’s AMP27 or Facebook’s Instant Articles28. You can achieve good performance without them, but AMP does provide a solid performance framework with a free content delivery network (CDN), while Instant Articles will boost your performance on Facebook. You could build progressive web AMPs29, too.
  5. Choose your CDN wisely.
    Depending on how much dynamic data you have, you might be able to “outsource” some part of the content to a static site generator30, pushing it to a CDN and serving a static version from it, thus avoiding database requests. You could even choose a static-hosting platform31 based on a CDN, enriching your pages with interactive components as enhancements (JAMStack32).

    Notice that CDNs can serve (and offload) dynamic content as well, so restricting your CDN to static assets is not necessary. Double-check whether your CDN performs content compression and conversion, smart HTTP/2 delivery, edge-side includes (which assemble static and dynamic parts of pages at the CDN’s edge, i.e. the server closest to the user) and other tasks.

Build Optimizations

  1. Set your priorities straight.
    It’s a good idea to know what you are dealing with first. Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts and “expensive” modules on the page, such as carousels, complex infographics and multimedia content), and break them down into groups.

    Set up a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for capable browsers (i.e. the enriched, full experience) and the extras (assets that aren’t absolutely required and can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, video players, social media buttons, large images). We published an article on “Improving Smashing Magazine’s Performance33,” which describes this approach in detail.

  2. Use the “cutting-the-mustard” technique.
    Use the cutting-the-mustard technique34 to send the core experience to legacy browsers and an enhanced experience to modern browsers. Be strict in loading your assets: Load the core experience immediately, the enhancements on DOMContentLoaded and the extras on the load event.

    Note that the technique deduces device capability from browser version, which is no longer a safe assumption these days. For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities. Beware: while we don’t really have an alternative, the technique’s applicability has become more limited recently.
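
The capability check itself is only a few lines of JavaScript. A sketch of the classic test (the exact feature checks vary between implementations, and the bundle path is illustrative):

```js
// "Cutting the mustard": browsers that pass the capability check get the
// enhanced bundle; everything else keeps the core experience.
if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  var enhanced = document.createElement('script');
  enhanced.src = '/js/enhanced.min.js'; // illustrative path
  enhanced.async = true;
  document.head.appendChild(enhanced);
}
```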

  3. Consider micro-optimization and progressive booting.
    In some apps, you might need some time to initialize the app before you can render the page. Display skeleton screens35 instead of loading indicators. Look for modules and techniques to speed up the initial rendering time (for example, tree-shaking36 and code-splitting37), because most performance issues come from the initial parsing time to bootstrap the app. Also, use an ahead-of-time compiler38 to offload some of the client-side rendering39 to the server40 and, hence, output usable results quickly. Finally, consider using Optimize.js41 for faster initial loading by wrapping eagerly invoked functions (it might not be necessary42 any longer, though).

    Progressive booting43
    Progressive booting44 means using server-side rendering to get a quick first meaningful paint, but also include some minimal JavaScript to keep the time-to-interactive close to the first meaningful paint.

    Client-side rendering or server-side rendering? In both scenarios, our goal should be to set up progressive booting45: Use server-side rendering to get a quick first meaningful paint, but also include some minimal JavaScript to keep the time-to-interactive close to the first meaningful paint. We can then, either on demand or as time allows, boot non-essential parts of the app. Unfortunately, as Paul Lewis noticed46, frameworks typically have no concept of priority that can be surfaced to developers, and hence progressive booting is difficult to implement with most libraries and frameworks. If you have the time and resources, use this strategy to ultimately boost performance.

  4. Are HTTP cache headers set properly?
    Double-check that expires, cache-control, max-age and other HTTP cache headers have been set properly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static) — you can just change their version in the URL when needed.

    If possible, use Cache-control: immutable, designed for fingerprinted static resources, to avoid revalidation (as of December 2016, supported only in Firefox47 on https:// transactions). You can use Heroku’s primer on HTTP caching headers48, Jake Archibald’s “Caching Best Practices49” and Ilya Grigorik’s HTTP caching primer50 as guides.
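
As a sketch, the two caching policies might look like this in an nginx configuration (paths and durations are illustrative):

```nginx
# Long-lived, immutable caching for fingerprinted static assets.
location ~* \.(?:js|css|woff2|png|jpg|svg)$ {
  add_header Cache-Control "public, max-age=31536000, immutable";
}

# HTML is likely to change: force revalidation on every request.
location ~* \.html$ {
  add_header Cache-Control "no-cache";
}
```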

  5. Limit third-party libraries, and load JavaScript asynchronously.
    When the user requests a page, the browser fetches the HTML and constructs the DOM, then fetches the CSS and constructs the CSSOM, and then generates a rendering tree by matching the DOM and CSSOM. If any JavaScript needs to be resolved, the browser won’t start rendering the page until it’s resolved, thus delaying rendering. As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The way to do this for scripts is with the defer and async attributes in HTML.

    In practice, it turns out we should prefer defer to async51 (at a cost to users of Internet Explorer52 up to and including version 9, because you’re likely to break scripts for them). Also, limit the impact of third-party libraries and scripts, especially with social sharing buttons and <iframe> embeds (such as maps). You can use static social sharing buttons53 (such as by SSBG54) and static links to interactive maps55 instead.
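
In markup, the difference is a single attribute. A sketch (URLs are illustrative):

```html
<!-- defer: download in parallel, execute in order after parsing finishes.
     Prefer it for your own scripts. -->
<script src="/js/app.js" defer></script>

<!-- async: download in parallel, execute as soon as available.
     Suitable for independent third-party scripts. -->
<script src="https://stats.example.com/analytics.js" async></script>
```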

  6. Are images properly optimized?
    As far as possible, use responsive images56 with srcset, sizes and the <picture> element. While you’re at it, you could also make use of the WebP format57 by serving WebP images with the <picture> element and a JPEG fallback (see Andreas Bovens’ code snippet58) or by using content negotiation (using Accept headers). Sketch natively supports WebP, and WebP images can be exported from Photoshop using a WebP plugin for Photoshop59. Other options are available60, too.

    Responsive Image Breakpoints Generator61
    Responsive Image Breakpoints Generator automates image and markup generation.

    You can also use client hints63, which are now gaining browser support64. Not enough resources to bake in sophisticated markup for responsive images? Use the Responsive Image Breakpoints Generator or a service such as Cloudinary66 to automate image optimization. Also, in many cases, using srcset and sizes alone will reap significant benefits. On Smashing Magazine, we use the postfix -opt for image names — for example, brotli-compression-opt.png; whenever an image contains that postfix, everybody on the team knows that it’s been optimized.
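
A `<picture>`-based WebP fallback along the lines of the approach mentioned above — file names and widths are illustrative:

```html
<!-- Browsers that understand WebP pick the <source>; the rest fall back
     to the JPEG in <img>. srcset/sizes let the browser choose a width. -->
<picture>
  <source type="image/webp"
          srcset="hero-400.webp 400w, hero-800.webp 800w"
          sizes="(max-width: 600px) 100vw, 50vw">
  <img src="hero-800.jpg"
       srcset="hero-400.jpg 400w, hero-800.jpg 800w"
       sizes="(max-width: 600px) 100vw, 50vw"
       alt="Hero illustration">
</picture>
```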

  7. Take image optimization to the next level.
    When you’re working on a landing page on which it’s critical that a particular image loads blazingly fast, make sure that JPEGs are progressive and compressed with mozJPEG67 (which improves the start rendering time by manipulating scan levels), Pingo68 for PNG, Lossy GIF69 for GIF and SVGOMG70 for SVG. Blur out unnecessary parts of the image (by applying a Gaussian blur filter to them) to reduce the file size, and eventually you might even start to remove colors or make a picture black and white to reduce the size further. For background images, exporting photos from Photoshop with 0 to 10% quality can be absolutely acceptable as well.

    Not good enough? Well, you can also improve perceived performance for images with the multiple71 background72 images73 technique74.

  8. Are web fonts optimized?
    Chances are high that the web fonts you are serving include glyphs and extra features that aren’t being used. You can ask your type foundry to subset web fonts, or subset them yourself75 if you are using open-source fonts (for example, by including only Latin with some special accent glyphs), to minimize their file sizes. WOFF2 support76 is great, and you can use WOFF and OTF as fallbacks for browsers that don’t support it. Also, choose one of the strategies from Zach Leatherman’s “Comprehensive Guide to Font-Loading Strategies,” and use a service worker cache to cache fonts persistently. Need a quick win? Pixel Ambacht has a quick tutorial and case study78 to get your fonts in order.

    Zach Leatherman's Comprehensive Guide to Font-Loading Strategies79
    Zach Leatherman’s Comprehensive Guide to Font-Loading Strategies provides a dozen options for better web font delivery.

    If you can’t serve fonts from your server and are relying on third-party hosts, make sure to use Web Font Loader81. FOUT is better than FOIT82; start rendering text in the fallback right away, and load fonts asynchronously — you could also use loadCSS83 for that. You might be able to get away with locally installed OS fonts84 as well.
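
A subsetted `@font-face` rule with WOFF2 first and WOFF as a fallback might look like this — the family name, file names and unicode-range are illustrative:

```css
/* Serve the smallest format the browser supports; restrict the
   subset to the Latin range to keep the file small. */
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans-subset.woff2") format("woff2"),
       url("/fonts/example-sans-subset.woff") format("woff");
  unicode-range: U+0000-00FF; /* basic Latin subset only */
}
```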

  9. Push critical CSS quickly.
    To ensure that browsers start rendering your page as quickly as possible, it’s become a common practice85 to collect all of the CSS required to start rendering the first visible portion of the page (known as “critical CSS” or “above-the-fold CSS”) and add it inline in the <head> of the page, thus reducing roundtrips. Due to the limited size of packages exchanged during the slow start phase, your budget for critical CSS is around 14 KB. If you go beyond that, the browser will need additional roundtrips to fetch more styles. CriticalCSS86 and Critical87 enable you to do just that. You might need to do it for every template you’re using. If possible, consider using the conditional inlining approach88 used by the Filament Group.

    With HTTP/2, critical CSS could be stored in a separate CSS file and delivered via a server push without bloating the HTML. The catch is that server pushing isn’t supported consistently and has some caching issues (see slide 114 onwards of Hooman Beheshti’s presentation89). The effect could, in fact, be negative90 and bloat the network buffers, preventing genuine frames in the document from being delivered. Server pushing is much more effective on warm connections91 due to the TCP slow start. So, you might need to create a cache-aware HTTP/2 server push mechanism92. Keep in mind, though, that the new cache-digest specification93 will negate the need to manually build these “cache-aware” servers.
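
A sketch of the inline-then-async pattern — paths are illustrative, and the `rel="preload"` trick needs a polyfill such as loadCSS’s cssrelpreload in older browsers:

```html
<head>
  <!-- Up to ~14 KB of critical, above-the-fold CSS inlined here. -->
  <style>/* …critical CSS… */</style>

  <!-- Load the full style sheet without blocking rendering. -->
  <link rel="preload" href="/css/full.css" as="style"
        onload="this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/full.css"></noscript>
</head>
```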

  10. Use tree-shaking and code-splitting to reduce payloads.
    Tree-shaking94 is a way to clean up your build process by only including code that is actually used in production. You can use Webpack 2 to eliminate unused exports95, and UnCSS96 or Helium97 to remove unused styles from CSS. Also, you might want to consider learning how to write efficient CSS selectors98 as well as how to avoid bloat and expensive styles99.

    Code-splitting100 is another Webpack feature that splits your code base into “chunks” that are loaded on demand. Once you define split points in your code, Webpack takes care of the dependencies and output files. It basically enables you to keep the initial download small and to request additional code on demand, when the application needs it.

    Note that Rollup101 shows significantly better results than Browserify exports. While we’re at it, you might want to check out Rollupify102, which converts ECMAScript 2015 modules into one big CommonJS module — because small modules can have a surprisingly high performance cost103 depending on your choice of bundler and module system.
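
A split point in Webpack can be as small as a dynamic import; a sketch (the module path and export name are illustrative):

```js
// The carousel ships in its own chunk and is fetched only on first use;
// Webpack turns the dynamic import() into a lazily loaded chunk.
document.querySelector('.carousel').addEventListener('click', () => {
  import('./carousel')
    .then(({ init }) => init())
    .catch(err => console.error('Failed to load carousel chunk', err));
});
```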

  11. Improve rendering performance.
    Isolate expensive components with CSS containment104 — for example, to limit the scope of the browser’s styles, of layout and paint work for off-canvas navigation, or of third-party widgets. Make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then at least making the frames per second consistent is preferable to a mixed range of 60 to 15. Use CSS’ will-change105 to inform the browser of which elements and properties will change.

    Also, measure runtime rendering performance106 (for example, in DevTools107). To get started, check Paul Lewis’ free Udacity course on browser-rendering optimization108. We also have a lil’ article by Sergey Chikuyonok on how to get GPU animation right109.
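
In CSS, the two hints mentioned above might look like this — the selector is illustrative:

```css
/* Isolate the drawer's layout and paint work from the rest of the page,
   and hint the browser about the property that is about to animate. */
.drawer {
  contain: layout paint;   /* CSS containment */
  will-change: transform;  /* set shortly before animating; remove afterwards */
}
```

Note that `will-change` should be applied sparingly — leaving it on many elements permanently can consume memory rather than save work.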

  12. Warm up the connection to speed up delivery.
    Use skeleton screens, and lazy-load all expensive components, such as fonts, JavaScript, carousels, videos and iframes. Use resource hints110 to save time on dns-prefetch111 (which performs a DNS lookup in the background), preconnect112 (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), prefetch113 (which asks the browser to request a resource), prerender114 (which asks the browser to render the specified page in the background) and preload115 (which prefetches resources without executing them, among other things). Note that in practice, depending on browser support, you’ll prefer preconnect to dns-prefetch, and you’ll be cautious with using prefetch and prerender — the latter should only be used if you are very confident about where the user will go next (for example, in a purchasing funnel).
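
The resource hints above are plain `<link>` elements in the `<head>`; a sketch (hosts and paths are illustrative):

```html
<!-- Resolve DNS for a third-party host in the background. -->
<link rel="dns-prefetch" href="//cdn.example.com">
<!-- Go further: DNS + TCP + TLS handshake in the background. -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>
<!-- Fetch a critical resource for the current page, without executing it. -->
<link rel="preload" href="/fonts/example-sans-subset.woff2"
      as="font" type="font/woff2" crossorigin>
<!-- Fetch a likely-needed resource for a future navigation. -->
<link rel="prefetch" href="/js/next-page.js">
```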

HTTP/2

  1. Get ready for HTTP/2.
    With Google moving towards a more secure web116 and eventual treatment of all HTTP pages in Chrome as being “not secure,” you’ll need to decide on whether to keep betting on HTTP/1.1 or set up an HTTP/2 environment117. HTTP/2 is supported very well118; it isn’t going anywhere; and, in most cases, you’re better off with it. The investment will be quite significant, but you’ll need to move to HTTP/2 sooner or later. On top of that, you can get a major performance boost119 with service workers and server push (at least long term).

    Eventually, Google plans to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that Chrome uses for broken HTTPS. (Image source121)

    The downsides are that you’ll have to migrate to HTTPS, and depending on how large your HTTP/1.1 user base is (that is, users on legacy operating systems or with legacy browsers), you’ll have to send different builds, which would require you to adopt a different build process122. Beware: Setting up both migration and a new build process might be tricky and time-consuming. For the rest of this article, I’ll assume that you’re either switching to or have already switched to HTTP/2.

  2. Properly deploy HTTP/2.
    Again, serving assets over HTTP/2123 requires a major overhaul of how you’ve been serving assets so far. You’ll need to find a fine balance between packaging modules and loading many small modules in parallel.

    On the one hand, you might want to avoid concatenating assets altogether, instead breaking down your entire interface into many small modules, compressing them as a part of the build process, referencing them via the “scout” approach124 and loading them in parallel. A change in one file won’t require the entire style sheet or JavaScript to be redownloaded.

    On the other hand, packaging still matters125 because there are issues with sending many small JavaScript files to the browser. First, compression will suffer: the compression of a large package will benefit from dictionary reuse, whereas small separate packages will not. There’s standards work to address that, but it’s far out for now. Second, browsers have not yet been optimized for such workflows. For example, Chrome will trigger inter-process communications126 (IPCs) linear to the number of resources, so including hundreds of resources will have browser runtime costs.

    Progressive CSS loading127
    To achieve the best results with HTTP/2, consider loading CSS progressively, as suggested by Chrome’s Jake Archibald.

    Still, you can try to load CSS progressively. Obviously, by doing so, you are actively penalizing HTTP/1.1 users, so you might need to generate and serve different builds to different browsers as part of your deployment process, which is where things get slightly more complicated. You could get away with HTTP/2 connection coalescing130, which allows you to use domain sharding while benefiting from HTTP/2, but achieving this in practice is difficult.

    What to do? If you’re running over HTTP/2, sending around 10 packages seems like a decent compromise (and isn’t too bad for legacy browsers). Experiment and measure to find the right balance for your website.

  3. Make sure the security on your server is bulletproof.
    All browser implementations of HTTP/2 run over TLS, so you will probably want to avoid security warnings or some elements on your page not working. Double-check that your security headers are set properly131, eliminate known vulnerabilities132, and check your certificate133.

    Haven’t migrated to HTTPS yet? Check The HTTPS-Only Standard134 for a thorough guide. Also, make sure that all external plugins and tracking scripts are loaded via HTTPS, that cross-site scripting isn’t possible and that both HTTP Strict Transport Security headers135 and Content Security Policy headers136 are properly set.

  4. Do your servers and CDNs support HTTP/2?
    Different servers and CDNs are probably going to support HTTP/2 differently. Use Is TLS Fast Yet?137 to check your options, or quickly look up how your servers are performing and which features you can expect to be supported.
    Is TLS Fast Yet?138
    Is TLS Fast Yet? allows you to check your options for servers and CDNs when switching to HTTP/2.
  5. Is Brotli or Zopfli compression in use?
    Last year, Google introduced140 Brotli141, a new open-source lossless data format, which is now widely supported142 in Chrome, Firefox and Opera. In practice, Brotli appears to be more effective143 than Gzip and Deflate. It might be slow to compress, depending on the settings, but slower compression will ultimately lead to higher compression rates. Still, it decompresses fast. Because the algorithm comes from Google, it’s not surprising that browsers will accept it only if the user is visiting a website over HTTPS — and yes, there are technical reasons for that as well. The catch is that Brotli doesn’t come preinstalled on most servers today, and it’s not easy to set up without self-compiling NGINX or Ubuntu. However, you can enable Brotli even on CDNs that don’t support it144 yet (with a service worker).

    Alternatively, you could look into using Zopfli’s compression algorithm145, which encodes data to Deflate, Gzip and Zlib formats. Any regular Gzip-compressed resource would benefit from Zopfli’s improved Deflate encoding, because the files will be 3 to 8% smaller than Zlib’s maximum compression. The catch is that files will take around 80 times longer to compress. That’s why it’s a good idea to use Zopfli on resources that don’t change much, files that are designed to be compressed once and downloaded many times.
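
If your nginx build includes the ngx_brotli module, enabling Brotli is a matter of a few directives; a sketch (values are illustrative):

```nginx
# Requires nginx compiled with the ngx_brotli module.
brotli on;
brotli_comp_level 6;   # higher levels compress better but more slowly
brotli_types text/css application/javascript application/json image/svg+xml;

# For rarely changing assets, pre-compress once at build time (e.g. with
# Zopfli for gzip-only clients) and serve the .gz files directly:
gzip_static on;
```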

  6. Is OCSP stapling enabled?
    By enabling OCSP stapling on your server146, you can speed up your TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL certificate has been revoked. However, the OCSP protocol does not require the browser to spend time downloading and then searching a list for certificate information, hence reducing the time required for a handshake.
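
In nginx, OCSP stapling takes a handful of directives; a sketch (the certificate path and resolver addresses are illustrative):

```nginx
# Staple the OCSP response to the TLS handshake so the browser
# doesn't have to query the CA itself.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/full-chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
```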
  7. Have you adopted IPv6 yet?
    Because we’re running out of IPv4 addresses147 and major mobile networks are adopting IPv6 rapidly (the US has reached148 a 50% IPv6 adoption threshold), it’s a good idea to update your DNS to IPv6149 to stay bulletproof for the future. Just make sure that dual-stack support is provided across the network — it allows IPv6 and IPv4 to run simultaneously alongside each other. After all, IPv6 is not backwards-compatible. Also, studies show150 that IPv6 has made websites 10 to 15% faster, thanks to neighbor discovery (NDP) and route optimization.

  8. Is HPACK compression in use?
    If you’re using HTTP/2, double-check that your servers implement HPACK compression151 for HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, they may not fully support the specification, with HPACK being an example. H2spec152 is a great (if very technically detailed) tool to check that. HPACK works153.
    h2spec154
    H2spec (View large version155) (Image source156)
  9. Are service workers used for caching and network fallbacks?
    No performance optimization over a network can be faster than a locally stored cache on a user’s machine. If your website is running over HTTPS, use the “Pragmatist’s Guide to Service Workers157” to cache static assets in a service worker cache and store offline fallbacks (or even offline pages) and retrieve them from the user’s machine, rather than going to the network. Also, check Jake’s Offline Cookbook158 and the free Udacity course “Offline Web Applications159.” Browser support? It’s getting there160, and the fallback is the network anyway.
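
A minimal service worker along the lines of the offline-cookbook patterns might look like this — cache name, precached URLs and the offline page are illustrative:

```js
// sw.js sketch: cache-first for known static assets, with the network as
// fallback and an offline page as the last resort.
const CACHE = 'static-v1';

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE).then(cache =>
      cache.addAll(['/', '/css/full.css', '/js/app.js', '/offline.html'])
    )
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached =>
      cached || fetch(event.request).catch(() => caches.match('/offline.html'))
    )
  );
});
```

Remember to bump the cache name (e.g. `static-v2`) and clean up old caches in an `activate` handler when assets change.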

Testing and Monitoring

  1. Monitor mixed-content warnings.
    If you’ve recently migrated from HTTP to HTTPS, make sure to monitor both active and passive mixed-content warnings, with a tool such as Report-URI.io161. You can also use Mixed Content Scan162 to scan your HTTPS-enabled website for mixed content.
  2. Is your development workflow in DevTools optimized?
    Pick a debugging tool and click on every single button. Make sure you understand how to analyze rendering performance and console output, and how to debug JavaScript and edit CSS styles. Umar Hansa recently prepared a (huge) slidedeck163 and talk164 covering dozens of obscure tips and techniques to be aware of when debugging and testing in DevTools.
  3. Have you tested in proxy browsers and legacy browsers? Testing in Chrome and Firefox is not enough. Look into how your website works in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia165 (up to 35%). Measure the average Internet speed166 in your countries of interest to avoid big surprises down the road. Test with network throttling, and emulate a high-DPI device. BrowserStack167 is fantastic, but test on real devices as well.
  4. Is continuous monitoring set up?
    Having a private instance of WebPagetest is always beneficial for quick and unlimited tests. Set up continuous monitoring of performance budgets with automatic alerts. Set your own user-timing marks to measure and monitor business-specific metrics. Look into using SpeedCurve169 to monitor changes in performance over time, and/or New Relic170 to get insights that WebPagetest cannot provide. Also, look into SpeedTracker171, Lighthouse and Calibre173.
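
Custom user-timing marks are a few lines with the User Timing API; a sketch (`performance` is a global in browsers and in recent Node versions — the mark names and the work being timed are illustrative):

```javascript
// Measure a business-specific metric, e.g. how long the hero component
// takes to render, and expose it to your monitoring tooling.
performance.mark('hero-start');
for (let i = 0; i < 1e5; i++) {} // stand-in for real rendering work
performance.mark('hero-end');
performance.measure('hero-render', 'hero-start', 'hero-end');

const [heroEntry] = performance.getEntriesByName('hero-render');
console.log(`hero-render took ${heroEntry.duration.toFixed(2)} ms`);
```

Tools like WebPagetest and SpeedCurve pick up such user-timing marks automatically, so they can be budgeted and alerted on like any built-in metric.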

Quick Wins

This list is quite comprehensive, and completing all of the optimizations might take quite a while. So, if you had just 1 hour to get significant improvements, what would you do? Let’s boil it all down to 10 low-hanging fruits. Obviously, before you start and once you finish, measure your results, including start rendering time and SpeedIndex, on both 3G and cable connections.

  1. Your goal is a start rendering time under 1 second on cable and under 3 seconds on 3G, and a SpeedIndex value under 1000. Optimize for start rendering time and time-to-interactive.
  2. Prepare critical CSS for your main templates, and include it in the <head> of the page. (Your budget is 14 KB).
  3. Defer and lazy-load as many scripts as possible, both your own and third-party scripts — especially social media buttons, video players and expensive JavaScript.
  4. Add resource hints to speed up delivery with dns-prefetch, preconnect, prefetch, preload and prerender.
  5. Subset web fonts and load them asynchronously (or just switch to system fonts instead).
  6. Optimize images, and consider using WebP for critical pages (such as landing pages).
  7. Check that HTTP cache headers and security headers are set properly.
  8. Enable Brotli or Zopfli compression on the server. (If that’s not possible, don’t forget to enable Gzip compression.)
  9. If HTTP/2 is available, enable HPACK compression and start monitoring mixed-content warnings. If you’re running over TLS, also enable OCSP stapling.
  10. If possible, cache assets such as fonts, styles, JavaScript and images — actually, as much as possible! — in a service worker cache.
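Several of these quick wins come together in the document’s `<head>`. A condensed, hypothetical sketch (all host names and file paths are placeholders) combining critical CSS (2), deferred scripts (3) and resource hints (4); the non-blocking stylesheet load uses the well-known loadCSS preload pattern:

```html
<head>
  <!-- 4. Resource hints: warm up connections and fetch critical assets early. -->
  <link rel="dns-prefetch" href="//cdn.example.com">
  <link rel="preconnect" href="https://cdn.example.com" crossorigin>
  <link rel="preload" href="/fonts/subset.woff2" as="font" type="font/woff2" crossorigin>

  <!-- 2. Critical CSS for the main template, inlined; keep it under the 14 KB budget. -->
  <style>
    /* above-the-fold rules only */
  </style>

  <!-- Load the full stylesheet without blocking rendering (loadCSS preload pattern). -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>

  <!-- 3. Defer non-critical scripts. -->
  <script src="/js/app.js" defer></script>
</head>
```

The inlined block covers the first paint; everything else arrives asynchronously and gets picked up from the HTTP cache (or a service worker cache) on repeat visits.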

Download The Checklist (PDF, Apple Pages)

With this checklist in mind, you should be prepared for any kind of front-end performance project. Feel free to download the print-ready PDF of the checklist as well as an editable Apple Pages document to customize the checklist for your needs:

If you need alternatives, you can also check the front-end checklist by Dan Rublic and the “Designer’s Web Performance Checklist” by Jon Yablonski.

Off We Go!

Some of the optimizations might be beyond the scope of your work or budget or might just be overkill given the legacy code you have to deal with. That’s fine! Use this checklist as a general (and hopefully comprehensive) guide, and create your own list of issues that apply to your context. But most importantly, test and measure your own projects to identify issues before optimizing. Happy performance results in 2017, everyone!

Huge thanks to Anselm Hannemann, Patrick Hamann, Addy Osmani, Andy Davies, Tim Kadlec, Yoav Weiss, Rey Bango, Matthias Ott, Mariana Peralta, Jacob Groß, Tim Swalling, Bob Visser, Kev Adamson and Rodney Rehm for reviewing this article, as well as our fantastic community, which has shared techniques and lessons learned from its work in performance optimization for everybody to use. You are truly smashing! (al)


Vitaly Friedman loves beautiful content and doesn’t like to give in easily. Vitaly is a writer, speaker, author and editor-in-chief of Smashing Magazine. He runs responsive Web design workshops and webinars, and loves solving complex UX, front-end and performance problems in large companies. Get in touch.

  1.

    Great article, Vitaly. It was a pleasure to pre-read and give feedback.

  2.

    Call me crazy, but I have a distrust for AMP / Instant Articles.

    In the words of Maciej Cegłowski:
    > The AMP project is ostentatiously open source, and all kinds of publishers have signed on. Out of an abundance of love for the mobile web, Google has volunteered to run the infrastructure, especially the user tracking parts of it.

  3.

    Really nice checklist! I’ll start the next year by going through it with my colleagues. Can you provide the editable version in a different format? .docx, .odt, something I can use on Linux. Thanks

  4.

    I must be Rumpelstiltskin. I only got 1/3 of the jargon. I just want my sites to load faster. I say “performance budget” to a client and their eyes glaze over. I need something simple and quick to implement AND easy to explain.

  •

      Ain’t no such thing. The fact that this list exists and is welcomed by readers indicates that performance is a difficult topic with lots of subtleties, i.e. life’s hard, there are no shortcuts.

  5.

    Gandalf Grayhat

    December 24, 2016 1:05 am

    Note that compression should be turned off for TLS sites. Not just TLS compression, but all compression.

    Also with respect to service workers, they are useful and I do use them but I use them sparingly. There are two issues with them.

    Basically, they act as a proxy between the window and the server, and that presents some security issues. They try to update themselves every 24 hours; if an attacker is able to poison the DNS cache and obtain a fraudulently signed TLS certificate, they can install a malicious service worker that the browser then trusts.

    Fraudulently signed X.509 certs are not difficult to obtain. Hopefully browsers enforce HPKP when fetching a new service worker, but HPKP has some issues itself; Chrome, for example, does not enforce it under certain conditions and says it won’t fix that.

    When browsers start enforcing DNSSEC then it will be safer to use service workers but for now they should be used sparingly.

    The other issue with service workers has to do with data consumption and mobile devices. Many users have to pay for data, and service workers are often used to pre-fetch data that only benefits the user if they visit a page on your site that actually uses that resource before it expires. If they only did this when a mobile device was on Wi-Fi, I wouldn’t care. Unfortunately, they also do it when the user is connected via 3G/4G LTE, and many users have to pay for the data they consume.

    If using service workers to pre-fetch resources, make sure they are small and cached for a long time, increasing the chances they are actually used before they expire.

  •

      “Note that compression should be turned off for TLS sites. Not just TLS compression, but all compression.”
      Why ???

  6.

    Holy shit, this was the most comprehensive article for performance improvement I’ve ever read!

  7.

    Great article as always. I’m surprised by Pingo and how the “Lossy Mode” works: really useful, and the quality is well preserved!

  8.

    “Prepare critical CSS for your main templates, and include it in the <head> of the page. (Your budget is 14 KB).”

    – If my site’s CSS is below 14 KB, should I keep it in the HTML file?
    – If it’s, let’s say, 200 KB and I want to put the critical CSS in the HTML, is it OK as long as the HTML file doesn’t go past 14 KB, or as long as the CSS in the file doesn’t go past 14 KB?

  9.

    I wonder what would be the cost of implementing all of this, at what scale and target audience do each of these optimizations become cost effective?

  10.

    Ashwin Preetham Lobo

    December 31, 2016 7:25 pm

    Nice Article!

