
Improving Smashing Magazine’s Performance: A Case Study

Today Smashing Magazine turns eight years old. Eight years is a long time on the web, yet for us it really doesn’t feel like a long journey at all. Things have changed, evolved and moved on, and we gratefully take on new challenges one at a time. To mark this special little day, we’d love to share a few things that we’ve learned over the last year about the performance challenges of this very website and about the work we’ve done recently. If you want to craft a fast responsive website, you might find a few interesting nuggets worth considering. – Ed.


Improvement is a matter of steady, ongoing iteration. When we redesigned Smashing Magazine back in 2012, our main goal was to establish trustworthy branding that would reflect the ambitious editorial direction of the magazine. We did that primarily by focusing on crafting a delightful reading experience. Over the years, our focus hasn’t changed a bit; however, that very asset that helped to establish our branding turned into a major performance bottleneck.

Good Old-Fashioned Website Decay

Looking back at the early days of our redesign, some of our decisions seem to be quick’n’dirty fixes rather than sound long-term solutions. Our advertising constraints pushed us to compromises. Legacy browsers drove us to dependencies on (relatively) heavy JavaScript libraries. Our technical infrastructure led us to heavily customized WordPress plugins and complex PHP logic. With every new feature added, our technical debt grew, and our style sheets, markup and JavaScript weren’t getting any leaner.

Sound familiar? Admittedly, responsive web design as a technique often gets a pretty bad rap for bloating websites and making them difficult to maintain. (Not that non-responsive websites are any different, but that’s another story.) In practice, all assets on a responsive website will show up pretty much everywhere: be it a slow smartphone, a quirky tablet or a fancy laptop with a Retina screen. And because media queries merely provide the ability to respond to screen dimensions, rather than having a more local, self-contained scope, adding a new feature and adjusting the reading experience potentially means going through each and every media query to prevent inconsistencies and fix layout issues.

“Mobile First” Means “Always Mobile First”

When it comes to setting priorities for the content and functionality on a website, “mobile first” is one of those difficult yet incredibly powerful constraints that help you focus on what really matters, and identify critical components of your website. We discovered that designing mobile first is one thing; building mobile first is an entirely different story. In our case, both the design and development phases were heavily mobile first, which helped us to focus tightly on the content and its presentation. But while the design process was quite straightforward, implementation proved to be quite difficult.

Because the entire website was built mobile first, we quickly realized that adding or changing components on the page would entail going through the mobile-first approach for every single (minor and major) design decision. We’d design a new component in a mobile view first, and then design an “extended” view for the situations when more space is available. Often that meant adjusting media queries with every single change, and more often it meant adding new stuff to style sheets and to the markup to address new issues that came up.

Tim Kadlec's article about SmashingMag's performance [6]
Shortly after the new SmashingMag redesign went live, we ran into performance issues. An article by Tim Kadlec from 2012 [7] shows just that.

We found ourselves trapped: development and maintenance were taking a lot of time, the code base was full of minor and major fixes, and the infrastructure was becoming too slow. We ended up with a code base that had become bloated before the redesign was even released — very bloated [8], in fact.

Performance Issues

In mid-2013, our home page weighed 1.4 MB and produced 90 HTTP requests. It just wasn’t performing well. We wanted to create a remarkable reading experience on the website while avoiding the flash of unstyled text (FOUT), so web fonts were loaded in the header and, hence, were blocking the rendering of content (which is actually correct behaviour according to the spec [9], designed to avoid multiple repaints and reflows). jQuery was required for ads to be displayed, and a few JavaScripts depended on jQuery, so they were all blocking rendering as well. Ads were loaded and rendered before the content to ensure that they appeared as quickly as possible.

Images delivered by our ad partners were usually heavy and unoptimized, slowing down the page further. We also loaded Respond.js and Modernizr to deal with legacy browsers and to enhance the experience for smart browsers. As a result, articles were almost inaccessible on slow and unstable networks, and the start rendering time on mobile was disappointing at best.

It wasn’t just the front-end that was showing its age though. The back-end wasn’t getting any better either. In 2012 we were playing with the idea of having fully independent sections of the magazine — sections that would live their own lives, evolving and growing over time as independent WordPress installations, with custom features and content types that wouldn’t necessarily be shared across all sections.

Browser stats [10]
Yes, we do enjoy a quite savvy user base, so optimization for IE8 is really not an issue. (Large view [11])

Because WordPress multi-install wasn’t available at the time, we ended up with six independent, autonomous WordPress installs with six independent, autonomous style sheets. Those installs were connected to 6 × 2 databases (a media server and a static content server). We ran into dilemmas. For example, what if an author wrote for two sections and we’d love to show their articles from both sections on one single author’s bio page? Well, we’d need to somehow pull articles from both installs and add redirects from each author’s page to that one unified page, or should we just use one of those pages as a “host”? Well, you know where this is going: increasing complexity and increasing maintenance costs. In the end, the sections didn’t manage to evolve significantly — at least not in terms of content — yet we had already customized the technical foundation of each section, adding to the CSS dust and PHP complexity.

(Because we had outsourced WordPress tasks, some plugins depended on each other. So, if we were to deactivate one, we might have unwittingly disabled two or three others in the process, and they would have to be turned back on in a particular order to work properly. There were even differences in the HTML outputted by the PHP templates behind the curtains, such as classes and IDs that differed from one installation to the next. It’s no surprise that this setup made development a bit frustrating.)

Traffic was stagnant, readers kept complaining about the performance of the site, and only a very small portion of users visited more than two pages per visit. The delay when browsing the site was clearly perceptible and surely wasn’t instant, and that lag was driving readers away from the site to Instapaper and Pocket — both on mobile and desktop. We knew that because we asked our readers, and the feedback was quite clear (and a bit frustrating).

It was time to push back — heavily, with a major refactoring of the code base. We looked closely under the hood, discovering a few pretty scary (and nasty) things, and started fixing issues, one by one. It took us quite a bit of time to make things right, and we learned quite a few things along the way.

Switching Gears

Up until mid-2013, we weren’t using a CSS preprocessor, nor any build tools. Good long-term solutions require a good long-term foundation, so the first issues we tackled were tooling and the way the code base was organized. Because a number of people had been working on the code base over the years, some things proved to be rather mysterious… or challenging, to say the least.

We started with a code inventory, and we looked thoroughly at every single class, ID and CSS selector. Of course, we wanted to build a system of modular components, so the first task was to turn our seven large CSS files into maintainable, well-documented and easy-to-read modules. At the time, we’d chosen LESS, for no particular reason, and so our front-end engineer Marco [12] started to rewrite CSS and build a modular, scalable architecture. Of course, we could very well have used Sass instead, but Marco felt quite comfortable with LESS at the time.

With a new CSS architecture, Grunt [13] as a build tool and a few time-saving Grunt tasks [14, 15, 16, 17], the task of maintaining the entire code base became much easier. We set up a brand new testing environment, synced up everything with GitHub, assigned roles and permissions, and started digging. We rewrote selectors, reauthored markup, and refactored and optimized JavaScript. And yes, it took us quite some time to get things in order, but it really wouldn’t have been so difficult if we hadn’t had a number of very different stylesheets to deal with.
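
For illustration, a Grunt setup along these lines covers the tasks linked above (compiling LESS, adding vendor prefixes, minifying the CSS and watching for changes during development); the file paths and target names here are examples rather than our actual configuration:

// Gruntfile.js — a minimal sketch; paths and targets are illustrative
module.exports = function (grunt) {
  grunt.initConfig({
    less: {
      styles: {
        files: { "css/style.css": "less/style.less" }
      }
    },
    autoprefixer: {
      styles: {
        src: "css/style.css",
        dest: "css/style.css"
      }
    },
    cssmin: {
      styles: {
        files: { "css/style.min.css": "css/style.css" }
      }
    },
    watch: {
      styles: {
        files: ["less/**/*.less"],
        tasks: ["less", "autoprefixer", "cssmin"]
      }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-less");
  grunt.loadNpmTasks("grunt-autoprefixer");
  grunt.loadNpmTasks("grunt-contrib-cssmin");
  grunt.loadNpmTasks("grunt-contrib-watch");

  grunt.registerTask("default", ["less", "autoprefixer", "cssmin"]);
};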

The Big Back-End Cleanup

With the introduction of Multisite, creating a single WordPress installation from our six separate installations became a necessary task for our friends at Inpsyde [18]. Over the course of five months, Christian Brückner and Thomas Herzog cleaned up the PHP templates, kicked unnecessary plugins into orbit, rewrote plugins we had to keep and added new ones where needed. They cleared the databases of all the clutter that the old plugins had created — one of the databases weighed in at 70 GB (no, that’s not a typo; we do mean gigabytes) — merged all of the databases into one, and then created a single fresh and, most importantly, maintainable WordPress Multisite installation.

The speed boost from those optimizations was remarkable. We are talking about 400 to 500 milliseconds of improvement by avoiding sub-domain redirects and unifying the code base and the back-end code. Those redirects [19] are indeed a major performance culprit, and just avoiding them is one of those techniques that usually boost performance significantly because you avoid full DNS lookups, improve time to first byte and reduce round trips on the network.

Thomas and Christian also refactored our entire WordPress theme according to the coding standard of their own theme architecture, which is basically a sophisticated way of writing PHP based on the WordPress standard. They wrote custom drop-ins that we use to display content at certain points in the layout. Writing the PHP strictly according to WordPress’ official API felt like getting out of a horse-drawn carriage and into a race car. All modifications were done without ever touching WordPress’ core, which is wonderful because we’ll never have to fear updating WordPress itself anymore.

Spam comments
We’ve also marked a few million spam comments across all the sections of the magazine. And before you ask: no, we did not import them into the new install.

We migrated the installations during a slow weekend in mid-April 2014. It was a huge undertaking, and our server had a few hiccups during the process. We brought together over 2500 articles, including about 15,000 images, all spread over six databases, which also had a few major inconsistencies. While it was a very rough start at first — a lot of redirects had to be set up, caching issues on our server piled up, and some articles got lost between the old and new installations — the result was well worth the effort.

Our editorial team, primarily Iris [20], Melanie [21] and Markus [22], worked very hard to bring those lost articles back to life by analyzing our 404s with Google Webmaster Tools. We spent a few weekends making sure that every single article was recovered and remains accessible. Losing articles, including their comments, was simply unacceptable.

We know well how much time it takes for a good article to get published, and we have a lot of respect for authors and their work; ensuring that the content remains online was a matter of respect for the work published. It took us a few weeks to get there, and it wasn’t the most enjoyable experience for sure, but we used the opportunity to introduce more consistency in our information architecture and to adjust tags and categories appropriately. (Ah, if you do happen to find an article that has gotten lost along the way, please do let us know [23] and we’ll fix it right away. Thanks!)

Front-End Optimization

In April 2014, once the new system was in place and had been running smoothly for a few days, we rewrote the LESS files based on what was left of all of the installs. Streamlining the classes for posts and pages, getting rid of all unneeded IDs, shortening selectors by lowering their specificity, and rooting out anything in the CSS we could live without crunched the CSS from 91 KB down to a mere 45 KB.

Once the CSS code base was in proper shape, it was time to reconsider how assets are loaded on the page and how we can improve the start rendering time beyond having a clean, well-structured code base. Given the nightmare we experienced with the back-end previously, you might assume that improving performance now would have been a complex, time-consuming task, but actually it was quite a bit easier than that. Basically, it was just a matter of getting our priorities right by optimizing the critical rendering path.

The key to improving performance was to focus on what matters most: the content, and the fastest way for readers to actually start reading our articles on their devices. So over the course of a few months we kept reprioritizing. With every update, we introduced mini-optimizations based on a very simple, almost obvious principle: optimize the delivery of content, and defer the rest — without any compromises, anywhere.

Our optimizations were heavily influenced by the work done by Scott Jehl [24], as well as The Guardian [25] and the BBC [26] teams (both of which open-sourced their work). While Scott has been sharing valuable insight [27] into the front-end techniques that Filament Group was using, the BBC and The Guardian helped us to define and refine the concept of the core experience on the website and use it as a baseline. A shared main goal was to deliver the content as fast as possible to as many people as possible regardless of their device or network capabilities, and enhance the experience with progressive enhancement for capable browsers.

Historically, we haven’t had a lot of JavaScript or complex interactions on Smashing Magazine, so we didn’t feel it was necessary to introduce complex loading logic with JavaScript preloaders. However, being a content-focused website, we did want to reduce the time necessary for articles to start displaying as far as humanly possible.

Performance Budget: Speed Index <= 1000

How fast is fast enough? [28] Well, that’s a tough question to answer. In general, it’s quite difficult to visualize performance and explain why every millisecond counts — unless you have hard data. At the same time, falling into the trap of absolutes and relying on not truly useful performance metrics is easy. In the past, the most commonly cited performance metric was average loading time. However, on its own, average loading time isn’t that helpful because it doesn’t tell you much about when a user can actually start using the website. This is why talking about “fast enough” is often so tricky.

Comparing progress with WebPageTest [29]
A nice way of visualizing performance is to use WebPageTest to generate an actual video of the page loading and run a test between two competing websites. Besides, the Speed Index metric [30] often proves to be very useful.

Different components require different amounts of time to load, yet some components of the page are more important than others. For example, you don’t need to load the footer content fast, but it’s a good idea to render the visible portion of the page fast. You know where this is heading: of course, we are talking about the “above the fold” view here. As Ilya Grigorik once said [31], “We don’t need to render the entire page in one second, [just] the above the fold content.” To achieve that, according to Scott’s research and Google’s test results, it’s helpful to set two ambitious performance goals:

  • a Speed Index [33] of under 1000, i.e. the visible portion of the page should start rendering within the first second of loading;
  • the above-the-fold content (the HTML, CSS and any JavaScript required to render it) delivered within the first 14 KB of the response.

What do these goals mean, and why are they important? According to HCI research, “for an application to feel instant, a perceptible response to user input must be provided within hundreds of milliseconds [34]. After a second or more, the user’s flow and engagement with the initiated task feels broken.” With the first goal, we are trying to ensure an instant response on our website. It refers to the so-called Speed Index metric for the start rendering time — the average time (in ms) at which visible parts of the page are displayed, or become accessible. So the first goal basically means that a page should start rendering in under 1000 ms, and yes, that’s quite a difficult challenge to take on.

High Performance Browser Networking [35]
Ilya Grigorik’s book High Performance Browser Networking [36] is a very helpful guide with useful guidelines and advice on making websites fast. And it’s available as a free HTML book, too.

The second goal can help in achieving the first one. The value of 14 KB has been measured empirically [37] by Google and is roughly the amount of data that can be exchanged in the first round trip between a server and a client on a cellular connection. You don’t need to fit images within those 14 KB, but you should aim to deliver the markup, style sheets and any JavaScript required to render the visible portion of the page within that threshold. Of course, in practice this value can only realistically be achieved with gzip compression.

By combining the two goals, we basically defined a performance budget that we set for the website — a threshold for what was acceptable. Admittedly, we didn’t concern ourselves with the start rendering time on different devices on various networks, mainly because we really wanted to push back as far as possible everything that isn’t required to start rendering the page. So, the ideal result would be a Speed Index value that is way lower than the one we had set — as low as possible, actually — in all settings and on all connections, both shaky and stable, slow and fast. This might sound naive, but we wanted to figure out how fast we could be, rather than how fast we should be. We did measure start rendering time for first and subsequent page loads, but we did that much later, after optimizations had already been done, and just to keep track of issues on the front-end.

Our next step would be to integrate Tim Kadlec’s Perf-Budget Grunt task [38] to incorporate the performance budget right into the build process and, thus, run every new commit against WebPageTest’s performance benchmark. If it fails, we know that a new feature has slowed us down, so we probably have to reconsider how it’s implemented to fit it within our budget, or at least we know where we stand and can have meaningful discussions about its impact on the overall performance.
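
A configuration for that task might look roughly like this (the URL, the API key placeholder and the budget value are illustrative; grunt-perfbudget supports a number of other metrics as well):

// a sketch of a performance budget check via WebPageTest,
// added to the Gruntfile alongside the other tasks
perfbudget: {
  all: {
    options: {
      url: "https://www.smashingmagazine.com/",
      key: "WEBPAGETEST_API_KEY", // a WebPageTest API key is required
      budget: {
        SpeedIndex: "1000" // fail the build if the Speed Index exceeds 1000
      }
    }
  }
}

// and registered so it can run against every commit, e.g. from a CI hook:
grunt.loadNpmTasks("grunt-perfbudget");
grunt.registerTask("test-perf", ["perfbudget"]);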

Prioritization And Separation Of Concerns

If you’ve been following The Guardian’s work recently, you might be familiar with the strict separation of concerns that they introduced [39] during the major 2013 redesign. The Guardian separated [40] its entire content into three main groups:

  • Core content
    Essential HTML and CSS, usable non-JavaScript-enhanced experience
  • Enhancement
    JavaScript, geolocation, touch support, enhanced CSS, web fonts, images, widgets
  • Leftovers
    Analytics, advertising, third-party content

Separation of concerns [41]
A strict separation of concerns, or loading priorities, as defined by The Guardian team. (Large view [42])

Once you have defined, confirmed and agreed upon these priorities, you can push performance optimization quite far. Just by being very specific about each type of content you have and by clearly defining what “core content” is, you are able to load Core content as quickly as possible, then load Enhancements once the page starts rendering (after the DOMContentLoaded event fires), and then load Leftovers once the page has fully rendered (after the load event fires).

The main principle here, of course, is to strictly separate the loading of assets throughout these three phases, so that the loading of the Core content is never blocked by any resources grouped under Enhancement or Leftovers (we haven’t achieved perfect separation just yet, but we are on it). In other words, you try to shorten the critical rendering path that is required for the content to start displaying by pushing the content down the line as fast as possible and deferring pretty much everything else.
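
In code, this phase-based loading can be as simple as the following sketch (the bundle names are hypothetical; the Core content itself is just the HTML and CSS already in the page):

// a tiny helper that appends a script without blocking rendering
function loadScript(src) {
  var script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.body.appendChild(script);
}

// Enhancement: load once the DOM has been constructed and rendering has started
document.addEventListener("DOMContentLoaded", function () {
  loadScript("/js/enhancements.js");
});

// Leftovers: analytics, advertising and other third-party content wait
// until the page has fully rendered
window.addEventListener("load", function () {
  loadScript("/js/leftovers.js");
});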

We followed this same separation of concerns, grouping our content types into the same categories and identifying what’s critical, what’s important and what’s secondary. In our case, we identified and separated content in this way:

  • Core content
    Only essential HTML and CSS
  • Enhancement
    JavaScript, code syntax highlighter, full CSS, web fonts, comment ratings
  • Leftovers
    Analytics, advertising, Gravatars

Once you have this simple content/functionality priority list, improving performance becomes just a matter of adding a few snippets for loading assets in a way that properly reflects those priorities. Even if your server logic forces you to load all assets on all devices, by focusing on content delivery first, you ensure that the content is accessible quickly, while everything else is deferred and loaded in the background, after the page has started rendering. From a strategic perspective, the list also reflects your technical debt, as well as critical issues that slow you down. Indeed, we already had quite a list of issues to deal with at this point, so it transformed fairly quickly into a list of content priorities. And a rather tricky issue sat right at the top of that list: good ol’ web fonts.

Deferring Web Fonts

Despite the fact that the proportion of Smashing Magazine’s readers on mobile has always been quite modest (just around 15%, mainly due to the length of articles), we never considered mobile an afterthought, but we never really pushed the mobile experience forward either. And when we talk about user experience on mobile, we mostly talk about speed, since the typography was well designed from day one.

We had conversations during the 2012 redesign about how to deal with fonts, but we couldn’t find a solution that made everybody happy. The visual appearance of content was important, and because the new Smashing Magazine was all about beautiful, rich typography, not loading web fonts at all on mobile wasn’t really an option.

With the redesign back then, we switched to Skolar for headings and Proxima Nova for body copy, delivered by Fontdeck. Overall, we had three fonts for each typeface — Regular, Italic and Bold — totalling six font files to be delivered over the network. Even after our dear friends at Fontdeck subsetted and optimized the fonts, the assets were quite heavy, weighing over 300 KB in total, and because we wanted to avoid the frequent flash of unstyled text (FOUT), we had them loaded in the header of every page. Initially we thought that the fonts would reliably be cached in the HTTP cache, so they wouldn’t be retrieved with every single page load. Yet it turned out that the HTTP cache was quite unreliable: the fonts showed up in the waterfall loading chart every now and again for no apparent reason, both on desktop and on mobile.

The biggest problem, of course, was that the fonts were blocking rendering [43]. Even if the HTML, CSS and JavaScript had already loaded completely, the content wouldn’t appear until the fonts had loaded and rendered. So you had this beautiful experience of seeing link underlines first, then a few keywords in bold here and there, then subheadings in the middle of the page and then finally the rest of the page. In some cases, when Fontdeck had server issues, the content didn’t appear at all, even though it was already sitting in the DOM, waiting to be displayed.

Web Fonts and the Critical Path, by Ian Feather [44]
In his article Web Fonts and the Critical Path [45], Ian Feather provides a very detailed overview of the FOUT issues and font loading solutions. We tested them all.

We experimented with a few solutions before settling on what turned out to be perhaps the most difficult one. At first, we looked into using Typekit and Google’s WebFontLoader [46], an asynchronous script which gives you more granular control of what appears on the page while the fonts are being loaded. Basically, the script adds a few classes to the html element, which allows you to specify the styling of content in CSS during the loading and after the fonts have loaded. So you can be very precise about how the content is displayed in fallback fonts first, before users see the switch from fallback fonts to web fonts.
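
For reference, wiring up WebFontLoader looks roughly like this; the Fontdeck project ID and the timeout value below are placeholders rather than our actual configuration. The loader toggles classes such as wf-loading, wf-active and wf-inactive, which the CSS can hook into:

WebFont.load({
  fontdeck: {
    id: "xxxxx" // hypothetical Fontdeck project ID
  },
  timeout: 3000, // give up after 3 seconds and stay with the fallback fonts
  active: function () {
    // all web fonts have loaded and rendered
  },
  inactive: function () {
    // the fonts failed to load; the fallback stacks remain in place
  }
});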

We added fallback font declarations and ended up with pretty verbose CSS font stacks, using iOS fonts, Android fonts, Windows Phone fonts and good ol’ web-safe fonts as fallbacks — we are still using these font stacks today. For example, we used this cascade for the main headings (it reflects the order of popularity of mobile operating systems in our analytics):

h2 {
   font-family: "Skolar Bold",
   AvenirNext-Bold, "Avenir Bold",
   "Roboto Slab", "Droid Serif",
   "Segoe UI Bold",
   Georgia, "Times New Roman", Times, serif;
}

So readers would first see a mobile OS font (or another fallback font), probably one that they are quite familiar with on their device, and then, once the fonts had loaded, they would see a switch, triggered by WebFontLoader. However, we discovered that after switching to WebFontLoader we started seeing FOUT way too often, the HTTP cache was being quite unreliable again, and the permanent switch from a fallback font to the web font was quite annoying, basically ruining the reading experience.

So we looked for alternatives. One solution was to include the @font-face directive only on larger screens by wrapping it in a media query, thus avoiding loading web fonts on mobile devices and in legacy browsers altogether. (In fact, if you declare web fonts in a media query, they will be loaded only when the media query matches the screen size. So no performance hit there.) Obviously it helped us improve performance on mobile devices in no time, but we didn’t feel right with having a “simplified” reading experience on mobile devices. So it was a no-go, too.

What else could we do? The only other option was to improve the caching of fonts. We couldn’t do much with HTTP cache, but there was one option we hadn’t looked into: storing fonts in AppCache or localStorage. Jake Archibald’s article on the beautiful complexity of AppCache [47] led us away from AppCache to experiment with localStorage, a technique [48] that The Guardian’s team was using at the time.

Now, offline caching comes with one major requirement: you need to have the actual font files to be able to cache them locally in the client’s browser. And you can’t cache a lot because localStorage space is very limited [49], sometimes with just 5 MB available per domain. Luckily, the Fontdeck guys were very helpful and forthcoming with our undertaking. Font delivery services usually require you to load files and have a synchronous or asynchronous callback to count the number of impressions, yet Fontdeck was perfectly fine with us grabbing the WOFF files from Google Chrome’s cache and setting up a “flat” pricing based on the number of page impressions in recent history.

So we grabbed the WOFF files and embedded them, base64-encoded, in a single CSS file, moving from six external HTTP requests of about 50 KB each to at most one HTTP request on the first load and 400 KB of CSS. Obviously, we didn’t want this file to be loaded on every visit. So if localStorage is available on the user’s machine, we store the entire CSS file in localStorage, set a cookie and switch from the fallback font to the web font. This switch usually happens once at most, because on subsequent visits we check whether the cookie has been set and, if so, retrieve the fonts from localStorage (causing about 50 ms of latency) and display the content in the web font right away. Just before you ask: yes, reading from and writing to localStorage is much slower than retrieving files from the HTTP cache [50], but it proved to be a bit more reliable in our case.
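
The embedding itself can be automated as a build step. Here is a rough sketch of such a step, run with Node.js; the file names, font families and output path are examples, not our actual setup:

// build-fonts.js — base64-encode WOFF files into a single CSS file
var fs = require("fs");

var fonts = [
  { family: "Skolar Bold", weight: "bold", file: "fonts/skolar-bold.woff" },
  { family: "Proxima Nova", weight: "normal", file: "fonts/proxima-nova.woff" }
];

var css = fonts.map(function (font) {
  var base64 = fs.readFileSync(font.file).toString("base64");
  return "@font-face {\n" +
         "  font-family: \"" + font.family + "\";\n" +
         "  font-weight: " + font.weight + ";\n" +
         "  src: url(\"data:application/font-woff;base64," + base64 + "\") format(\"woff\");\n" +
         "}";
}).join("\n\n");

// the result is the single CSS file that gets stored in localStorage
fs.writeFileSync("web-fonts.css", css);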

Browserscope graph: localStorage vs. HTTP cache [51]
Yes, localStorage is much slower than the HTTP cache [52], but it’s more reliable. Storing fonts in localStorage isn’t the perfect solution, but it helped us improve performance dramatically.

If the browser doesn’t support localStorage, we include fonts with good ol’ link href and, well, frankly just hope for the best — that the fonts will be properly cached and persist in the user’s browser cache. For browsers that don’t support WOFF [53] (IE8, Opera Mini, Android <= 4.3), we provide external URLs to fonts with older font MIME types, hosted on Fontdeck.

Now, if localStorage is available, we still don’t want it to block the rendering of the content. And we don’t want to see FOUT every single time a user loads the page. That’s why we have a little JavaScript snippet in the header, before the body element: it checks whether a cookie has been set and, if not, loads the web fonts asynchronously after the page has started rendering. Of course, we could have avoided the switch on the first visit by just storing the fonts in localStorage right away, but we decided that one switch is acceptable, because our typography is important to our identity.

The script was written, tested and documented by our good friend Horia Dragomir [54]. Of course, it’s available as a gist on GitHub [55]:

<script type="text/javascript">
    (function () {
      "use strict";
      // once cached, the css file is stored on the client forever unless
      // the URL below is changed. Any change will invalidate the cache
      var css_href = './web-fonts.css';
      // a simple event handler wrapper
      function on(el, ev, callback) {
        if (el.addEventListener) {
          el.addEventListener(ev, callback, false);
        } else if (el.attachEvent) {
          el.attachEvent("on" + ev, callback);
        }
      }
      
      // if we have the fonts in localStorage or if we've cached them using the native browser cache
      if ((window.localStorage && localStorage.font_css_cache) || document.cookie.indexOf('font_css_cache') > -1){
        // just use the cached version
        injectFontsStylesheet();
      } else {
       // otherwise, don't block the loading of the page; wait until it's done.
        on(window, "load", injectFontsStylesheet);
      }
      
      // quick way to determine whether a css file has been cached locally
      function fileIsCached(href) {
        return window.localStorage && localStorage.font_css_cache && (localStorage.font_css_cache_file === href);
      }
 
      // time to get the actual css file
      function injectFontsStylesheet() {
       // if this is an older browser
        if (!window.localStorage || !window.XMLHttpRequest) {
          var stylesheet = document.createElement('link');
          stylesheet.href = css_href;
          stylesheet.rel = 'stylesheet';
          stylesheet.type = 'text/css';
          document.getElementsByTagName('head')[0].appendChild(stylesheet);
          // just use the native browser cache
          // this requires a good expires header on the server
          document.cookie = "font_css_cache";
        
        // if this isn't an old browser
        } else {
           // use the cached version if we already have it
          if (fileIsCached(css_href)) {
            injectRawStyle(localStorage.font_css_cache);
          // otherwise, load it with ajax
          } else {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", css_href, true);
            on(xhr, 'load', function () {
              if (xhr.readyState === 4) {
                // once we have the content, quickly inject the css rules
                injectRawStyle(xhr.responseText);
                // and cache the text content for further use
                // notice that this overwrites anything that might have already been previously cached
                localStorage.font_css_cache = xhr.responseText;
                localStorage.font_css_cache_file = css_href;
              }
            });
            xhr.send();
          }
        }
      }
 
      // this is the simple utility that injects the cached or loaded css text
      function injectRawStyle(text) {
        var style = document.createElement('style');
        style.innerHTML = text;
        document.getElementsByTagName('head')[0].appendChild(style);
      }
 
    }());
</script>

During testing of the technique, we discovered a few surprising problems. Because the cache isn’t persistent in WebViews, fonts do load asynchronously in applications such as TweetDeck and Facebook, yet they don’t remain in the cache once the window is closed. In other words, with every WebView visit, the fonts are re-downloaded. Some old BlackBerry devices seemed to clear cookies and delete the cache when the battery was running out. And depending on the configuration of the device, sometimes fonts do not persist in Mobile Safari either.

Still, once the snippet was in place, articles started rendering much faster. By deferring the loading of web fonts and storing them in localStorage, we avoided around 700 ms of delay, and thus shortened the critical path significantly by avoiding the latency of retrieving all the fonts. The result was quite impressive for the first load of an uncached page, and it was even more impressive for subsequent visits, since we were able to reduce the latency caused by web fonts to just 40 to 50 ms. In fact, if we had to mention just one performance improvement on the website, deferring web fonts is by far the most effective.

At this point, we haven’t even considered using the new WOFF2 format [56] for fonts just yet. Currently supported in Chrome and Opera, it promises better compression for font files and has already shown remarkable results. In fact, The Guardian was able to cut 200 ms of latency and 50 KB of file weight [57] by switching to WOFF2, and we intend to look into moving to WOFF2 soon as well.

Of course, grabbing WOFFs might not always be an option for you, but it wouldn’t hurt just to talk to type foundries to see where you stand or to work out a deal to host fonts “locally.” Otherwise, tweaking WebFontLoader for Typekit and Fontdeck is definitely worth considering.

Dealing With JavaScript

With the goal of removing all unnecessary assets from the critical rendering path, the second target we decided to deal with was JavaScript. And it’s not like we particularly dislike JavaScript for some reason, but we always tend to prefer non-JavaScript solutions to JavaScript ones. In fact, if we can avoid JavaScript or replace it with CSS, then we’ll always explore that option.

Back in 2012, we weren’t using a lot of scripts on the page, yet displaying advertising via OpenX depended on jQuery, which made it way too easy to lazily approach simple, straightforward tasks with ready-to-use jQuery plugins. At the time, we also used Respond.js to emulate responsive behaviour in legacy browsers. However, Internet Explorer 8 usage dropped significantly between 2012 and 2014: from 4.7% before the redesign to 1.43%, with a dropping tendency every single month. So we decided to deliver a fixed-width layout with a specific IE8.css style sheet to those users, and removed Respond.js altogether.

As a strategic decision, we decided to defer the loading of all JavaScripts until the page has started rendering and we looked into replacing jQuery with lightweight modular JavaScript components.

jQuery was tightly bound to ads, and ads were supposed to start displaying as fast as possible, so to make it happen, we had to deal with advertising first. The decision to defer the loading of ads wasn’t easy to get agreement on, but we managed to make a convincing argument that better performance would increase click rates because users would see the content sooner. That is, on every page, readers would be attracted by the high-quality content and then, when the ads kick in, would pay attention to those squares in the sidebar as well.

Florian Sander [58], our partner in crime when it comes to advertising, rewrote the script for our banner ads so that banners would be loaded only after the content has started rendering, and only then would the advertising spots be put into place. Florian was able to get rid of two render-blocking HTTP requests that the ad script normally generated, and we were able to remove the dependency on jQuery by rewriting the script in vanilla JavaScript.

Obviously, because the sidebar’s ad content is generated on the fly and is loaded after the render tree has been constructed, we started seeing reflows (this still happens when the page is being constructed). Because we used to load ads before the content, the entire page (with pretty much everything) used to load at once. Now, we’ve moved to a more modular structure, grouping together particular parts of the page and queuing them to load after each other. Obviously, this has made the overall experience on the site a bit noisier because there are a few jumps here and there, in the sidebar, in the comments and in the footer. That was a compromise we went for, and we are working on a solution to reserve space for “jumping” elements to avoid reflows as the page is being loaded.

Deferring Non-Critical JavaScript

When the prospect of removing jQuery altogether became tangible as a long-term goal, we started working step by step to decouple jQuery dependencies from the library. We rewrote the script to generate footnotes for the print style sheet (later replacing it with a PHP solution), rewrote the functionality for rating comments, and rewrote a few other scripts. Actually, with our savvy user base and a solid share of smart browsers, we were able to move to vanilla JavaScript quite quickly. Moreover, we could now move scripts from the header to the footer to avoid blocking construction of the DOM tree. In mid-July, we removed jQuery from our code base entirely.

We wanted full control of what is loaded on the page and when. Specifically, we wanted to ensure that no JavaScript blocks the rendering of content at any point. So, we use the Defer Loading JavaScript [59] script to load JavaScript after the load event by injecting the JavaScript after the DOM and CSSOM have already been constructed and the page has been painted. Here’s the snippet that we use on the website, with the defer.js script (which is loaded asynchronously after the load event):

// inject the deferred script only after the page has fully loaded,
// so it never blocks the construction of the DOM or the first paint
function downloadJSAtOnload() {
   var element = document.createElement("script");
   element.src = "defer.js";
   document.body.appendChild(element);
}
// attach the handler to the load event in a cross-browser way
if (window.addEventListener)
   window.addEventListener("load", downloadJSAtOnload, false);
else if (window.attachEvent)
   window.attachEvent("onload", downloadJSAtOnload);
else
   window.onload = downloadJSAtOnload;

However, because script-injected asynchronous scripts are considered harmful [60] and slow (they block the browser’s speculative parser), we might be looking into using the good ol’ defer and async attributes instead. In the past, we couldn’t use async for every script because we needed jQuery to load before its dependencies; so we used defer, which respects the loading order of scripts. With jQuery out of the picture, we can now load scripts asynchronously, and fast. In fact, by the time you read this article, we might already be using async.

Basically, we just deferred the loading of all JavaScripts that we identified previously, such as syntax highlighter and comment ratings, and cleared a path in the header for HTML and CSS.

Inlining Critical CSS

That wasn’t good enough, though. Performance did improve dramatically; however, even with all of these optimizations in place, we didn’t hit that magical Speed Index value of under 1000. In light of the ongoing discussion about inline CSS and above-the-fold CSS, as recommended by Google [61], we looked into more radical ways to deliver content quickly. To avoid an HTTP request when loading CSS, we measured how fast the website would be if we were to load critical CSS inline and then load the rest of the CSS once the page has rendered.

Scott Jehl [62]
Scott Jehl’s article [63] explains how exactly to extract and inline critical CSS.

But what exactly is critical CSS? And how do you extract it from a potentially complex code base? As Scott Jehl points out [64], critical CSS is the subset of CSS that is needed to render the top portion of the page across all breakpoints. What does that mean? Well, you would decide on a certain height that you would consider to be “above the fold” content — it could be 600, 800 or 1200 pixels or anything else — and you would collect into their own style sheet all of the styles that specify how to render content within that height across all screen widths.

Then you inline those styles in the head, and thus give the browser everything it needs to start rendering the visible portion of the page — within one single HTTP request. You’ve heard it a few times by now: everything else is deferred until after the first initial rendering. You avoid an HTTP request, and you load the full CSS asynchronously, so once the user starts scrolling, the full CSS will (hopefully) already have loaded.

Visually speaking, content will appear to render more quickly, but there will also be more reflowing and jumping on the page. So, if a user has followed a link to a particular comment below the “fold”, then they will see a few reflows as the website is being constructed because the page is rendered with critical CSS first (there is just so much we can fit within 14 KB!) and adjusted later with the complete CSS. Of course, inline CSS isn’t cached; so, if you have critical CSS and load the complete CSS on rendering, it’s useful to set a cookie, so that inline styles aren’t inlined with every single load. The drawback of course is that you might have duplicate CSS because you would be defining styles both inline and in the full CSS, unless you’re able to strictly separate them.

Because we had just refactored our CSS code base, identifying critical CSS wasn’t very difficult. Obviously, there are smart tools [65, 66] that analyze the markup and CSS, identify critical CSS styles and export them into a separate file during the build process, but we were able to do it manually. Again, you have to keep in mind that 14 KB is your budget for HTML and CSS, so in the end we had to rename a few classes here and there, and compress the CSS as well.

We analyzed the first 800px, checking the inspector for the CSS that was needed and separating our style sheet into two files – and actually that was pretty much it. One of those files, above-the-fold.css, is minified and compressed, and its content is placed inline in the head of our document as early as possible, without blocking rendering. The other file, our full CSS file, is then loaded with JavaScript after the content has loaded; and if JavaScript isn’t available for some reason, or the user is on a legacy browser, we’ve put the full CSS file inside a noscript tag at the end of the head, so nobody gets an unstyled HTML page.
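
A simplified sketch of that loading logic is shown below; the file name, the cookie name and the exact approach are illustrative rather than our precise implementation:

// append the full style sheet without blocking the critical rendering path
function loadFullCSS(href) {
  var link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = href;
  document.getElementsByTagName("head")[0].appendChild(link);
}

if (document.cookie.indexOf("full_css_cached") === -1) {
  // first visit: critical CSS is already inlined in the head, so the
  // full style sheet can wait until the page has rendered
  window.addEventListener("load", function () {
    loadFullCSS("/css/full.css");
    // remember that the full CSS should now sit in the HTTP cache, so the
    // server doesn't need to inline the critical CSS on the next response
    document.cookie = "full_css_cached=1; path=/";
  });
} else {
  // repeat visit: reference the (hopefully cached) full CSS right away
  loadFullCSS("/css/full.css");
}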

Was It All Worth It?

Because we’ve just implemented these optimizations, we haven’t been able to measure their impact on traffic yet, but we’ll publish those results later as well. Obviously, we did notice a quite remarkable technical improvement, though. By deferring and caching web fonts, inlining critical CSS and optimizing the critical rendering path for the first 14 KB, we were able to achieve dramatic improvements in loading times. The start rendering time settled at around 1 s for an uncached page on 3G and around 700 ms (including latency!) on subsequent loads.

WebPageTest waterfall chart [67]
We’ve been using WebPageTest [68] a lot for running tests. Our waterfall chart was becoming better over time and reflected the priorities we had defined earlier. (Large view [69])

On average, Smashing Magazine’s front page makes 45 HTTP requests and weighs 440 KB on the first uncached load. Because we heavily cache everything but ads, subsequent visits produce around 15 HTTP requests and 180 KB of traffic. The time to first byte is still around 300–600 ms (which is a lot), yet the start render time is usually under 0.7 s [70] on a DSL connection in Amsterdam (for the very first, uncached load), and usually under 1.7 s on a slow 3G connection [71]. On a fast cable connection, the site starts rendering within 0.8 s [72], and on a fast 3G connection, within 1.1 s [73]. Obviously, the results vary significantly depending on the time to first byte, which we can’t improve just yet at the time of writing. That’s the only factor that introduces unpredictability into the loading process, and as such it has a decisive impact on the overall performance.

Just by following the basic guidelines of our colleagues mentioned above and Google’s recommendations, we were able to achieve a Google PageSpeed score of 97–99 [74], both on desktop and on mobile. The score varies depending on the quality and the optimization level of the advertising assets displayed randomly in the sidebar. Again, the main culprit is the server’s response time — not for long, though.

Google PageSpeed score: 99 [75]
After a few optimizations, we achieved a Google PageSpeed score of 99 on mobile [76].

99 out of 100 points on desktop with the Google PageSpeed tool [77]
We got a Google PageSpeed score of 99 on desktop [78] as well.

By the way, Scott Jehl has also published a wonderful article on the front-end techniques [79] Filament Group uses to extract critical CSS, load it inline while loading the full CSS afterwards, and avoid downloading overhead. Patrick Hamann’s talk “Breaking News at 1000ms” [80] explains a few techniques that The Guardian is using to hit the Speed Index 1000 mark. Both are definitely worth reading and watching, and indeed quite similar to what we implemented on this very site.

Work To Be Done

While the results we were able to achieve are quite satisfactory, there is still a lot of work to be done. For example, we haven’t considered optimizing the delivery of images just yet, and we are now adjusting our editorial process to integrate the new picture element and srcset/sizes with Picturefill 2.1.0 [81], to make loading even faster on mobile devices. At the moment, all images have a fixed width of 500px and are basically scaled down on smaller views. Every image is optimized and compressed, but we don’t deliver different images for different devices — and no, we aren’t delivering any Retina images at all. That is all about to change soon.

While Smashing Magazine’s home page is well optimized, some pages and articles still perform poorly. Articles with many comments are quite slow because we use Gravatar.com [82] for comment avatars. Because each Gravatar URL is unique, each comment generates one HTTP request, slowing down the loading of the overall page. We are going to defer the loading of Gravatars and cache them locally with a Gravatar Cache WordPress plugin [83]. We might have already done it by the time you read this.
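
On the client side, one possible way to defer those requests (not necessarily what the plugin does; the data-src attribute and the selector are hypothetical) is to render avatars without a real src and swap the actual Gravatar URLs in only after the page has fully loaded:

// swap in Gravatar URLs once the page has finished loading
window.addEventListener("load", function () {
  var avatars = document.querySelectorAll("img.avatar[data-src]");
  for (var i = 0; i < avatars.length; i++) {
    avatars[i].src = avatars[i].getAttribute("data-src");
  }
});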

We’re playing around with DNS prefetching and HTML5 preloading to resolve DNS lookups way ahead of time (for example, for Gravatars and advertising). However, we are being careful and hesitant here, because we don’t want to create a loading overhead for users on slow or expensive connections. Besides, we’ve added third-party meta data [84] to make our articles a bit easier to share. So, if you link to an article on Facebook, Facebook will pull an optimized image, a description and a title from our meta data, which is crafted individually for each article. We’ve also happily noticed that article pages scroll smoothly at 60fps [85], even with relatively large images and ads.

SPDY browser support [86]
Yes, we can use SPDY today [87]. We just need to install the SPDY Nginx module [88] or the Apache SPDY module [89]. This is what we are going to tackle next.

Despite all of our optimizations, the main issue still hasn’t been resolved: very slow servers and First Byte response times. We’ve been experiencing difficulties with our current server setup and architecture but are tied to a long-term contract; still, we will be moving to a new server soon. We’ll take that opportunity to also move to SPDY [90] on the server, a predecessor of HTTP 2.0 (which is well supported in major browsers [91], by the way), and we are looking into using a content delivery network as well.

Performance Optimization Strategy

To sum up, optimizing the performance of Smashing Magazine took quite an effort to figure out, yet many aspects of the optimization can be achieved very quickly. In particular, front-end optimization is quite easy and straightforward as long as you have a shared understanding of priorities. Yes, that’s right: you optimize content delivery, and defer everything else.

Strategically speaking, the following could be your performance optimization roadmap:

  • Remove blocking scripts from the header of the page.
  • Identify and defer non-critical CSS and JavaScript.
  • Identify critical CSS and load it inline in the head, and then load the full CSS after rendering. (Make sure to set a cookie to prevent inline styles from loading with every page load.)
  • Keep all critical HTML and CSS to under 14 KB, and aim for a Speed Index of under 1000.
  • Defer the loading of Web fonts and store them in localStorage or AppCache.
  • Consider using WOFF2 to further reduce latency and file size of the web fonts.
  • Replace JavaScript libraries with leaner JavaScript modules.
  • Avoid unnecessary libraries, and look into options for removing Respond.js and Modernizr; for example, by “cutting the mustard” [92] to separate browsers into buckets (see the sketch below). Legacy browsers could get a fixed-width layout. Clever SVG fallbacks [93] also exist.
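
For reference, the BBC’s “cutting the mustard” capability check boils down to a single test along these lines; the enhancement bundle name is hypothetical:

// browsers that pass the test get the enhanced experience;
// anything that fails simply keeps the core HTML and CSS
if ("querySelector" in document &&
    "localStorage" in window &&
    "addEventListener" in window) {
  var script = document.createElement("script");
  script.src = "/js/enhancements.js"; // hypothetical enhancement bundle
  script.async = true;
  document.body.appendChild(script);
}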

That’s basically it. By following these guidelines, you can make your responsive website really, really fast.

Conclusion

Yes, finding just the right strategy to make this very website fast took a lot of experimentation, blood, sweat and cursing. Our discussions kept circling around next steps and around critical and not-so-critical components, and sometimes we had to take three steps back in order to pivot in a different direction. But we learned a lot along the way, and we now have a pretty clear idea of where we are heading and, most importantly, how to get there.

So here you have it. A little story about the things that happened on this little website over the last year. If you notice any issues, please let us know on Twitter @smashingmag [94] and we’ll hunt them down for good.

Ah, and thanks for reading us throughout all these years. It means a lot. You are quite smashing indeed. You should know that.

A big “thank you” to Patrick Hamann and Jake Archibald for the technical review of the article as well as Andy Hume and Tim Kadlec for their fantastic support throughout the years. Also a big “thank you” to our front-end engineer, Marco, for his help with the article and for his thorough and tireless front-end work, which involved many experiments, failures and successes along the way. Also, kind thanks to the Inpsyde team and Florian Sander for technical implementations.

A final thank you goes out to Iris, Melanie, Cosima and Markus for keeping an eye out for those nasty bugs and looking after the content on the website. Without you, this website wouldn’t exist. And thank you for having my back all this time. I respect and value every single bit of it. You rock.

(al, vf, il)

Footnotes

  1. https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/
  2. https://www.smashingmagazine.com/2015/11/why-performance-matters-part-2-perception-management/
  3. https://www.smashingmagazine.com/2016/02/preload-what-is-it-good-for/
  4. https://www.smashingmagazine.com/2016/02/getting-ready-for-http2/
  5. https://www.smashingmagazine.com/2016/12/front-end-performance-checklist-2017-pdf-pages/
  6. http://timkadlec.com/2012/01/work-to-be-done/
  7. http://timkadlec.com/2012/01/work-to-be-done/
  8. http://timkadlec.com/2012/01/work-to-be-done/
  9. http://www.w3.org/TR/resource-priorities/#intro-download-priority
  10. https://www.smashingmagazine.com/wp-content/uploads/2014/09/browser-stats.png
  11. https://www.smashingmagazine.com/wp-content/uploads/2014/09/browser-stats.png
  12. https://twitter.com/nice2meatu
  13. https://www.smashingmagazine.com/2013/10/29/get-up-running-grunt/
  14. https://github.com/gruntjs/grunt-contrib-less
  15. https://github.com/nDmitry/grunt-autoprefixer
  16. https://github.com/gruntjs/grunt-contrib-cssmin
  17. https://github.com/gruntjs/grunt-contrib-watch
  18. http://inpsyde.com/en/
  19. https://twitter.com/markodugonjic/statuses/478980625215782912
  20. https://twitter.com/smash_it_on
  21. https://twitter.com/mel_in_media
  22. https://twitter.com/indysigner
  23. http://www.twitter.com/smashingmag
  24. https://github.com/scottjehl
  25. https://github.com/guardian
  26. https://github.com/BBC-News
  27. http://filamentgroup.com/lab/performance-rwd.html
  28. http://timkadlec.com/2014/01/fast-enough/
  29. https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
  30. https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
  31. http://www.lukew.com/ff/entry.asp?1756
  32. http://www.webpagetest.org/
  33. https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
  34. http://chimera.labs.oreilly.com/books/1230000000545/ch10.html#SPEED_PERFORMANCE_HUMAN_PERCEPTION
  35. http://chimera.labs.oreilly.com/books/1230000000545
  36. http://chimera.labs.oreilly.com/books/1230000000545
  37. https://www.youtube.com/watch?v=YV1nKLWoARQ
  38. http://timkadlec.com/2014/05/performance-budgeting-with-grunt/
  39. https://speakerdeck.com/andyhume/anatomy-of-a-responsive-page-load-whiskyweb-2013
  40. https://vimeo.com/77967591
  41. https://www.smashingmagazine.com/wp-content/uploads/2014/09/separation-concerns.png
  42. https://www.smashingmagazine.com/wp-content/uploads/2014/09/separation-concerns.png
  43. http://ianfeather.co.uk/web-fonts-and-the-critical-path/
  44. http://ianfeather.co.uk/web-fonts-and-the-critical-path/
  45. http://ianfeather.co.uk/web-fonts-and-the-critical-path/
  46. https://github.com/typekit/webfontloader
  47. http://alistapart.com/article/application-cache-is-a-douchebag
  48. https://github.com/ahume/webfontjson
  49. http://www.html5rocks.com/en/tutorials/offline/quota-research/
  50. https://github.com/addyosmani/basket.js/issues/24
  51. https://github.com/addyosmani/basket.js/issues/24
  52. https://github.com/addyosmani/basket.js/issues/24
  53. http://caniuse.com/#search=woff
  54. https://twitter.com/hdragomir
  55. https://gist.github.com/hdragomir/8f00ce2581795fd7b1b7
  56. https://gist.github.com/sergejmueller/cf6b4f2133bcb3e2f64a
  57. https://twitter.com/patrickhamann/status/497767778703933442
  58. http://www.kreativrauschen.de/
  59. http://www.feedthebot.com/pagespeed/defer-loading-javascript.html
  60. https://www.igvita.com/2014/05/20/script-injected-async-scripts-considered-harmful/
  61. https://developers.google.com/web/fundamentals/performance/critical-rendering-path/page-speed-rules-and-recommendations
  62. http://www.filamentgroup.com/lab/performance-rwd.html
  63. http://www.filamentgroup.com/lab/performance-rwd.html
  64. http://www.filamentgroup.com/lab/performance-rwd.html
  65. http://css-tricks.com/authoring-critical-fold-css/
  66. https://github.com/addyosmani/above-the-fold-css-tools
  67. http://www.webpagetest.org/result/140904_H4_T5R/1/details/
  68. http://www.webpagetest.org/
  69. http://www.webpagetest.org/result/140904_H4_T5R/1/details/
  70. http://www.webpagetest.org/result/140904_ZJ_T62/
  71. http://www.webpagetest.org/result/140904_Y5_SXS/
  72. http://www.webpagetest.org/result/140904_DB_T5Y/
  73. http://www.webpagetest.org/result/140904_H4_T5R/
  74. https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com&tab=desktop
  75. https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com&tab=mobile
  76. https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com&tab=mobile
  77. https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com&tab=desktop
  78. https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com&tab=desktop
  79. http://filamentgroup.com/lab/performance-rwd.html
  80. https://www.youtube.com/watch?v=dfweWyVScaI
  81. http://scottjehl.github.io/picturefill/
  82. https://en.gravatar.com/
  83. https://wordpress.org/plugins/fv-gravatar-cache/
  84. http://alistapart.com/article/like-able-content-spread-your-message-with-third-party-metadata
  85. http://jankfree.org
  86. http://caniuse.com/#search=SPDY
  87. http://caniuse.com/#search=SPDY
  88. http://nginx.org/en/docs/http/ngx_http_spdy_module.html
  89. https://code.google.com/p/mod-spdy/
  90. https://developers.google.com/speed/spdy/
  91. http://caniuse.com/#search=SPDY
  92. http://responsivenews.co.uk/post/18948466399/cutting-the-mustard
  93. http://css-tricks.com/svg-fallbacks/
  94. http://www.twitter.com/smashingmag


Vitaly Friedman loves beautiful content and doesn’t like to give in easily. Vitaly is a writer, speaker, author and editor-in-chief of Smashing Magazine. He runs responsive Web design workshops and webinars, and loves solving complex UX, front-end and performance problems in large companies. Get in touch.

  1. 1

    Jens Grochtdreis

    September 8, 2014 1:33 pm

    Happy Birthday and thanks for all the interesting articles of the past and our common future.

    8
  2. 2

    Happy Birthday and well done on the front-end optimization sprint. This is really impressive stuff. Also, big plus points for the write-up <333

    19
  3. 3

    Hell of a job on this, guys. Great work.

    4
  4. 5

    Martin LeBlanc

    September 8, 2014 1:56 pm

    Happy birthday!

    2
  5. 6

    An excellent overview, Vitaly, and well done on hitting 99/100 for both the mobile and desktop experience… something that I would have thought impossible with the ad network side of things.

    It’s always great to see people sharing their performance approaches, I’ll be borrowing a few of these myself.

    5
  6. 7

    Well done. Good job. And happy birthday of course ;)

    2
  7. 8

    Happy birthday Smashing Magazine!

    2
  8. 9

    Conrad O'Connell

    September 8, 2014 2:10 pm

    Woah, this is incredibly detailed. Great stuff here – I’m not a dev, but looking into this stuff has given me a few things to look at on my own site.

    2
  9. 10

    Dave Thackeray

    September 8, 2014 2:14 pm

    I cannot express in words how much admiration I have for your team in accomplishing this. A huge feat and worthy of acclaim.

    Taking cues from the BBC and The Guardian was a wise move; stamping your own individual flair on the situation was a masterstroke.

    Good job all.

    7
  10. 11

    Fantastic article. Tremendously helpful. Congrats and Happy Birthday!

    2
  11. 12

    Dan Sunderland

    September 8, 2014 3:13 pm

    Well done guys, fantastic writeup. Within what you can say re: commercially sensitive information, can you comment on the outcome of the changes to the ad loading, and if it’s had an effect?

    3
    • 13

      Vitaly Friedman

      September 8, 2014 3:18 pm

      Thanks Dan! At the moment we are monitoring various metrics to get some hard data on this, but nothing worth sharing just yet. Once we have enough data, we’ll definitely share our findings, too!

      3
  12. 15

    Vitaly Friedman

    September 8, 2014 3:20 pm

    Thank you so much for your kind words and support, everyone! It’s much appreciated! I must emphasize at this point that it was actually a tremendous community effort, gathering useful tips and techniques from active members of the community. So thank you, everyone, for sharing your learnings, insights and experiments!

    5
  13. 16

    Cadu De Castro Alves

    September 8, 2014 3:20 pm

    Happy birthday. This post was a big lesson for me! Thanks!

    3
  14. 17

    Bastian Allgeier

    September 8, 2014 3:25 pm

    Happy Birthday and congrats for the performance updates! I’m sitting in a Hotel in Portugal with a miserable wifi and the site is still super fast! Very impressive!

    5
  15. 18

    I ♥ reading articles like this! There’s something about the low-level understanding of how the web works that is paramount to getting the most oomph out of your website’s performance. This was a great write up and I thoroughly enjoyed reading it.

    It must have been an amazing journey to go through, evaluating each of these separate components and devising a best-method approach. It’s good to see you relied more on iteration than on instinct when a preliminary result proved to be beneficial. Without that comprehension of the “how”, just relying on luck never seemed like a feasible game plan.

    Kudos to the Smashing team (and all 3rd parties involved) on the work done.

    2
    • 19

      Vitaly Friedman

      September 10, 2014 2:41 pm

      Thank you, Aaron, it’s been quite a journey indeed! We’ll keep digging and we’ll keep publishing articles about our findings. In fact, there is a lot to be done still, and we are heading there at this very moment.

      0
  16. 20

    Aurelio De Rosa

    September 8, 2014 4:15 pm

    Wow, that’s an article. Thank you very much for sharing your experience with optimizing Smashing Magazine. I’m sure it’ll be a source of inspiration for a lot of developers.

    2
  17. 21

    Evert Albers (@evert_)

    September 8, 2014 5:01 pm

    So this is what you call a “little story”…? Wow.

    Holding my breath for the full version, not to mention the epic movie.

    3
  18. 22

    Fantastic article! Thanks for all the detail you provided.

    2
  19. 23

    Joseph R. B. Taylor

    September 8, 2014 6:00 pm

    Great read, great insight into your own page speed processes! I had never thought to try and use localStorage for web fonts – I’ll have to dig into that!

    Also thanks for the link to the related articles and resources.

    1
  20. 24

    Thank you for all the effort that went into not only the performance of Smashing Magazine but the writing of this excellent article.

    This one is one to revisit.

    1
  21. 25

    Great article. A summary of what building for the web means today.
    I would love to read about how the change to SPDY goes (when it happens).
    Thank you very much for sharing all that.

    1
  22. 26

    Holy smokes what an article. I’m gonna read that baby at home. Happy Birthday and I hope I’ll meet some of you guys for the first time in Freiburg next week.

    2
  23. 27

    Happy Birthday :)

    1
  24. 28

    Daniele Piccone

    September 8, 2014 6:34 pm

    A great lesson learned for the last couple of years in web development. Thanks.

    1
  25. 29

    Happy Birthday SmashingMag! You are one of those great resources which have been helping people (like me) to learn all great Web design and dev stuff for free.

    This anniversary post is also very useful for others to learn and improve their sites in different aspects. I love how beautifully you manage things here and I’m amazed by those nice Page Speed insights!

    I want to congratulate all the SmashingMag folks and wish you achieve more success in the years ahead. Keep doing the great work :)

    2
  26. 30

    Great work on the website optimization.
    I am wondering, did you experiment with Google’s mod_pagespeed in your journey?

    1
    • 31

      Vitaly Friedman

      September 8, 2014 8:24 pm

      Thanks Yaron! No, actually we haven’t looked into Google’s mod-pagespeed at all. Good suggestion though!

      1
  27. 32

    Prathamesh Satpute

    September 8, 2014 7:59 pm

    Happy birthday Smashing Magazine! Excellent Article …..

    1
  28. 33

    Hip Hip Hurray! You have certainly evolved the website into one of the top magazines on the internet, Your articles have helped millions and inspired trillions! Thanks for everything which you gave to us through your knowledge and experience.

    1
  29. 34

    Thanks for sharing such a wonderful article. I liked the way you analyzed the first 800px and separated the CSS.
    Lessons learnt:
    * Time to move to vanilla JS
    * Picturefill
    * defer.js
    * Font cache
    * WOFF2
    * Adding native OS fonts in breakpoints
    Happy B’day Smashing Magazine. :)

    1
  30. 35

    That sounds like an awful lot of work, very impressive! A few notes:

    – I don’t really understand the recent jQuery hate. Yes, one should not have a ton of jQuery plugins, but the library itself is fine, not that large, and usually cached when loaded from a CDN.

    – As mentioned above already by another commenter, diversify in performance analysis tools. They can be very deceiving, and also conflict with each other.

    – A tip: have a look at Daan Jobsis’ technique for images. When executed well, you can serve Retina images, high res images, possibly all in a single file format, also for mobile. Hard to believe, but it works.

    – Question: It’s still not clear to me how you would separate inline CSS from extended CSS in a real-world situation, where a site does not consist of a single page but instead has dozens or even 100+ different page templates.

    1
    • 36

      To answer your question:
      It depends (isn’t this the most wonderful answer since 42?).

      There are wonderful tools like https://github.com/filamentgroup/grunt-criticalcss you can integrate in your build process.
      There are also online tools doing just that: http://jonassebastianohlsson.com/criticalpathcssgenerator/

      Those tools are doing a lot of the hard work for you. Be sure to check the outcome though as the fine-tuning still has to be done by you.
      I don’t want to say that those solutions are unreliable but you’d be a fool for not testing what any tool is giving you. ^_^
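      For instance, a minimal Gruntfile sketch for grunt-criticalcss could look like the following – the URL, viewport size and file paths are placeholders you would adapt to your own project:

      module.exports = function (grunt) {
        grunt.initConfig({
          criticalcss: {
            home: {
              options: {
                url: 'http://localhost:8080/',   // page to analyse (placeholder)
                width: 1200,                     // viewport used to decide what counts as critical
                height: 800,
                filename: 'css/all.css',         // full stylesheet to extract from
                outputfile: 'css/critical.css'   // generated above-the-fold CSS
              }
            }
          }
        });

        grunt.loadNpmTasks('grunt-criticalcss');
        grunt.registerTask('default', ['criticalcss']);
      };

      The generated critical.css is what you would then inline in the head, while the full stylesheet is loaded asynchronously afterwards.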

      Also, no matter how many templates there are, your header and navigation section should mostly be the same all across the board. If you got that covered and some reflows are okay with you (and a possible client), then you’re already fine and ready to go. I think it’s the/your/the client’s definition of “critical” which makes the difference between the inline CSS and what you can load (later on) as the main stylesheet.

      On a side note:
      What’s sure is that you can save precious data by separating all styles related to print into a print stylesheet and loading it after the site is done loading.

      4
  31. 37

    Only had time for a quick skim, but I think the most important lesson here is:

    “Prioritization And Separation Of Concerns”.

    Excellent advice.

    Too often, I see efforts wasted on low-impact or non-essential functions. Although I mainly work on backend optimizations, I find the prioritization aspect is often ignored when it comes to performance optimization.

    In a recent case, we found 20% of page processing time was due to a single widget at the bottom of the page.

    The outsourced web development team had been working for a month trying to improve site performance but never really stopped to ask, “Do we really need all of this stuff?”

    When asked if the widget was necessary, the answer was no.

    Removing the widget resulted in a 15% improvement. The process took 10 minutes. We then spent 4.5 hours on other diagnostics and tuning to only gain another 5%.

    So yes — prioritize first. It can save you a ton of time and frustration.

    3
  32. 38

    Happy Birthday Smashing Mag.
    I did the optimization stuff back in 2011 and then left it for good :P I mean, we all tend to become lazy. Then you posted on Facebook about your 99 score on Google PageSpeed, and it hit me that I should bring my client’s site, which I was designing at the moment, up to the same scorecard – and guess what, it only took me 3 days and voilà: 99 on both mobile and PC, with a good deal of JS files. Thanks for inspiring us to work more and better.

    4
  33. 39

    Thank you for the detail and transparency, this is an invaluable post on just how much effort is required to bring performance into the heart of the development process. A huge effort, well done!

    I like the emphasis you’ve put on WebPagetest and in particular SpeedIndex as a measure of the impact of performance on user experience.

    You can also see the impact your changes have had over time here…

    http://speedcurve.com/share/39tfnozeq94p1o0hndk1kpbg4vb7cg/6/a/a/90/render/

    That’s an impressive improvement in start render time which means people are seeing content a lot earlier.

    3
  34. 40

    Happy Birthday and thank you for the detailed insights!

    2
  35. 41

    Your site is superfast. I was wondering about that even before the article, and now you have shared your experience, thanks!

    1
  36. 42

    Great work, guys, and very interesting results!

    It looks like the site is optimized for the US east / Europe west regions only, because it still takes about 10 seconds to load the content from Asian regions.
    And there are still minor issues related to the touch experience, and the font size is quite small.

    See test results there https://shots.testize.com/Results/Shared/e345d0b0-1a46-49d7-8843-24dd5fb8dfe6

    1
  37. 43

    I was definitely one of your readers who left the site (angrily) after the redesign. Half the page dedicated to adverts and the rest to articles (in column format) just pissed me off.

    The earlier design was good for me because I was able to get to the articles fast. The new design (even to this day) took that away.

    Still, a good article is a good article and this is what your magazine continues to showcase. I can easily overlook this redesign because of the quality and learning potential of your articles.

    I recently returned to Smashing because there are so few sites in this genre that talk about tried-and-true methods of web development and design.

    No. I will not ingratiate myself by saying that I like the current design (look of the site) because I don’t. What I like are the articles and what I am learning from them.

    Using the latest version of Firefox, your pages don’t seem to load any faster than before. What was taken away (for me) was the above-the-fold presence of your latest articles.

    I understand adverts pay the bills. Fortunately your huge archive of very good articles ebbs my dissatisfaction at changing the look of the site when the internals were causing the performance problems.

    In any case and as usual, thanks for sharing your optimization tips and experience because this is some good stuff.

    2
  38. 44

    Hi, thank you, Vitaly, for this interesting post. Can we know how you organised your CSS files and with which method you optimised this code? It would be great if you could provide this information. ;)

    Thanks

    2
  39. 45

    Dmitri Tcherbadji

    September 9, 2014 10:49 am

    Happy Birthday! Surviving the wild west of Interwebs for this long is a huge accomplishment in my eyes. Better yet, this publication has changed tremendously how I work and how I think of the web.

    My personal feedback regarding the speed improvements on the site: it definitely feels much snappier. To add, I have recently moved out of an apartment with fibre optics that generates speeds higher than my Wi-Fi connection can handle (Canada) to an apartment in Thailand, which (although very nice) has an extremely unreliable connection.

    Next speed improvement I would suggest is caching your external CSS file better (perhaps using the same setup as you do for Fonts) – or rethink how your comments are loaded. The issue is that I will come back to this article in a few hours to check whether anyone has added comments to the conversation. I will get here by going to smashingmagazine.com and clicking on the link that points straight to comments. Once I do that I am being instantly (which is awesome) jumped to a section of comments on this page that does not have its styles loaded (not so awesome). Moreover, as the page loads the comments keep on jumping up, out of my reach (which is distracting).

    Once again, thank you for all your work Vitaly and all of the SM team and contributors! I owe you big time for all the things you’ve taught me over the years.

    Cheers,
    -d.

    1
    • 46

      Hey Dmitri,

      we’ve taken care of this issue with the comments.
      Thanks for pointing it out. =)

      0
  40. 47

    Really great article, thank you! I wonder: besides the “above-the-fold” CSS and the “full” CSS, did you modularize some CSS (or JS) for particular pages, or did you concatenate all files and deliver one big file? (E.g. the old question of many small files with better caching but more requests vs. one big file with worse caching but fewer requests.) Thank you.

    0
    • 48

      Hi Pipo,

      the idea behind the way we load CSS and JS is this:

      – CSS that’s needed while rendering goes inline in the head section,
      – all JS that’s needed very early goes inline right after the footer,
      – any JS that is not needed instantly is concatenated into one big file and delayed until the onload event has fired (one JS file contains, among others, the scripts for plugins and a script for loading the print stylesheet; a second JS file is for Prism, our syntax highlighter),
      – the script for our ads is separated from the rest because it comes in earlier than the onload event; also, on some pages we don’t want to have ads, and then we can simply disable it with a PHP on/off switch.

      So, to answer your question: we opted for one big JS file that is loaded after the onload event and then cached, and we separated out only those parts we need earlier. For CSS, I put the critical part inline in the head, delayed the main stylesheet until after the content has rendered, and delayed the print stylesheet until after the onload event.
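      As a rough illustration only – the file names below are placeholders, not our actual build output – the deferral boils down to a tiny helper like this:

      // Inject a stylesheet or a script without blocking rendering.
      function loadCSS(href) {
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = href;
        document.head.appendChild(link);
      }

      function loadJS(src) {
        var script = document.createElement('script');
        script.src = src;
        script.async = true;
        document.body.appendChild(script);
      }

      // Everything that isn't needed for the first render waits for onload.
      window.addEventListener('load', function () {
        loadCSS('/css/print.css');       // print styles
        loadJS('/js/non-critical.js');   // concatenated plugin scripts, Prism, etc.
      });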

      Any more questions?

      3
  41. 49

    Thank you for writing up the insights and lessons learned on your way to mastering the daily performance challenges.

    1
  42. 50

    Awesome write up! Smashing Magazine standards.

    Very nice to read about the progress as you optimise the site. You can almost feel the familiarity with decisions like those :)

    Thanks.

    1
  43. 51

    Nice write up!

    I’m a noob so I might be missing something, but why isn’t it proposed that other things like CSS get loaded the way the picturefill spec does it?

    1
  44. 52

    Wow, 8 years… Happy B-day, Smashing Magazine!
    Keep up with the great work, lots of success always!
    Greetings from Switzerland ;-)

    3
  45. 53

    Happy Birthday! You helped soooo much especially when I was in school, Smashing Magazine taught me more than my own professors. Looking forward to our future relationship.

    0
  46. 54

    Daniel Schwarz

    September 9, 2014 4:01 pm

    First of all, happy birthday and well done on the smashing article!

    I’ve been following your performance tips for a while now, mainly so I can employ the tricks on http://airwalk-design.com. Sadly, I haven’t been able to remove jQuery and other plugins yet – I don’t know that I ever will – but I have managed to achieve some impressive load stats (according to Pingdom Tools, time: 910 ms, size: 1.3 MB, requests: 60) – impressive because it’s an image-based inspiration website that was undoubtedly going to have a huge impact on load times.

    But these stats largely depend on the server, which isn’t the best. Next step is to improve server response time, TTFB, and move to some dedicated hosting.

    Great advice :)

    1
  47. 55

    I did notice that your files are on the same server. Why not use a CDN like css.smashingmagazine.com / js.smashingmagazine.com and keep some files on a different server, to reduce load on the main server and also allow the browser to download more files at the same time (a browser will only download a limited number of files at the same time from the same host)? The CDN server should be running Nginx for a faster response, as there is no PHP processing involved in serving static files.

    mod_pagespeed – yes, that’s a good one to use. Don’t forget that there are two flavours: the mod for the server that you can install yourself, and the Google-hosted version. Most people will only use the mod, but for big sites like yours the hosted version would be one to look into as well.
    Here are some settings to get you started; put them into your apache.conf file. You will also need to create a cache directory for PageSpeed.

    ModPagespeed On
    ModPagespeedInheritVHostConfig on
    ModPagespeedFileCachePath "/var/cache/mod_pagespeed/"
    ModPagespeedEnableFilters combine_css,combine_javascript

    AddOutputFilterByType MOD_PAGESPEED_OUTPUT_FILTER text/html
    ModPagespeedPreserveUrlRelativity on

    <Location /mod_pagespeed_statistics>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1
        SetHandler mod_pagespeed_statistics
    </Location>

    ModPagespeedMessageBufferSize 100000

    <Location /mod_pagespeed_message>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1
        SetHandler mod_pagespeed_message
    </Location>

    <Location /pagespeed_console>
        Order allow,deny
        # This can be configured similarly to mod_pagespeed_statistics above.
        Allow from localhost
        Allow from 127.0.0.1
        SetHandler pagespeed_console
    </Location>

    Don’t forget the classics – gzip/deflate compression and expires headers – which help to reduce the amount of content transferred (and Varnish cache if you’re good with a server).

    For those not wanting to work with WordPress who know their way around a server, I would recommend looking at a Linux server with RubyGems + Nginx + Jekyll: just .html files pre-generated by Jekyll from a template, with NO database involved, and then a service like Disqus for the comments.

    0
  48. 56

    Great article. Kudos to you guys for digging in deep and addressing the issues while not compromising your content, readers or advertisers. Optimization can be obsessive, but with every millisecond you save it’s validating! Great tips for websites of any size.

    0
  49. 57

    Nice job on the optimizations, and thanks for documenting the journey thus far.

    0
  50. 58

    Patrick Meenan

    September 9, 2014 8:54 pm

    Great work and awesome write-up. I’m somewhat biased but I love the focus on the render performance of the end-user experience. Reading into some of the metrics that you tracked it sounds like you have a mix of RUM (real-user beacons) in addition to the synthetic WebPagetest testing. That’s great because you can validate your testing and dev work against what the actual user experiences. The state of caching in browsers is somewhat disappointing though.

    A few random things to consider as you move to working on the TTFB and CDN phases:

    – First, install an APM product to track the back-end timings. I’m a huge fan of New Relic but there are several good ones out there. That will tell you exactly where the time is going for all of your actual traffic and give you something to optimize against (avoid premature optimization, measure twice – cut once and all that).

    – SSD all the things. Seriously, before doing any tuning, caching or other dev work. If you’re not already running SSDs for everything that does I/O it is a relatively inexpensive change that completely eliminates entire classes of problems. SSDs and web/database go together beautifully and the several orders of magnitude increase in IOPS can make caching layers unnecessary.

    – When/if you add a CDN, be careful how you reference your external resources. The classic model is to keep your dynamic content on the main domain which bypasses the CDN and use cdn.smashingmagazine.com (or some static domain) for the CDN resources. You might be better off working with a CDN that will also route your dynamic requests and keep everything on a single domain. A separate static domain adds another DNS lookup and socket connect before the request can be issued and if you move to SPDY it doesn’t benefit as much. Most browsers open 2 connections to the base domain right away so serving from the same domain gives you a good head start. Most CDNs will have no problem serving both the static and dynamic content.

    – I’m sure you’re already aware but SPDY means HTTPS so while you will gain from SPDY’s improved pipelining you have the cost of the TLS connection to make up for. Make sure you (or your CDN) has that well-optimized or things can go very bad very quickly (Ilya Grigorik has a great talk that he did at Velocity over the summer on the topic: https://www.youtube.com/watch?v=0EB7zh_7UE4 ).

    WebPagetest is a radically different app from a content site so take this with a grain of salt and as completely anecdotal but a few years ago I did a rebuild to get it to scale a lot better and the server response times (TTFB) dropped by an order of magnitude (500+ms to 20-50ms). That includes both the forums which are php/mysql and the app itself which is all php.

    The biggest improvements came from:

    – Moved from Apache prefork/mod_php + apc to Nginx + php-fpm + apc. I’m sure I could have tuned Apache a lot better and used some form of worker + fcgi but prefork sucked when combined with persistent connections and Nginx scaled insanely well. This didn’t really change the response times but it made the server capable of scaling to much higher levels.

    – Switched from an HDD RAID 1 array to an SSD RAID 1 array. This was responsible for upwards of 80% of the gains, and this is using consumer SSDs from a few years ago. Today’s drives have at least double the IOPS and can do even better. I still use HDDs for long-term results storage, but everything that users interact with on a regular basis comes from SSD.

    – Tuned the server code based on the New Relic monitoring. It was basically just a cycle of “hey, that entry point is using the most amount of server time” -> “add some instrumentation” -> “optimize some code” -> repeat. This improved raw response times a bit in aggregate but it also helped the scalability considerably. By eliminating the really expensive operations I cut the server utilization by close to 90%.

    I’m really looking forward to seeing what you can pull off with the back-end work. There’s a huge amount of WordPress out there and not a lot of practical information on tuning it for performance (at least as a large-scale CMS).

    8
  51. 59

    Happy birthday, Vitaly, and many thanks for working with us – and big thanks, too, for the praise for the Inpsyde team. Maybe we’ll see each other next time – maybe in Sofia?

    4
  52. 60

    Great wrap up of a massive clean up of your website. It’s great to see what a large company with knowledge, a budget and best practices does to optimise their website rather than the quick and dirty technique the rest of us have to employ.

    0
  53. 61

    Great case study, and I’m thankful to the author for sharing it. I’ve really been looking for this kind of case study to improve my site’s performance, and SM has improved a lot recently. It’s really helpful for others.

    0
  54. 62

    Happy Birthday.

    I am wondering how we can do what you did in a .NET environment (ASP.NET).

    Any tips?

    Thanks.

    2
    • 63

      Otto van der Schaaf

      April 23, 2015 2:57 pm

      There is http://iispeed.com/ which ports mod_pagespeed. It will automatically attempt to solve any recommendations from PageSpeed Insights (including prioritising visible content).

      0
  55. 64

    Happy birthday and keep up the good work!

    I managed to achieve 100% on Pingdom and Google Page Speed (both mobile and desktop versions) on my personal site (medium complexity of content: http://wpy.me/). Btw, you should consider the Pingdom recommendation too in order to increase the performance of your delivered content: http://tools.pingdom.com/fpt/#!/eCv5DC/https://www.smashingmagazine.com/

    The biggest challenge involved Google Fonts and removing render-blocking JavaScript & CSS. The next thing I have in mind is to separate the CSS used for rendering the above-the-fold content from the main CSS, but this part is really painstaking.

    I really recommend developing with optimization in mind in order to save precious time on post-project optimization!

    Happy optimization and stay 100%! :)

    8
  56. 65

    Great article. Happy Birthday!

    0
  57. 66

    One of the best articles I’ve ever read! Thanks for sharing this experience with us :)

    0
  58. 67

    Really Nice

    0
  59. 68

    Whether you use them or not, WordPress auto-generates a few versions of each newly uploaded image (thumbnail, large, etc.). It doesn’t seem like a problem at first, but when we are talking about 1000+ images, it gets big. Optimised right, it can certainly save a lot of space for sites like Smashing Magazine.

    Recently we were able to bring a 2GB WordPress installation down to 200MB. No kidding.

    0
    • 69

      Markus Seyfferth

      September 11, 2014 12:51 pm

      We’re also using the plugin Remove Old Revisions to keep the database small and tidy. The plugin works in the background and removes all revisions older than n-days. Nifty!

      0
  60. 70

    I am sure I have the worst internet worldwide :( – most of the time EDGE, sometimes HSDPA and UMTS in some weird configurations.
    Smashing is not a fast-loading website.
    Articles would be ready – but they wait for the ads to load; after that the button design comes; after that the articles’ “read more” links become clickable…
    The “old” Smashing was much, much faster, sorry to say.
    The change in style – mostly the fonts; why do you use such needless typography for headings and text? – slowed the website down massively.

    4
  61. 71

    Happy birthday SmashingMag!! Impressive work on optimisation and brilliant story of it. Well done!

    PS took me three days of work travel time to finish this article. :)

    1
  62. 72

    Pooyan Khosravi

    September 12, 2014 3:09 am

    Use Varnish for locally caching article pages in memory. This will reduce server render time from 400ms to 0.1ms.
    Use CloudFlare and its Rail Gun as a CDN. This will dramatically reduce network time and time to first byte.

    Also if you need a custom host built for your website, I will be honored to do it.

    1
  63. 73

    Happy Birthday, Smashing! I’ve learned a great deal reading the articles and purchasing the books from here!

    Keep up the quality!

    0
  64. 74

    Happy Birthday

    0
  65. 75

    This must be article of the year!!

    1
  66. 76

    Fantastic read, in depth, lots of other resources to follow through. 10/10

    Congrats on eight successful trips around the sun! :-)

    0
  67. 77

    I would like to share my opinion about mobile browsing.
    To me, mobile is actually better suited for long reads, because the line length on responsive websites is just right for reading. However, the font size here is somewhat too small, plus the font in use is Proxima Nova, which is hard to read at 16px, and therefore the reading experience is awful not only on the desktop but also on the phone. Compare Medium or DesignModo, both of which I would prefer to read.

    4
  68. 78

    Great writeup! Thanks for sharing your optimization-journey, a lot to learn / implement here…

    0
  69. 79

    Wow, well done!

    thanks for the in-depth article. Did learn something here! :)

    keep up the good work.

    bye

    0
  70. 80

    satya brat pandey

    September 16, 2014 12:27 pm

    Thanks for the update.
    This is a really great article for learning purposes.

    Regards,
    Satya Brat Pandey

    0
  71. 81

    Did anyone else run their own Google Page Speed test and not get the same results? I got 96/100.
    Obviously Google’s ranking factors have been altered since then.

    0
    • 82

      Hello Bradley,

      Google did not change the tool lately (the latest change involved how they rate unscaled images); the changing score was related to some of our ads being poorly compressed and delivered via third-party scripts.

      The score should be fine by now, yet as soon as an ad partner decides to use some third-party code from anywhere and that code ain’t optimized, we’ll be back down to somewhere around 95 (100 currently ^_^).

      0
  72. 83

    Thanks a lot for the nice article

    0
  73. 84

    Wow, your website is huge. What I really try to do is build websites that are based on a good foundation initially so that upgrades in future won’t be a hassle. I now understand how important it is to get a good start from the beginning.

    2
  74. 85

    A simple word count tool application http://rapidwordcounter.com/

    1
  75. 86

    Yes, I have used .htaccess for compressing my site fat-cow.net, and I got 88/100 for my site (desktop version).

    0
    • 87

      Hello Fat-Cow,

      interestingly enough, we don’t even have an .htaccess file at all. =D
      No Apache – no .htaccess. =)

      0
  76. 88

    Kieren Trinder

    November 23, 2014 9:56 pm

    Great article! I’ve been performing a lot of WordPress optimisation as well and fully agree – it’s not about when the page itself has finished rendering every element; it’s the perceived loading time that’s important.

    0
  77. 89

    Any more detail on the unreliable browser caching of fonts? If the issue is limited only to font files, and CSS files *are* reliably cached, then the localStorage step is pointless – just base64 encode them into CSS and let the browser take it from there.
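    For reference, the localStorage variant under discussion boils down to roughly this sketch (the storage key and stylesheet URL are placeholders):

    // Fetch the base64 @font-face stylesheet once, keep it in localStorage,
    // and inject it from there on repeat visits instead of re-requesting it.
    (function () {
      var key = 'font-css';

      function inject(css) {
        var style = document.createElement('style');
        style.textContent = css;
        document.head.appendChild(style);
      }

      try {
        var cached = localStorage.getItem(key);
        if (cached) {
          inject(cached);
          return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/css/fonts-base64.css', true);
        xhr.onload = function () {
          if (xhr.status === 200) {
            inject(xhr.responseText);
            try {
              localStorage.setItem(key, xhr.responseText);
            } catch (err) {
              // quota exceeded – simply skip caching
            }
          }
        };
        xhr.send();
      } catch (e) {
        // localStorage unavailable (private mode, etc.):
        // fall back to a regular stylesheet request.
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = '/css/fonts-base64.css';
        document.head.appendChild(link);
      }
    })();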

    0
  78. 90

    Thank you for this article :-) I feel there is a good book in web performance, with case studies like this one and The Guardian’s. It would also be cool if we could find some case studies for full-on web apps rather than all content providers.

    0
  79. 91

    How did you get the inline style to be displayed right after your <title> in WordPress?

    I tried with:

    add_action( 'wp_head', 'rv_add_header_css' );
    function rv_add_header_css() { ?>………………………………….

    but so far this puts the style almost at the end of my <head>.

    0
    • 92

      Hey Daniel,

      What I basically did is manually insert the inline style tag in our header template and then add all the CSS for “above the fold”.

      That’s a wonderful thing in the first place, but one should keep in mind not to overdo it. Keep the CSS in that inline section as short as possible.

      I hope that answers your question. =)

      1
    • 93

      howdy daniel!

      Actually, we’ve moved the “above-the-fold” styles, which should be inline in the head section, to a CSS file of its own. This is registered with WordPress via wp_register_style() as usual, to make managing the dependencies easier.

      Additionally, we’re using the style_loader_tag filter to fetch this handle and insert it inline instead of loading it externally via a link tag.

      Last but not least: we’ve changed the order of execution of wp_print_styles to 1 (directly after the title) in the wp_head action.

      15
  80. 94

    Andrew Welch

    March 29, 2015 9:25 pm

    Interesting article… but running Google PageSpeed Insights on this article itself reveals that there must have been some regression going forward.

    https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.smashingmagazine.com%2F2014%2F09%2F08%2Fimproving-smashing-magazine-performance-case-study%2F&tab=mobile

    Results are 66/100 for mobile, and 73/100 for desktop.

    Largely it seems to be due to the lack of a cache expiration on gravatar images, and failure to optimize other images used on the site.

    This in itself is a useful lesson for web professionals: the job is never done, optimization is an ongoing process requiring vigilance and constant re-testing.

    0
  81. 96

    No standard caching plugins? Is it because of the advertising?

    1
