Bandwidth Media Queries? We Don’t Need ’Em!


From time to time, when a discussion is taking place about ways to implement responsive images, someone comes along and says, “Hey, guys! What we really need is a media query that enables us to send high-resolution images to people on a fast connection and low-resolution images to people on a slow connection.” At least early on, a lot of people agreed.

At first glance, this makes a lot of sense. High-resolution images have a significant performance cost, because they take longer to download. On a slow network connection, that cost can have a negative impact on the user’s experience. Users might prefer low-resolution images if it means that pages will download significantly faster. On the other hand, for users on a high-speed connection, the performance cost of delivering high-resolution images diminishes, and users would probably prefer better-quality images in this case.

If only we had a media query (and the HTML element that allows us to use that media query) that enables Web developers to have that degree of control over served images as a function of bandwidth, life would be peachy.

As it turns out, accurately implementing such a dream media query is not a whole lot easier than implementing a machine that can accurately predict how much it will rain two weeks from next Tuesday. And even if it were possible to implement, its side effects would result in a worse user experience, rather than a better one.

What Is Bandwidth?

Since we’re discussing bandwidth, let’s pause for a minute to define it.

When network operators publish their network speed, what they’re usually referring to is the amount of raw data that can be sent on the radio band, usually in ideal conditions.

That number has little relevance to Web developers. The bandwidth that we care about is the “effective” bandwidth. That bandwidth is usually a function of several factors: the network’s bandwidth, its delay, packet loss rate, and the protocol used to download the data. In our case, the protocol in question is TCP.

TCP1 is the dominant protocol for the reliable transfer of data on the Internet. The protocol itself has a certain overhead that results in a difference between the theoretical and the effective bandwidth. But another factor is responsible for an even more significant gap between the two. TCP doesn’t always use all of the bandwidth available to it, simply because it doesn’t know the bandwidth’s size.

Sending more data than the available bandwidth can cause network congestion2, which has disastrous implications on the network. Therefore, TCP uses a couple of congestion control mechanisms: slow-start3 and congestion avoidance4.

Each TCP connection is created with a very limited bandwidth “allowance,” called a congestion window5. The congestion window grows exponentially over time, until the TCP connection sends enough data to “fill the pipe.” That is the slow-start phase, which can take up to several seconds, depending on the network’s delay. Then, TCP periodically tries to send just a little more, to see if it can increase the amount of data it sends without causing congestion. That is the congestion-avoidance phase.

Media query download tests6
Is network speed really the issue in responsive Web design? More often, the problem is that images are downloaded when they aren’t supposed to be. Image credit: Elliot Jay Stocks7.

Both of these mechanisms assume that packet loss is always caused by congestion; so, packet loss results in a significant reduction to the congestion window, as well as to the rate at which data is sent. On wireless and mobile networks, this behavior is not always ideal because they can suffer from packet loss for reasons other than congestion (for example, poor reception), which can result in under-utilization of the network.
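To get a feel for how long slow-start alone keeps a new connection below the available bandwidth, here is a rough back-of-the-envelope sketch (a simplified model only, assuming a typical 1,460-byte segment, an initial window of ten segments and a clean doubling per round trip; real TCP stacks differ):

    // Simplified model of TCP slow-start (a sketch, not a real implementation):
    // how many round trips does a fresh connection need before its congestion
    // window is large enough to use a given amount of bandwidth?
    function roundTripsToFillPipe(bandwidthMbps, rttMs, initialSegments) {
      var segmentBytes = 1460;                                      // typical TCP segment payload
      var bdpBytes = (bandwidthMbps * 1e6 / 8) * (rttMs / 1000);    // bandwidth-delay product
      var cwndBytes = (initialSegments || 10) * segmentBytes;       // initial congestion window
      var roundTrips = 0;
      while (cwndBytes < bdpBytes) {
        cwndBytes *= 2;   // slow-start roughly doubles the window every round trip
        roundTrips += 1;
      }
      return roundTrips;
    }

    // A 5 Mbps link with a 200 ms round-trip time needs about 4 round trips
    // (roughly 800 ms) before the connection stops underestimating it.
    console.log(roundTripsToFillPipe(5, 200));

Any bandwidth estimate taken during those first round trips will, therefore, be far below what the link can actually deliver.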

Bandwidth Changes

Bandwidth, by its very nature, is variable over time. During the downloading of a single resource, the bandwidth of a mobile user can change significantly for multiple reasons:

  • The user has switched cells or moved to an area with different cellular coverage.
  • Other users have moved into the cell that the user is in.
  • Network congestion on the server side of the network has caused the effective bandwidth to drop.

The bandwidth of Wi-Fi users can also vary widely. This is mainly due to packet loss, which has a significant impact on effective bandwidth, because TCP considers packet loss a result of congestion.

The bandwidth of desktop users can also vary depending on their connection type, although not likely as significantly as for mobile and Wi-Fi users. ADSL can suffer from weather conditions; cable can suffer from uplink sharing; and other networks could have their own share of problems.

One more thing. When we think about the user’s bandwidth, we tend to think of the user’s connection to the Internet; but the connection of the Web server to the Internet, its load and its proximity to the user can also have a significant impact on the effective bandwidth that the browser sees. So, another variable is at play here: the Web server itself. When we say that we want to measure bandwidth, are we talking about the bandwidth between the user and the Web server, or just the radio link’s bandwidth? It all depends on where the bandwidth bottleneck is.

This variability is the major reason why making predictions about future bandwidth is likely to be highly inaccurate and error prone.

Measuring Bandwidth Is Hard

OK, so predicting bandwidth is complicated, but measuring it must be easy, right? After all, the browser is already downloading resources. It knows their sizes and how long it took to download them — the number of bits downloaded divided by the time it took to download them. How hard can it be?

Well, if you want to measure bandwidth accurately, it is kinda hard. The above calculation is true when you download a large file over a single warmed-up TCP connection. That is rarely the case.

Let’s look at a typical scenario of loading a Web page, shall we?

  1. Initial HTML page is downloaded.
    During this phase, most of the time, the browser downloads the initial HTML page on a new TCP connection. That new TCP connection needs to be careful not to send more data than the physical link can handle, so it uses its slow-start mechanism. This means that, during this phase, if the browser needs to measure the effective bandwidth it’s got, it will significantly underestimate the available bandwidth. When we’re discussing bandwidth media queries, this phase is the critical one. When the browser needs to decide which resources to download according to the media query, this measurement is most likely the only bandwidth measurement that the browser will have with this particular server.
  2. CSS and JavaScript external resources are loaded.
    During this phase, the browser has a collection of new TCP connections, all in their slow-start phase, and they are not all necessarily to the same destination server. Again, estimating bandwidth in this phase is not straightforward.
  3. Images are loaded.
    Here the browser has multiple connections, each one downloading a resource. The problem is that these connections are not always in the same phase of their life cycle. Some might be in the slow-start phase; some may have suffered a packet loss and, thus, reduced their window and the bandwidth they are trying to fill; and some might be warmed-up TCP connections, ready to fill the bandwidth. These TCP connections are not necessarily all to the same destination server, and the bandwidth towards the various destination servers might be different between one another.

So, estimating bandwidth is possible, but it is far from simple, and it is possible only during certain phases of the page-loading process. And because having several TCP connections to various destination servers is common (for example, a CDN could host the image resources of a Web page), we cannot really tell which of these bandwidths is the one we want to measure.
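To make that concrete, the naive “bits divided by time” calculation is easy enough to write down with the Resource Timing API (a sketch only; transferSize is not exposed in every browser, and cached or cross-origin resources may report zero), but the per-resource numbers it produces mix all of the phases above rather than measuring one well-defined bandwidth:

    // Naive per-resource "effective bandwidth" estimates (a sketch).
    function naiveBandwidthEstimates() {
      return performance.getEntriesByType('resource')
        .filter(function (entry) {
          return entry.transferSize > 0 && entry.duration > 0;
        })
        .map(function (entry) {
          return {
            name: entry.name,
            // bits downloaded divided by the time it took, in Mbps
            mbps: (entry.transferSize * 8) / (entry.duration / 1000) / 1e6
          };
        });
    }

    // Small files that finish while their connection is still in slow-start
    // make the link look far slower than it really is.
    console.table(naiveBandwidthEstimates());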

Media Query Is An Order

As far as the browser is concerned, a media query is a direct order with which the browser must comply. It has no room for optimizations. It cannot avoid obeying it, even when it makes absolutely no sense. Let’s explore what that means for bandwidth media queries.

Let’s assume for a minute that browsers actually do have an accurate way to measure the current bandwidth and that a solution for responsive images is in place and can use a bandwidth media query.

Latency is the network killer. It can significantly slow down the loading of a responsive website, especially if it has loads of images. Image credit: Andy Chung9.

The natural thing to do, then, would be to define a responsive image with multiple sources — a high-resolution image to use when the bandwidth exceeds a certain value, and a low-resolution image when the bandwidth is low. The browser loading the page that contains the image would then download the high-resolution image, since it has good bandwidth conditions.
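For illustration only, such markup might have looked something like this (note that min-bandwidth is a purely hypothetical media feature that was never specified, and the picture syntax here is only loosely based on the proposal of the time):

    <!-- Hypothetical markup: there is no "min-bandwidth" media feature;
         this only illustrates the scenario described above. -->
    <picture>
      <source media="(min-bandwidth: 2mbps)" srcset="photo-high-res.jpg">
      <source srcset="photo-low-res.jpg">
      <img src="photo-low-res.jpg" alt="A photo">
    </picture>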

Let’s say that, after a few seconds, the bandwidth conditions change for some reason. The browser (sharp at detecting bandwidth as it is) will immediately detect that the bandwidth is down. The browser is now obligated to download the lower-resolution image and replace the high-resolution one with it. The result is a useless download of the low-resolution image and a worse user experience, because the quality of the image that the user sees is worse than it could be. Even if this happens to only a low percentage of users, that’s bad news.

Now, you could argue that the browser could optimize and use the version it already has. But the fact of the matter is, it can’t; not with media queries — at least not without changing the whole meaning of them.

So, What’s The Alternative?

Well, if media queries can’t be used for bandwidth detection, is there no hope? Shouldn’t we, as Web developers, be able to define image resolution according to the available bandwidth?

I hate to say it, but we shouldn’t.

Web developers should be able to define image resources of various resolutions, which would later be used by the browser to perform heuristic optimizations. This can be done with a declarative syntax, similar to the syntax of the srcset responsive images proposal10. The syntax would define the various resources and would hint to the browser which image to use according to various criteria, such as screen density and screen size. The browser would then use these hints, but with room left open for further optimizations (related to bandwidth, user preference, data plan costs, you name it).
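As a rough sketch, loosely based on the examples in the srcset draft (the file names here are made up), such a hint-based declaration might look like this, leaving the browser free to pick a lighter candidate when bandwidth, data cost or user preference calls for it:

    <img src="banner.jpg"
         srcset="banner-hd.jpg 2x, banner-phone.jpg 640w, banner-phone-hd.jpg 640w 2x"
         alt="The page banner">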

While I have reservations about the srcset proposal, and many agree that the syntax it uses is confusing, the declarative model is better when bandwidth optimizations (and other browser heuristics) are involved.

Why Do We Need Media Queries, Then?

Media queries are necessary when the browser must not be allowed to use heuristics to pick the right image resource to present. One of the best examples of this (and an illustration of one of the biggest shortcomings of the srcset proposal) is the art direction use case11. This is the use case where the proposed picture element12 really shines.

The “Art Direction” use case. Image credit: W3C14.
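For the art direction case, a sketch loosely following the proposed picture markup (breakpoints and file names are invented) makes the difference clear: the author explicitly decides which crop is shown at each viewport size, so a media query, an order rather than a hint, is exactly what is needed:

    <picture>
      <source media="(min-width: 45em)" srcset="hero-wide.jpg">
      <source media="(min-width: 18em)" srcset="hero-cropped.jpg">
      <source srcset="hero-tight-crop.jpg">
      <!-- Fallback for browsers that don't support the picture element -->
      <img src="hero-tight-crop.jpg" alt="Hero image">
    </picture>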

This is the reason why we need both approaches15 in order to achieve a well-rounded responsive-images solution that covers all use cases.

Network Measurements

Don’t get me wrong. I’m not saying that we should give up on network measurement as a whole on the Web platform.

Work is being done on the network information API16 to figure out how to expose network characteristics to Web developers. In the work being done on this specification, one of the major issues is — surprise! — how to measure bandwidth.

I believe that browser makers will find a way one day to measure bandwidth at the end of the initial page load in a semi-accurate way, which would be useful for various progressive-enhancement decisions. But we’re not there yet.
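When that day comes, the natural place for such information is a progressive-enhancement hook rather than a media query. Here is a sketch of what that could look like with the network information API; the attribute names have changed between drafts (early versions exposed bandwidth and metered, later ones downlink and effectiveType) and support varies, so everything is feature-detected:

    // Progressive enhancement based on the (draft) network information API.
    var connection = navigator.connection ||
                     navigator.mozConnection ||
                     navigator.webkitConnection;

    if (connection && (connection.effectiveType === '4g' || connection.downlink > 5)) {
      // Only opt in to heavier extras when the browser itself reports a
      // fast connection; the default experience stays lightweight.
      document.documentElement.className += ' fast-connection';
    }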

Conclusion

The main reason why Web developers want bandwidth media queries is to be able to serve image resolutions according to their users’ network conditions. Basically, they want to have a say in the trade-off between beautiful images and slow page loading.

Unfortunately, this doesn’t seem to be something that can be accurately implemented in the near future. Even if it could be implemented, because a media query is an order, it would in many cases force the downloading of multiple resources for the same image, resulting in a worse user experience. Bandwidth optimizations are better left to the browser, which knows the user, their preferences and their network conditions better than we do.

(al)

Footnotes

  1. http://en.wikipedia.org/wiki/Transmission_Control_Protocol
  2. http://en.wikipedia.org/wiki/Network_congestion
  3. http://en.wikipedia.org/wiki/Slow-start
  4. http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm
  5. http://en.wikipedia.org/wiki/Congestion_window
  6. http://www.flickr.com/photos/elliotjaystocks/7073360897
  7. http://www.flickr.com/photos/elliotjaystocks/7073360897
  8. http://blog.andychung.ca/post/2818324434/a-responsive-redesign
  9. http://blog.andychung.ca/post/2818324434/a-responsive-redesign
  10. http://dev.w3.org/html5/srcset/
  11. http://usecases.responsiveimages.org/#art-direction
  12. http://picture.responsiveimages.org/
  13. http://usecases.responsiveimages.org/#art-direction
  14. http://usecases.responsiveimages.org/#art-direction
  15. http://www.w3.org/community/respimg/2012/06/18/florians-compromise/
  16. http://www.w3.org/TR/netinfo-api/


Yoav Weiss is a developer who likes to get his hands dirty fiddling with various layers of the Web platform stack. Constantly striving towards a faster Web, he's trying to make the world a better place, one Web performance issue at a time. He recently prototyped the picture element in a Chromium build as part of the Responsive Images Community Group. You can follow his rants on Twitter or have a peek at his latest prototypes on GitHub.

Comments
  1. 1

    Wouldn’t you just love it if you wanted to save that beautiful high-res picture as your phone’s background? Oh wait, it’s being served at 100×200. That’s weird, I would have thought they would have uploaded a bigger image.

    And tough, you are stuck with that version. In the same way that some sites hide data on mobiles. What if you want that data?

    I still think building a separate mobile site and giving the user a choice is the best route.

    • 2

      I don’t think it’s so concrete either way. The choice to go with a single responsive site or a separate mobile site should, like all things, depend on the needs of your project and the people intended to consume it. If your site has a lot of text-based content and minimal imagery, it wouldn’t really make sense to have a separate mobile site only meant to serve the same content.

      Besides that, the mobile web/desktop web line is being blurred every single day as more devices are made cheaper to produce and own. For some, their smartphone or tablet is the only way they access the web, and they do it from everywhere. Responsive design isn’t about imposing a new standard. Although, I’ll admit with the tablet explosion it’s becoming a case of adapt or die. If you build a mobile site, but your metrics show a LOT of mobile users going to your desktop site anyway, that may be a good indicator to go responsive.

      That is unless your mobile site offers content and capabilities that are optimized for mobile. However, RESS can accomplish much the same thing. I’ve never seen responsive design as an ultimatum. It’s not “you should do it this way” but more “maybe it’s time to rethink how things are done”. No one’s going to fault you for serving a mobile site and desktop site separately if you do it well, but the cost vs. benefit is quickly skewing toward being prohibitive.

  2. 3

    Great post Yoav, very insightful. I’ve always thought the “bandwidth media query idea” was unrealistic due to the variability of bandwidth, and this article really cements that idea.

    I also like the idea of a declarative syntax used to help browsers perform their own heuristic optimizations. I would say the declarative syntax should be optional, so developers can use it if they do not need the precise control of media queries (i.e. for art direction).

    I guess my only other thought would be whether the declarative syntax should be inside the picture element or a separate entity. With Florian’s compromise, combining the srcset syntax with the picture element markup, the picture element is becoming verbose. I’m not sure whether an optional declarative syntax inside the picture element would be too much.

  3. 4

    Great post. I couldn’t help but think of some of the concepts used to help performance in games. Some game textures include several sizes of the image in 1 image file. The game engine / video card loads the appropriate size based on how large the polygon model is drawn on the screen at that time. Each detail level is called a ‘mip map’ and is usually a power of 2. (1024×1024, 512×512, 256×256, etc). The experience is completely transparent to the gamer.

    Seems to be some of the same intentions, just a wildly different approach. The result is the same in both cases though – the speedy and transparent delivery of a picture.

  4. 5

    Yes, I was never fully sold on a proper use case for bandwidth media queries. There are more variables at stake than just bandwidth. Some users want to minimize their data usage (since they end up paying money and time per KB) while others have unlimited (data, not time). Why should developers have the control to force them to download a very large image just because they have a “fast” connection?

    I think/hope the answer lies in a better image format (similar to progressive JPEGs) with multiple resolutions included, and smarter browsers/networks, so that the decision about the resolution needed is made and then that portion of the asset is requested, downloaded and displayed. Possible? Meh?

    • 6

      This seems like the best solution to me. You could then have a switch on your browser to ‘only use low rez’ etc which would be handy. New image formats please…

    • 7

      I completely agree, my thoughts drew towards this scenario while reading through the article. I think a user preference to define your data usage could be useful, hard to implement though.

  5. 8

    The server is the one that has to know what the bandwidth is because it’s the one that decides whether to send out a hi-res or low-res image.

    Why not have the server query the client with a ping and the client reply on a high-priority basis? The server can then calculate the bandwidth and choose an image appropriately.

    This would probably call for both server and client to have the appropriate probe-response routines installed. The code would be simple. Getting the server and browser communities to agree on such a mechanism would be tough, though :-)

    • 9

      It doesn’t matter if you look at the connection from the client or the server – the problems with measuring bandwidth described in the article still apply. The network and routing are the key, not the ‘role’ of the hosts on each endpoint involved, and unless the network can estimate the bandwidth for a specific route by itself (which is, by the very nature of the internet, still temporary), bandwidth detection is out of the responsive equation.

      • 10

        uh…

        “It doesn’t matter if you look at the connection from the client or the server…”

        really? Does someone need to post a SYN, SYN-ACK, ACK diagram on this thread? Of COURSE the server has EVERYTHING to do with measuring bandwidth. That is why we call it a server. Networks only consist of internet pipes, and it is not a network’s role to “estimate” or even touch a packet (unless you are the NSA). The network feeds into the router or switch, the router or switch goes into the firewall (if one exists), and all connect to a server of some sort or another. One can only measure bandwidth between server and client. I’m not even going to describe what issues might arise in a more clustered environment.

        The role of the client is only to consume data from a server. If you want an image to resize, then use imagemagick, or talk to a programmer. There…end of article.

        • 11

          Tim, apart from unnecessarily insinuating that there was a lack of knowledge about TCP on my side, I was referring to the statement that the server “has to know” the bandwidth and that it could “calculate” the bandwidth.

          Of course (only) the *sender* can estimate bandwidth and throughput by reacting to the recipient’s incoming (or missing) responses over time. Since both the server and the client can be sender as well as recipient, the viewpoint on the connection is not the important factor (because of changing roles), but the fact that in adaptively routed networks the ‘network components’ (meaning the participating nodes in a specific, temporary route) are the only ones that could theoretically know the available bandwidth.
          Due to these routes being temporary, neither the ‘network’, the client nor the server can know precisely which bandwidth and throughput are potentially available beforehand.

          Which is the precise point of this article and my response.

  6. 12

    Could not the adaptive bitrate functionality from video be used to test connectivity and send the appropriate file depending on bandwidth?

  7. 13

    We designers and front-end “coders” are over-complicating everything to the point of no return.

  8. 14

    Great article. To build on what Oliver Caldwell said:

    Just because I’m on a slow connection doesn’t necessarily mean I want quality sacrificed for speed. For example, my mobile carrier re-encodes images on the fly to “improve” performance and the result is often horrid. Now, some of this is down to poor implementation (re-encoding flat-colour PNGs to low-quality JPEG), but in some cases I’d be happy to wait longer for images to get better quality.

    Imagine I’m viewing a video on typical hotel wifi. If I’m watching the latest edition of “kitten falls off a thing”, I’d rather get that quickly at the expense of quality. However, if I’m watching “Anticipated high octane movie trailer II”, I want to watch it at high quality, and I’m prepared to wait for it.

    The use cases for adapting to bandwidth quickly boil down to setting sensible defaults, but the user should still be able to choose.

  9. 15

    Great post Yoav. One small nitpick: you’ve mentioned packet loss, on several occasions, as a big limiting factor for TCP performance, and while that’s absolutely true, it’s not the biggest problem for wireless networks. In fact, all wireless standards have lower-level retransmission mechanisms which hide the vast majority of the retransmissions from TCP – this creates variable latency, but it doesn’t induce “TCP packet loss”.

    If nothing else, the larger problem is simply the fact that radio communication runs over a shared medium. Hence, bandwidth allocation can and does change at very high frequency. In 4G, this can happen at millisecond granularity… rendering any “estimation” meaningless, unless we’re talking about capturing a full distribution.

    In any case, all of that aside. I’m 100% with you on the conclusion: don’t do it, it won’t do you any good. The only exception to this rule are large, streaming transfers – like video. But that’s an entirely different story.

    • 16

      Thanks!
      While technically you’re right that the lower layer in mobile networks handles its own retransmissions, from TCP’s perspective there is little difference between a highly delayed, out-of-order packet and a lost one.
      Often, by the time a lower-level retransmission occurs, the following TCP packets have already arrived at the receiver, resulting in duplicate ACKs being sent to the sender, and the “fast retransmit” mechanism deduces a packet loss and retransmits. In other cases, the retransmission timeout can also trigger a retransmission if the packet is delayed long enough.

      That is not to say that the lower layer retransmission mechanisms are never helpful. Unfortunately, TCP retransmissions are not a rare sight when examining wireless network captures.

      Regarding the bandwidth allocation changes, I agree completely.

  10. 17

    Oh how I wish web development these days were as simple as it was 10 years ago – two design choices, cater for 800×600 resolution or ignore it and only cater for 1024×768 resolution. One browser to code for as well – IE6!

    Front-end development is seriously getting difficult to keep up with. A job that previously would take X amount of time is now taking 3 times as long – the first multiplication being the time taken to do all the extra artwork for differing resolutions/screen sizes/DPIs, the second multiplication being the extra coding involved with media queries for these various devices, and now pixel density too.

    It really is a business area that is going the same way as computer game development has. 30 years ago it was possible to code a game in your bedroom, now you need a full production house with 100+ people and millions of dollars behind you. When I started web development 13 years ago it was no problem creating a site myself in a matter of days; whereas nowadays, our 4-man design agency struggles to juggle multiple jobs at once. Will it soon be that we need more manpower to simply work on a single site at any one time?

    This is all pushing web design/development out of reach of individuals, and is as a result preventing a lot of paying customers from getting decent (basic) websites. It’s unfortunate that as an agency, we now have to turn away lots of low budget jobs simply because we would not be able to do it properly for the amount offered (whereas before the “iPhone revolution” we could have).

    I know it’s all progress in the name of the greater good, but it’s just a real pain and a real shame that web development is no longer a ‘cottage/bedroom industry’.

    • 18

      I agree with you that the web, in general, is a mess. Even the author of HTML5 agrees on that.

      Having said that, there is a twisted advantage to this complexity: it makes web development a real skill again. Over the last few years it has become a “commodity”: cheap developers (or hobbyists) are widely available, so businesses go for the low-wage options.

      The new web, however, requires a senior, multidisciplinary developer, which means good jobs. In a way this is bad news for businesses, and good news for that type of developer.

      • 19

        @Ferdy Which is all very well, but it essentially removes the possibility of having a decent website from individuals. With that comes a loss of the independence and creativity that the web originally had, and it makes the whole thing more corporate. It may be inevitable, but I find it saddening nonetheless.

  11. 20

    We already have a system like this, and we’ve had it for something like 15-20 years. But anno 2013, it seems that the world has forgotten about it. It stores multiple resolution versions of an image in one file, and starts by showing the lowest resolution.

    I’m talking about interlaced GIF/PNGs and progressive JPEGs. These technologies show a low-resolution version of the image while the rest is being downloaded. When the download completes, you’re looking at the high-resolution version, but if the download takes a long time, you can meanwhile enjoy the low-resolution image.

    The only problem here is that the continued download of the high-resolution image data blocks other elements from downloading. It would be great if browsers could spread the loading of progressive images, for example by pausing the image download after the first low-resolution pass if that took more than a certain amount of time, and prioritize downloading the first pass of the remaining images on the page.

    • 21

      Indeed. JPEG2000, for example.

      I was involved 12-13 years ago in a project at Nokia that addressed all the issues of measuring bandwidth to devices accurately and dynamically and then selecting the appropriate image (resolution) in function of available capacity. Bandwidth was measured with an interesting algorithm that gave good results — it was measured neither on the terminal, nor on the server, but rather at a gateway between the wireless and wireline networks (which had interesting benefits regarding taking into account the different behaviours of each network, but alas only applicable by telecom operators). The project also evaluated JPEG2000 — which seemed very adequate for this kind of purpose.

    • 22

      Besides progressive JPEGs and interlaced PNG/GIFs,
      what about the Web community getting SVG support as a standard?

      Vector-based images would offer little advantage to photography, but would be a boon to graphic and illustrative elements. For tiny images, there may be a size gain, but a real benefit would be seen in larger images. And, being resolution-independent, would show no degradation on larger and high-resolution displays. Final display would be processed on the device, obviating the need for media queries for different versions of the same image…

      • 23

        I definitely agree about the use of SVG as the most appropriate element for most graphics that aren’t photos. Trouble is, for Windows, there exists nothing that would be comparable to Bohemian Coding’s wonderful tool Sketch. Illustrator and CorelDraw are vector tools, but making SVGs was tacked on and feels more than clunky in use. Inkscape, the sole dedicated SVG editor, suffers from maddening UI/UX issues.

        So it’s a matter of available tools, as usual.

  12. 24

    Very interesting article. This is the main hurdle of making totally responsive websites (rather than separate mobile sites). I must admit, bandwidth media queries would be awesome, but now I have a better understanding of why they aren’t feasible.

  13. 25

    Brett – I agree that the combination of picture and srcset is somewhat verbose, but I’m not sure how we can make the syntax terser without losing its semantics. If you have any ideas, I’d love to hear them.

    Kurt & Evan – I did some thinking regarding a responsive image format a while back (http://blog.yoav.ws/2012/05/Responsive-image-format & http://blog.yoav.ws/2012/08/Fetching-responsive-image-format).
    The main issue, besides having the various browser vendors agreeing on a format, is the fetching mechanism. There is no current way for the browser to fetch only the part of the image that it needs.
    A responsive image format may be a longer term solution to the resolution switching problem, not necessarily to the art direction problem.
    In any case, I believe that the search for an ideal long term solution, such as a responsive image format based solution, should not come at the expense of finding a good-enough short term solution to today’s responsive images needs.

  14. 26

    Also, the bandwidth media queries thing smacks of the same chaos that device-based media queries eventually brought us. It feels like going responsive only to confine ourselves to a digital canvas of viewports misses the point. I don’t think anyone’s holding a gun to our heads to do it, but in technology especially: you ignore progress at your own peril. I’m honestly rather excited about where the industry is going next. Come what may, even as this gets harder.

  15. 27

    Great article Yoav! I’m not a front-end developer, but it’s this type of thinking and conversation that has kept me reading Smashing for a long time. I’ll be sure to pass this along to my coworkers and colleagues.

  16. 28

    Great article. Without getting into all the tech and design nitty-gritties, I find it interesting that, on the one hand, everyone and her brother is preaching about how we fell for consensus hallucinations in the past by predicting the user’s environment, and how we should now design and build for the unpredictability of ‘the web’, taking the user-first approach, while on the other hand we try to do the same kind of fortune-telling by triangulating the user’s needs, and in doing so maybe patronizing the user.

  17. 29

    In my opinion, browsers should have image-enhancement capabilities by default, such that a low-resolution image would be enhanced by the browser automatically, like the software used by government agencies to enhance images. I think this is the best option in the long run.

    Today it’s HD- and Retina-capable devices we’re trying to cope with; who knows what’s coming next?

    Also keep in mind that not everyone has the same quality of Internet connection; Africa especially has the slowest connections.

    • 30

      My understanding is that “image enhancement” is mostly a fabrication of Hollywood. You can remove data from an image (like removing red eye), but you can’t add data that simply isn’t there.

      Of course, many TVs have “upscale to HD” functionality, which uses complex algorithms to detect shapes and faces to improve image quality. But we probably won’t see that sort of function built into mobile browsers, given the few uses for it.

  18. 31

    Why couldn’t browsers just send a header with their requests that provide the necessary data (like screen depth or even resolution)? I agree that it’s not really a problem that lives in the domain of HTTP, but it would be super easy to implement on both sides and allows the client easily to switch to save bandwidth.

  19. 32

    I feel the need to offer another solution to this bandwidth problem: Device detection and non-responsive design. If you want to truly create a low bandwidth, streamlined mobile web service then you really should be moving on to device detection and redirection. Once a device has been detected you can push them to a specific website offering complete mobile features based around the specifics of said device. 51Degrees.mobi offer this service for free, but there are others out there as well!

  20. 33

    As others suggest, I think we’re over complicating things.

    If you optimise your images well, you can strike a good enough balance that they download quickly and still look decent on high-PPI devices. I’ve browsed websites I’ve made on my Nexus 10, which has a higher PPI than the iPad, and honestly, unless you look closely, the issue of image quality is really not that big a deal.

    Some news website images do look bad, but I would say that’s a question of optimising your images correctly.

    When 4G coverage is more prevalent, that is when I will consider starting to serve higher-resolution images. My attitude is simple: I will support high-resolution, large-file-size images that take advantage of high-PPI screens… WHEN the infrastructure is there to support them.

    Until then, just optimise correctly, and stop worrying about it.

    • 34

      It’ll be years before 4G is prevalent anywhere outside of first-world countries, which is a really large portion of the Earth. Are we supposed to just leave those guys behind?

      • 35

        I’m sorry but I think you’ve misread my comment. I said I am going to wait until 4G is more prevalent before I think about serving up higher resolution images. Obviously that would be dependent on the specific site. So if I am looking at a website where 95% of traffic is from the western world, perhaps we are talking a few months or a year. If we’re talking a website where significantly more traffic is from poorer countries with slow infrastructures, then we could be waiting a while longer.

        I never said anything about leaving people without 4G behind. I said that personally I would be waiting for the infrastructure that I feel is required for large images, to exist before I start serving large images.

    • 36

      Odin Hørthe Omdal, January 10, 2013 7:27 am

      But the world is not that simple. You’re not building for the world wide web, but the western world web.

      Heck, even when I’m on a plane surfing around, many websites become unusable because they’re too heavy and the bandwidth/latency in airplanes is so bad.

      If you make your site work fast on a slow modem, then you’ve done good. At that point, add more stuff as needed.

      YouTube found out that after shaving their video pages from 1.2 MB to around 100k, new non-western markets started spending much more time on the site. http://blog.chriszacharias.com/page-weight-matters

      • 37

        Are you responding to me? If so, I think you will find we are actually in agreement. I’m saying the same thing, to not serve up heavy file size websites over slow connections.

        Where our opinions may differ is that my personal preference is to wait until connections have sped up (most notably mobile connections, and in this case 4G) before I start serving up higher resolution images, so as to ensure that the majority of people browsing my website do not have to wait long for the images to download.

        I am not saying for a second to disregard peoples connections. Quite the opposite. It is just that I think by the time we have agreed on this responsive images issue and implemented it, internet connections all over the world will have sped up significantly, and for most of us it just won’t be as much of an issue and certainly not something worth losing hair over.

        Also, Youtube is very fast, but as an advocate of performance and page optimisation, I’d say my efforts come pretty close http://tools.pingdom.com/fpt/#!/nR1Ulg8NF/http://jetbookingdirect.com/

  21. 38

    @Odin I think it’s the opposite, we need to build for the world wide web, not the western world web. The web is universal, we need to keep it this way instead of setting up walls to silo the web.

  22. 39

    From my perspective, why should we automate this process with lines of code and make the design bandwidth-responsive? The truth is that the users themselves know better how their bandwidth responds, and maybe it’s up to the browser manufacturers to provide a first-level choice on how the user wants to experience the web. Something as simple as low res, regular, high res. This way the browser can tell the site what to load. For video, YouTube is already detecting bandwidth automatically and switches between experiences. Although, since bandwidth sometimes pulsates in intensity as you move, switching over and over again between low and high res is annoying, in sound and video quality. It should be tested whether users prefer an alternating, adaptive, automatic experience or a lower-quality homogeneous experience without interruptions or alternating resolutions.

  23. 40

    If the user is in control of a click action to view a higher res image, surely this is the better option, regardless of bandwidth.

    At what point does an image become ‘high bandwidth’ ?
    800×600, 1024×768, 1280×1024 ?

    It is entirely possible to save out a 1024×768 image at under 50 KB, and common sense would dictate you’re not going to want to serve up more than 1 or 2 images of that size in a page request.

    Regarding responsive, the way I currently work is a ‘sweet spot’ where images are no larger than 800×600 and more often than not, thumbnail images no larger than 480×320.

  24. 41

    I personally think that the newly proposed picture element is too complex a solution to the image problem, for the simple reason that most web pages are not authored by developers/designers but rather by users using some kind of CMS, of which there are thousands. Even with the new element around, it will be years before browsers and ancient CMS systems support it.

    Here’s my take: as for graphics (icons and graphic effects) we should not use images at all and instead use icon fonts, SVG and CSS3. This will carve out many image needs already.

    Which leaves photos and illustrations. As for photos, they are usually JPEGs. Other than the progressive JPEG tip shared in another comment already, here’s a killer tip for JPEG handling: double its dimensions (from the original!) and recompress them at 60-80%. Result: a photo containing four times as much information as before, yet at an equal or smaller file size. One single image with a modest file size, yet having the quality to serve any device, including high PPI.

    And this works beautifully. Try it. You’ll forget about all this image switching madness instantly. The reason why this works? The artifacts introduced due to compression will scale down to become invisible.

    Example:
    You show a photo at 800 x 600 on a web page

    Take the high res original, and double the dimensions to 1600 x 1200 (4 times the pixels!). Next, recompress to 60-80%. You will now be at the file size (or lower) of the 800 x 600 file whilst having 4 times more pixels. Finally, show your 1600 x 1200 photo still in the same 800 x 600 image container and be done with it.

  25. 42

    How about letting users decide what resolution of images to download? Being able to set it on a per-site basis would be nifty.

    • 43

      Asking the user to choose an image size every time they enter a site is really annoying. (Imagine that idea implemented in the various websites that you visit.)
      Of course, you could store that information, but when the user switched from Wi-Fi to 3G it wouldn’t fit.

  26. 44

    Obviously the best solution is for everyone to buy a Mac, use firefox or safari, and only use iphone, iPad, iMacs and mbp’s.

    Enough of these second rate (“but it’s cheaper”) mobile and desktop devices that are made by companies that make washing machines and use all sorts of ridiculous screen sizes and ugly, illogical UI’s.

    sup

  27. 45

    When applying images to a mobile website, I have read a lot about making the images twice their original size and then cutting the height and width in half to obtain crisp-looking images on most mobile devices. Is this the right idea? This seems to do the trick when trying it; otherwise, my images are blurry. I have also been using the meta viewport tag based on the device width, which seems to work really well for me and also seems to be the most common approach.
