We’ve come a long way since the days of the first Macintosh and the introduction of graphical user interfaces, going from monochrome displays to millions of colors, from unfamiliar mice to intuitive touchscreens, from scroll bars to pinch, zoom, flick and pan. But while hardware, software and the people who use technology have all advanced dramatically over the past two decades, our approach to designing interfaces has not. Advanced technology is not just indistinguishable from magic (as Arthur C. Clarke said); it also empowers us and becomes a transparent part of our lives. While our software products have certainly empowered us tremendously, the ways by which we let interfaces integrate with our lives have remained stagnant for all these years.
In the accessibility industry, the word “inclusive” is relatively commonplace; but inclusive design principles should not be reserved for the realm of accessibility alone, because they apply to far more people than “just” those with disabilities. Interface designers frequently think in binary terms: either all of the interface is in front of you or none of it is. But people are not binary. People aren’t either fully disabled or not at all, just like they aren’t merely old or young, dumb or smart, tall or short. People sit along a vast spectrum of who they are and what they are like; the same is true when they use interfaces, except that this spectrum is one of expertise, familiarity, skill, expectations and so on.
So, why do we keep creating interfaces that ignore all of this? It’s time for us to get rid of these binary propositions!
What Is “Inclusive” In The World At Large?
In the world at large — meaning not one particular industry, country or demographic — the term “inclusive” applies to cultures in which, for example, women are as welcome to contribute their opinion as men are; in which a person’s race or sexual orientation has no bearing on their acceptance by a group; in which everyone feels safe and comfortable, and no one feels suppressed, stymied or silenced; in other words, a culture of equal opportunity. But when we apply the term to interfaces, it doesn’t mean that interfaces should be equal for everyone; rather, it means that they should be equally accessible to everyone, and equally empowering no matter what the user’s skill level or familiarity. When these two aspects are combined, the product gets the best of both worlds: it is accessible to more people, without being compromised for advanced users.
An excellent example of software that has done this well is in the video game genre, going back as far as 1985 with Nintendo’s Super Mario Bros. It was a game that truly anyone could pick up and play, with an invisible interface that taught you everything you needed to know to get started and become good at it. The screen would only scroll right, so you couldn’t walk left. You could jump, but standing on top of special bricks did nothing, so you would try to jump against them from below. Pipes visibly led down, so you’d try your luck with the down arrow on the direction pad. And at the end of the level, the bonus flag was raised high, encouraging competitive players to jump to the very peak for top points. All of the game’s mechanics were explained in one level, without a single instruction, tutorial or guiding word.
Few games since 1985 have embraced this principle to any significant degree. Super Mario Bros. truly was a game whose interface was equally empowering; meaning, the interface and product magnified the results of your efforts based on the (skill) level of your input. Put differently, beginners would see good results from their efforts, while advanced users would see far greater results from theirs. These principles aren’t limited to video game design either; they apply just as much to software applications and productivity tools, even websites! So, let’s start with some simple inclusive design concepts.
Language And Aesthetics
Language has an impact on everything, because it is the primary way we communicate as a species. Its significance is also frequently overlooked; a Duke University study revealed that gendered language in job listings affects a job’s appeal, independent of the type of job. There’s more: while not a single participant in the study picked up on the gendered language, each of them did find the listings more or less appealing as a result. This raises the question: how much of an impact does the language chosen for our designs have on the number of new users who sign up or the number of customers we convince to purchase our products? No good study in this area seems to exist or be readily available, but one study (of a sort) that is available is the W3C’s own resource on people’s names around the world and their effect on form design. Let’s call it a good start and do more research into how language shapes the Web.
But language is just one factor that we don’t take into consideration as often as we should. Aesthetics play a significant role as well, yet there is a lot more to aesthetics than taste and general appeal. The placement of elements, whether shapes are angular or rounded, and our use of color all affect how different genders, demographics and cultures respond to interfaces. Because no one color scheme will please everyone all the world over, the more international our (targeted) audiences are, the more fully designed our localizations will need to be.
Interface Design Legacies
In the world of interface design, being inclusive means being accepting and welcoming of the many different cognitive skills and levels of expertise among users. Historically, we have striven for the perfect middle ground between approachable and empowering. Making interfaces more intuitive plays a significant role in that process, but it often demands that we dumb interfaces down (i.e. remove features), which can be undesirable for the advanced user who wants more functionality or control. With more comprehensive interfaces, a frequent “solution” to this problem is to allow users to customize the interface to their needs. But is this truly empowering? Research shows that fewer than 5% of people ever change default settings, which casts serious doubt on customization and settings as a path to empowerment.
Earlier, I mentioned how most interfaces offer a binary proposition: either the application is open or it isn’t. When it’s open, the entire user interface (UI) is typically available to you, whether or not you need all of it. This makes sense from a historical perspective—when all we had were physical interfaces—but it makes little sense with our modern software ones, especially since most software interfaces are far more comprehensive than a typical hardware interface.
When Steve Jobs announced the iPhone at Macworld in 2007, he compared the yet-to-be-revealed iPhone to popular smartphones of the time, noting their main problem as being “the bottom 40%” — i.e. the hardware buttons on all of those devices. The buttons were there “whether you need them or not.” The solution, according to Apple, was a large touchscreen with fully software-based UI controls. That way, each application’s interface could be optimally designed for its particular purpose.
The point Apple made along the way was that sticking to convention is a bad idea if you want to move an industry forward. Hardware buttons used to be all a phone had. Then, they were used to supplement a tiny screen. The iPhone showed that, when it comes to innovation in interfaces, the screen should be the full surface, a blank canvas onto which software could paint any interface. The unparalleled success of the iPhone suggests that Apple has proven their point well.
But as fantastic as the iPhone may have been compared to the smartphones before it, it still suffered from this same binary UI problem. The iPhone merely shifted the problem from being device-wide to being specific to individual applications, and then it masked the remaining issues by removing features or hiding them in drill-down views, until one very elegant, simplified UI remained for each app — one that lacked the ability to become more sophisticated for users who wanted, or needed, more.
To pilots, this is a familiar view. To others, it is a smörgåsbord of buttons. Image Source: Julien Haler
To be clear, removing features is not in itself a negative. Most interfaces get better from the process, because every visible feature, every UI control adds to the overall cognitive load of the user. Think, for instance, of an airplane cockpit and its countless little controls, dials and meters covering every surface. If you are not a pilot, the mere sight of it would overwhelm you. To an experienced pilot, however, it is simply what they need in order to fly the plane. Is this really the best we can do, though? Super Mario Bros. showed us we can do better.
In software, we have a situation that calls for the kind of innovation I’m talking about. As it is, more complicated, advanced and powerful applications feature more complex interfaces, and some can be downright overwhelming to first-time users. But not everyone wants to fly a plane — some of us are just trying to get some simple work done. Application developers try to alleviate this problem with tutorials, guided tours, help screens and overlays that explain each aspect of the UI; a great solution these things are not. What we need are better interfaces, interfaces that understand that we are human beings with different needs. What we need are…

Inclusive Interfaces
Interface designers with an eye on accessibility have long focused most of their efforts on the technical challenges faced by users. Many commentators have encouraged us to consider cognitive (or learning) disabilities as one part of the broader area of (Web) accessibility, but rarely has anyone explained how to do this. Additionally, when someone sees the term “cognitive disability,” they understandably think of people with intellectual disabilities. But cognitive ability covers a huge range, and it doesn’t sit on a single linear scale: a quantum physicist might have a tough time figuring out how to use a feature phone, whereas the average teenager would have no problem with it.
People invest in an application (and, thus, its interface) in varying degrees, depending on how important the product is to their daily lives. This means that your interface should cater to varying degrees of investment in addition to differing levels of expertise and familiarity.
In an interface, each additional UI element increases complexity and asks for a deeper investment on the user’s part. This is why invisible interfaces (like the one in Super Mario) are so powerful: an interface that appears only when needed reduces the cognitive load, reduces the investment required to understand the product, and makes it easier for the user to focus on the task at hand. A button that is relevant only in certain contexts should be visible only in those contexts.
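The principle of context-dependent visibility can be sketched in code. The following is a minimal illustration, not from any real framework; the `Control` shape, the context names and the `visibleControls` helper are all hypothetical:

```typescript
// Hypothetical sketch: render a control only in the contexts where it applies.
// An empty `contexts` list means the control is always relevant.

interface Control {
  id: string;
  contexts: string[];
}

const controls: Control[] = [
  { id: "save", contexts: [] },                     // always useful
  { id: "crop", contexts: ["image-selected"] },     // only when an image is selected
  { id: "spellcheck", contexts: ["text-editing"] }, // only while editing text
];

// Return the ids of controls that should be shown given the active contexts.
function visibleControls(all: Control[], activeContexts: string[]): string[] {
  return all
    .filter(c =>
      c.contexts.length === 0 ||
      c.contexts.some(ctx => activeContexts.includes(ctx)))
    .map(c => c.id);
}
```

With this sketch, `visibleControls(controls, ["text-editing"])` would surface only the save and spellcheck controls, keeping the crop tool out of sight (and out of mind) until an image is actually selected.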
But we can take this principle to a level even beyond that. An interface that is truly inclusive of all kinds of users is one that begins with only the fundamentals and then evolves and adapts alongside the user. During this process, the interface can both grow and decay, acquiring more features and controls as the user becomes more fluent in using it, and dropping or reducing the prominence of UI controls that the user does not use much, if at all.
Doing this automatically also makes more sense than offering the user a large number of options to customize the UI, for two reasons: first, users shouldn’t be expected to spend a lot of time making an interface usable to them; second, people might not always know exactly what they want, but their behavior might make clear what they need. A system that intelligently measures what the user needs in order to deliver the most efficient, effective yet still understandable interface could make this possible. A highly effective interface is one that can be changed not to how each user wants it, but to how each user needs it.
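To make the “grow and decay” idea concrete, here is a deliberately simple sketch of an interface element that promotes features as they get used and lets untouched ones fade away. The class name, feature names and thresholds are all invented for illustration:

```typescript
// Hypothetical sketch of an interface that grows and decays with use.
// Thresholds (1 use, 5 uses) are arbitrary illustrative values.

type Prominence = "hidden" | "secondary" | "primary";

class AdaptiveMenu {
  private usage = new Map<string, number>();

  constructor(features: string[]) {
    for (const f of features) this.usage.set(f, 0);
  }

  // Record one use of a feature.
  recordUse(feature: string): void {
    this.usage.set(feature, (this.usage.get(feature) ?? 0) + 1);
  }

  // Frequently used features are promoted; untouched ones stay out of the way.
  prominence(feature: string): Prominence {
    const count = this.usage.get(feature) ?? 0;
    if (count >= 5) return "primary";
    if (count >= 1) return "secondary";
    return "hidden";
  }
}
```

A real implementation would also decay counts over time so that a feature used heavily last year doesn’t crowd out what the user needs today, but the core idea is the same: prominence follows demonstrated need, not a settings panel.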
Of course, measuring the cognitive skill of a user is difficult, and even then it can only be approximated. Certain aspects of the user’s behavior can be measured, which helps to inform us about how familiar the user is with the interface overall and how fluent they are in using it. The speed with which a user navigates an interface and uses or explores its features is a good metric for how comfortable they are with the interface. The frequency of their use of “Help” and “Undo” features hints at their confidence level. Users of keyboard shortcuts are almost certainly looking for more powerful features, and someone who uses quotes and OR in their search queries is likely technically minded. These and many other measurable aspects of people’s behavior can help shape your application’s interface, which can then be adapted to better suit the needs of users.
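Combining such signals into a rough fluency estimate might look something like the sketch below. Every weight and threshold here is an assumption made up for illustration; a real system would tune them against observed behavior:

```typescript
// Hypothetical sketch: fold behavioral signals into a rough proficiency
// estimate in [0, 1]. All weights and cutoffs are illustrative guesses.

interface BehaviorSignals {
  avgSecondsPerAction: number;       // navigation speed in the interface
  helpOpensPerHour: number;          // reliance on the "Help" feature
  undosPerHour: number;              // frequent undo suggests less confidence
  usesKeyboardShortcuts: boolean;    // power-user signal
  usesAdvancedSearchSyntax: boolean; // quotes, OR operators, etc.
}

function proficiencyScore(s: BehaviorSignals): number {
  let score = 0.5; // start from a neutral assumption
  if (s.avgSecondsPerAction < 2) score += 0.2; // fast, confident navigation
  if (s.helpOpensPerHour > 3) score -= 0.2;    // heavy reliance on Help
  if (s.undosPerHour > 10) score -= 0.1;       // lots of backtracking
  if (s.usesKeyboardShortcuts) score += 0.2;
  if (s.usesAdvancedSearchSyntax) score += 0.1;
  return Math.min(1, Math.max(0, score));      // clamp to [0, 1]
}
```

The interface could then use the score to decide how much of itself to reveal: below some threshold, stay minimal; above it, gradually surface the advanced controls.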
This is not the end of the story; rather, it is only the beginning. Tony Fadell’s Nest Thermostat is a great example of an adaptive interface in the real world. It learns from your behavior patterns as you go about your daily and weekly routines, and it becomes predictive, so that you need to adjust the thermostat less frequently the more you use it.
That’s but one example. The possibilities open up even more with inclusive and adaptive interfaces. One type of user might need Feature A very frequently, whereas another might need Feature B instead; a truly inclusive interface would adapt to these needs and be equally powerful for these two different types of users.
We’ve overcome the various technical challenges of interfaces and designs through Web standards, accessibility and ARIA, responsive Web design principles and touchscreen devices. But we have focused so much on these technical challenges that we’ve almost lost sight of innovating the human aspects of interface and design. The next stage of evolution for our industry is to explore how to make our applications and products more inclusive, taking into account the vast spectrum of differences in our audience, and to make our interfaces smarter so that they serve a wider range of people more effectively. Let our exploration of inclusive design begin!