The story of usability is a perverse journey from simplicity to complexity. That’s right, from simplicity to complexity—not the other way around.
If you expect a “user-friendly” introduction to usability, and that the history of usability is full of well-defined concepts and lean methods, you’re in for a surprise. Usability is a messy, ill-defined, and downright confusing concept. The more you think about it—or practice it—the more confusing it becomes. As we will see, the history of usability is a “perverse journey from simplicity to complexity”.
What Is Usability?
Early Roots of Usability
If we go far back in history, Vitruvius (1st century BC) was probably the first person to set forth systematic and elaborate principles of design. His three core design principles became very influential:
- Firmitas: The strength and durability of the design;
- Utilitas: A design’s usefulness and suitability for the needs of its intended users;
- Venustas: The beauty of the design.
Vitruvius’ work was an inspiration to people like Leonardo da Vinci, who drew the well-known Vitruvian Man (fig. 1 below). By empirically measuring and calculating the proportions of the human body, and emphasising the “utilitas” principle, Vitruvius may be considered the first student of ergonomics and usability.
Figure 1: The Vitruvian Man drawing was created by Leonardo da Vinci circa 1487 based on the work of Vitruvius.
The discipline of usability is also rooted in the discipline called Human Factors, which started as military personnel asked themselves the very morbid question:
“What design do we need to kill more enemies by better matching soldier and weapon, and thus avoid being killed ourselves?”
Both World War I and World War II fueled research into Human Factors. When designing artillery cannons, for example, usability yielded greater precision, more kills, and shorter training of personnel.
Thus, military designers could extract some very concrete usability metrics. For example:
- How quickly will a new crew member learn how to use the artillery cannon (now that the former crew member is dead)?
- How many rounds per minute (ordnance) is the cannon able to fire with an inexperienced versus an experienced crew?
- How will improving the design of the cannon improve target acquisition (and thus kill more enemies)?
- How does a design improvement decrease soldier fatigue (as a consequence of a lighter cognitive load)?
Figure 2: Photograph of the French 320 mm railway gun Cyclone, taken in Belgium in 1917. The gun required not only trained personnel to fire it, but also trained personnel to drive it.
Figure 3: A 155 mm artillery shell fired by a United States 11th Marine Regiment’s M-198 howitzer. The setup time (and thus usability) is essential as anti-artillery weapons necessitate that the position of the howitzer be changed very quickly after firing.
Recent Roots of Usability
The concept of usability has its more recent and direct origins in the falling prices of computers in the 1980s, when for the first time it was feasible for many employees to have their own personal computer. In the 80s, most computer users had practically no, or only basic, training on operating systems and applications software. However, software design practices continued to implicitly assume knowledgeable and competent users, who would be familiar with technical vocabularies and system architectures, and also possess an aptitude for solving problems arising from computer usage.
Such implicit assumptions rapidly became unacceptable. For the average user, interactive computing became associated with constant frustrations and consequent anxieties. Computers were obviously too hard to use for most users, and often simply impractical. Usability thus became a key goal for the design of any interactive software that would not be used by trained technical computer specialists.
The current understanding of usability is different from the early days in the 1980s. Usability used to be a dominant concept but this changed with research increasingly focused on usage contexts. Usage quality no longer appeared to be a simple issue of how inherently usable an interactive system was, but how well it fitted its context of use.
Usability Evaluation: What’s Good And What’s Bad?
Usability is a contested historical term that is difficult to replace. User experience specialists have to refer to usability, since it is a strongly established concept within the IT landscape. In simplified terms, usability work is about finding out what’s good and what’s bad. However, when we examine the hundreds of usability evaluation methods in use, we do see that different approaches to usability result in differences over the causes of good and poor usability. That may sound complicated but let’s take two different approaches to usability:
- If you think usability is a feature of an interactive system, your approach to usability may be called essentialist—i.e. poor/good usability resides in the “essence” of the system. You will typically find yourself saying things like “that website is not user-friendly”, “a website or system has poor usability when there is no visibility of system status”, “I can reliably measure which website has the best usability”, etc. This means you think that all causes of user performance are due to technology. In that case, you will typically use system-centered usability inspection methods to identify such causes.
- On the other hand, if you think usability is a feature of the interaction between user, computer and context, your approach to usability is contextual—i.e. depending on the context. This means that you think questions of user performance have different causalities: some due to technologies, others due to some aspect(s) of usage contexts, but most due to interactions between both. Several evaluation and other methods may be needed to identify and relate a nexus of causes. You will often find yourself saying vague things like “that depends…”, or “well… this website checkout procedure is great for a fact-oriented, middle-aged male buyer, but not for an impatient teenager making the purchase on a smartphone while sitting on the bus”.
The reason I mention the essentialist/contextual distinction is that anyone involved with usability should—ideally—be able to say “this website/technology/system is good, that one is bad”. After all, isn’t that what your client or boss is paying you to do?
To answer whether the usability of a website is good or bad, you have to employ a usability method. And your choice of usability method will depend on your approach to usability—whether you admit it or not. Maybe you’ll deny it and say, “I’ve never heard of any essentialist/contextual approaches to usability.” However, that would be like selling French wine without ever having spent time in a French vineyard. You can do it, but at some point your client or boss will start asking questions you can’t answer. Or your decisions will have unexpected side-effects.
So what usability method should you choose to determine “what’s good and what’s bad”?
Analytical And Empirical Evaluation Methods
Usability work can involve a mix of methods and the mix can be guided by high level distinctions between methods, for example the distinction between analytical and empirical evaluation methods.
- Analytical evaluation methods are based on examination of an interactive system and/or potential interactions with it. You analyse the system or analyse the interaction with the system.
- Empirical evaluation methods, on the other hand, are based on actual usage data.
Analytical evaluation methods may be system-centered, like Jakob Nielsen’s Heuristic Evaluation, or interaction-centered, like the Cognitive Walkthrough method. Design teams use the resources provided by a method (e.g. heuristics) to identify strong and weak elements of a design from a usability perspective.
Inspection methods, the main family of analytical evaluation methods, tend to focus on the causes of good or poor usability. They come in two types:
- System-centered inspection methods focus solely on software and hardware features, examining attributes that will promote or obstruct usability.
- Interaction-centered methods consider two or more causal factors (e.g. software features, user characteristics, task demands, or other contextual factors).
Figure 4: Jakob Nielsen’s Heuristic Evaluation is a good example of an analytical evaluation method. Heuristic Evaluation became the most popular user-centered design approach in the 1990s, but has become less prominent with the move away from desktop applications.
Empirical evaluation methods focus on evidence of good or poor usability, i.e. the positive or negative effects of attributes of software, hardware, user capabilities and usage environments. User testing is the principal project-focused method. It uses project-specific resources such as test tasks, users, and also measuring instruments to expose usability problems that can arise in use. Also, empirical experiments can be used to demonstrate superior usability arising from user interface components (e.g. text entry on mobile phones) or to optimise tuning parameters (e.g. timings of animations for windows opening and closing).
Such experiments assume that the test tasks, users and contexts allow generalisation to other users, tasks and contexts. Such assumptions are readily broken, e.g. when users are very young or elderly, or have impaired movement or perception.
How To Balance The Mix: Why Is It Difficult?
Achieving a balanced mix of evaluation methods is not straightforward, and involves more than simply combining analytical and empirical methods. This is because there is more to usability work than simply choosing and using methods.
Evaluation methods are as complete as a Chicken Fajita Kit, which contains very little of what is actually needed to make Chicken Fajitas: no chicken, no onion, no peppers, no cooking oil, etc. Similarly, user testing ‘methods’ miss out equally vital ingredients and project-specific resources such as participant recruitment criteria, screening questionnaires, consent forms, test task selection criteria, test (de)briefing scripts, target thresholds, data analysis methods, or reporting formats.
There is no complete published user testing method that novices can pick up and use ‘as is’. All user testing requires extensive project-specific planning and implementation. Much usability work is about configuring and combining methods for project-specific use.
The Only Methods Are The Ones That You Complete Yourselves
When planning usability work, it is important to recognise that so-called ‘methods’ are really loose collections of resources, better understood as ‘approaches’. There is much work in getting usability work to work, and as with all knowledge-based work, methods cannot simply be copied from books and applied without a strong understanding of the fundamental underlying concepts. Every method is used in a unique setting that requires project-specific resources. For user testing, for example, these include participant recruitment, test procedures and (de-)briefings. There are no universal measures of usability that are relevant to every software development project.
The Position So Far
- There are fundamental differences on the nature of usability, i.e. it is either an inherent property of interactive systems, or an emergent property of usage. There is no single definitive answer to what usability ‘is’.
- There are no universal measures of usability and no fixed thresholds above or below which all interactive systems are or are not usable. There are no universal, robust, objective and reliable metrics. All positions here involve hard won expertise, judgement calls, and project-specific resources beyond what all documented evaluation methods provide.
- Usability work is too complex and project-specific to admit generalised methods. What are called ‘methods’ are more realistically ‘approaches’ that provide loose sets of resources that need to be adapted and configured on a project by project basis.
Readers could reasonably conclude from the above that usability is an attractive idea in principle with limited real-world application. However, the reality is that we all continue to experience frustrations when using interactive digital technologies, and most of us would say that we find them difficult to use at times. Even so, frustrating user experiences may not be due to some single abstract construct called ‘usability’, but may instead be the result of unique, complex interactions between people, technology and usage contexts.
Usability In The Design Room
In well-directed design teams, there will not be enough work for a pure usability specialist. This is evidenced by a broadening, over the last decade, from usability to user experience expertise. User experience work focuses on both positive and negative value, both during usage and after it. A sole focus on the negative aspects of interactive experiences is becoming rarer. Useful measures of usage are extending beyond the mostly cognitive problem measures of 1980s usability to include positive and negative affect, attitudes and values, e.g. fun, trust, and self-affirmation. The coupling between evaluation and design is being improved by user experience specialists with design competences.
Many user experience professionals have also developed specific competences in areas such as brand experience, trust markers, search experience/optimisation, usable security and privacy, game experience, self and identity, and human values. We can see two trends here. The first involves complementing human-centered expertise with strong understandings of specific technologies such as search and security. The second involves a broadening of human-centered expertise to include business competences (e.g. branding) and humanistic psychological approaches (e.g. phenomenology, meaning and value).
As such, a usability person is not a lone judge who makes the call “is this usable?” Instead, the usability proficient person will often be a team player taking on many roles in the product development lifecycle.
Figure 6 A-B. From Solo Specialist to Team Member: User Experience—as opposed to Usability—as an integrated part of design teams. Copyright of leftmost picture: Flickr user ‘Tety’ through the Creative Commons Attribution Share-Alike 2.0 licence
Conclusion: Are You Confused?
This Smashing Magazine article is a digested version of the monstrous 25,000-word encyclopedic introduction to the history of Usability Evaluation at Interaction-Design.org. It’s available in a special version for Smashing readers.
Did this article confuse you more than it informed you? Well, it should! If you want an answer to the question of “what is usability?”, it’s just as complicated as asking “what is beauty?” The more you think about it, the less you feel you know. I don’t believe you will ever find the answer to the question “what is usability?”, but I nevertheless hope you—as I—will continue to ask that very question. And be confused again and again. Confusion is—after all—the mother of wisdom.
The concept of usability is a product of millions of designers trying for decades to describe what they are doing to make technology easier and more pleasant. As such, the concept is immensely complex. It may have started out as a simple concept but the more people who are involved with usability, the more multi-faceted the concept will become. Therefore, the history of usability is a journey from simplicity to complexity. Is that journey worth the effort? Certainly! Anyone who masters usability—and all its facets—has a power position when it comes to designing easy/pleasant/cool/useful/usable/etc. technology. As confusing as that endeavour may be!