What Sci-Fi Tells Interaction Designers About Gestural Interfaces

One of the most famous interfaces in sci-fi is gestural — the precog scrubber interface used by the Precrime police force in Minority Report. With it, Detective John Anderton uses gestures to “scrub” through the video-like precognitive visions of psychic triplets. After observing a future crime, he rushes to the scene to prevent it and arrest the would-be perpetrator.

This interface is one of the most memorable things in a movie that is crowded with future technologies, and it is one of the most referenced interfaces in cinematic history. (In a quick and highly unscientific test, at the time of writing, we typed [sci-fi movie title] + “interface” into Google for each of the movies in the survey and compared the number of results. “Minority Report interface” returned 459,000 hits on Google, more than six times as many as the runner-up, which was “Star Trek interface” at 68,800.)

It’s fair to say that, to the layperson, the Minority Report interface is synonymous with “gestural interface.” John Underkoffler, the filmmakers’ primary technology consultant, had developed these ideas of gestural control and spatial interfaces in his research even before he consulted on the film, and he has since commercialized them through his company, Oblong Industries. The real-world version is a general-purpose platform for multiuser collaboration, available at nearly the same state of the art as portrayed in the film.

Though this article references Minority Report a number of times, two lessons are worth mentioning up front.

Figure 5.6a–b: Minority Report (2002)

Lesson: A Great Demo Can Hide Many Flaws.

Hollywood rumor has it that Tom Cruise, the actor playing John Anderton, needed continual breaks while shooting the scenes with the interface because it was exhausting. Few people can hold their hands above the level of their heart and move them around for any extended period. But these rests don’t appear in the film — a misleading omission for anyone who wants to use a similar interface for real tasks.

Although a film is not trying to be exhaustively detailed or to portray a technology accurately for sale, demos of real technologies often suffer the same challenge. A demo of an interface, and in this example of its gestural language, can be a misleading though highly effective tool for selling a solution, because it doesn’t need to demonstrate every use exhaustively.

Lesson: A Gestural Interface Should Understand Intent.

The second lesson comes from a scene in which Agent Danny Witwer enters the scrubbing room where Anderton is working and introduces himself while extending his hand. Being polite, Anderton reaches out to shake it. The computer interprets his change of hand position as a command, and Anderton watches as his work slides off the screen and is nearly lost. He must abandon the handshake to regain control of the interface and continue his work.

Figure 5.7a–d: Minority Report (2002)

One of the main problems with gestural interfaces is that the user’s body is the control mechanism, but the user intends to control the interface only part of the time. At other times, the user might be reaching out to shake someone’s hand, answer the phone or scratch an itch. The system must accommodate different modes: when the user’s gestures have meaning and when they don’t. This could be as simple as an on/off toggle switch somewhere, but the user would still have to reach to flip it.

Perhaps a pause command could be spoken, or a specific gesture reserved for the purpose. Perhaps the system could watch the direction of the user’s eyes and heed gestures only while he or she is looking at the screen. Whatever the solution, the signal is best carried on some other “channel,” so that this shift of intentional modality can happen smoothly and quickly, without the risk of issuing an unintended command.
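
To make that gaze-gated idea concrete, here is a minimal sketch in TypeScript. Everything in it is an assumption for illustration: the frame format, the hypothetical eye tracker feeding “gazeOnScreen,” and the “recognize” callback standing in for a real gesture recognizer.

```typescript
// Sketch: gate gesture recognition on a separate "intent" channel (gaze).
// All types and sensor inputs are hypothetical, for illustration only.

type Gesture = "push" | "turn" | "swipe" | "point";

interface Frame {
  handPositions: Array<[number, number, number]>; // x, y, z per tracked hand
  gazeOnScreen: boolean; // supplied by a separate, hypothetical eye tracker
}

interface Command {
  gesture: Gesture;
  at: number; // timestamp in milliseconds
}

function interpretFrame(
  frame: Frame,
  recognize: (hands: Frame["handPositions"]) => Gesture | null
): Command | null {
  // The "clutch": while the user is not looking at the display, hand
  // movement is treated as ordinary body language (a handshake, a phone
  // call, an itch) and never as a command.
  if (!frame.gazeOnScreen) return null;

  const gesture = recognize(frame.handPositions);
  return gesture === null ? null : { gesture, at: Date.now() };
}
```

The design point is that intent and command travel on different channels: the hands say what to do, and the eyes say whether the system should be listening at all.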

Gesture Is a Concept That Is Still Maturing.

What about other gestural interfaces? What do we see when we look at them? A handful of other examples of gestural interfaces are in the survey, dating as far back as 1951, but the bulk of them appear after 1998 (Figure 5.8).

Figure 5.8a–d: Chrysalis (2007); Lost in Space (1998); The Matrix Reloaded (2003); Sleep Dealer (2008)

Looking at this group, we see an input technology whose role is still maturing in sci-fi. A lot of variation is apparent, with only a few core similarities among them. Of course, these systems are used for a variety of purposes, including security, telesurgery, telecombat, hardware design, military intelligence operations and even offshored manual labor.

Most of the interfaces let their users interact with no additional hardware, but the Minority Report interface requires its users to don gloves with lights at the fingertips, as does the telesurgical interface in Chrysalis (see Figure 5.8a). We imagine that this was partially for visual appeal, but it certainly would make tracking the exact positions of the fingers easier for the computer.

Hollywood’s Pidgin

Although none of the properties in the survey takes pains to explain exactly what each gesture in a complex chain of gestural commands means, we can look at the cause and effect of what is shown on screen and piece together a basic gestural vocabulary. Only seven gestures are common across properties in the survey.

1. Wave to Activate

The first gesture is waving to activate a technology, as if to wake it up or gain its attention. To activate his spaceship’s interfaces in The Day the Earth Stood Still, Klaatu passes a flat hand above their translucent controls. In another example, Johnny Mnemonic waves to turn on a faucet in a bathroom, years before it became common in the real world (Figure 5.9).

Figure 5.9a–c: Johnny Mnemonic (1995)

2. Push to Move

To move an object, you interact with it in much the same way as you would in the physical world: fingers manipulate; palms and arms push. Virtual objects tend to have the resistance and stiffness of their real-world counterparts for these actions. Virtual gravity and momentum may be “turned on” for the duration of these gestures, even when they’re normally absent. Anderton does this in Minority Report, as discussed above, and we see it again in Iron Man 2 as Tony Stark moves a projection of his father’s theme park design (Figure 5.10).

Figure 5.10a–b: Iron Man 2 (2010)

3. Turn to Rotate

To turn objects, the user also interacts with the virtual thing as one would in the real world. Hands push opposite sides of an object in different directions around an axis and the object rotates. Dr. Simon Tam uses this gesture to examine the volumetric scan of his sister’s brain in an episode of Firefly (Figure 5.11).

Figure 5.11a–b: Firefly, “Ariel” (Episode 9, 2002)

4. Swipe to Dismiss

Dismissing objects involves swiping a hand away from the body, often forcefully, and sometimes without even looking in the direction of the push. In Johnny Mnemonic, Takahashi dismisses the videophone on his desk with an angry backhanded swipe (Figure 5.12). In Iron Man 2, Tony Stark dismisses uninteresting designs from his workspace with a forehanded swipe.

Figure 5.12a–c: Johnny Mnemonic (1995)

5. Point or Touch to Select

Users indicate options or objects with which they want to work by pointing a fingertip or touching them. District 9 shows the alien Christopher Johnson touching items in a volumetric display to select them (Figure 5.13a). In Chrysalis, Dr. Brügen must touch the organ to select it in her telesurgery interface (Figure 5.13b).

Figure 5.13a–b: District 9 (2009); Chrysalis (2007)

6. Extend the Hand to Shoot

Anyone who played cowboys and Indians as a child will recognize this gesture. To shoot with a gestural interface, one extends the fingers, hand and/or arm toward the target. (Making the pow-pow sound is optional.) Examples of this gesture include Will’s telecombat interface in Lost in Space (see Figure 5.8c), Syndrome’s zero-point energy beam in The Incredibles (Figure 5.14a) and Tony Stark’s repulsor beams in Iron Man (Figure 5.14b).

Figure 5.14a–b: The Incredibles (2004); Iron Man (2008)

7. Pinch and Spread to Scale

Given that there is no physical analogue to this action, its consistency across movies comes from the physical semantics: to make a thing bigger, indicate the opposite edges of the thing and drag the hands apart. Likewise, pinching the fingers together or bringing the hands together shrinks virtual objects. Tony Stark uses both of these gestures when examining models of molecules in Iron Man 2 (Figure 5.15).

Though there are other gestures, the survey revealed no other strong patterns of similarity across properties. This will change if the technology continues to mature in the real world and in sci-fi. More examples of it may reveal a more robust language forming within sci-fi, or reflect conventions emerging in the real world.

Figure 5.15a–b: Iron Man 2 (2010)
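
In implementation terms, the semantics of pinch and spread reduce to a ratio: the object’s new scale is the current distance between the hands divided by their distance when the gesture began. A minimal sketch in TypeScript, assuming hypothetical 3D hand positions from a tracker:

```typescript
// Sketch: pinch/spread-to-scale as a ratio of hand separations.
// The Vec3 positions would come from a hypothetical hand tracker.

type Vec3 = [number, number, number];

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function scaleFactor(
  startLeft: Vec3, startRight: Vec3, // hand positions when the gesture began
  left: Vec3, right: Vec3            // hand positions now
): number {
  // Spreading the hands to twice their starting separation doubles the
  // object; pinching to half the separation halves it.
  return distance(left, right) / distance(startLeft, startRight);
}
```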

Opportunity: Complete the Set of Gestures Required.

In the real world, users have some fundamental interface controls that movies never show but for which there are natural gestures. An example is volume control. Cupping or covering an ear with a hand is a natural gesture for lowering the volume, but because volume controls are rarely seen in sci-fi, the actual gesture for this control hasn’t been strongly defined or modeled for audiences. The first gestural interfaces to address these controls will have an opportunity to round out the vocabulary for the real world.
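
To make the opportunity concrete, here is a purely hypothetical sketch of one such binding: a cupped hand held near the ear nudges the volume down. The pose fields and thresholds are invented for illustration, not taken from any real tracker.

```typescript
// Sketch: a hypothetical "cup the ear" gesture mapped to volume-down.
// The pose fields and the 5 cm threshold are invented for illustration.

interface HandPose {
  palmToEarDistance: number; // meters, from a hypothetical skeleton tracker
  palmFacingHead: boolean;
}

function adjustVolume(pose: HandPose, volume: number): number {
  const cuppingEar = pose.palmFacingHead && pose.palmToEarDistance < 0.05;
  // For each frame the pose is held, nudge the volume toward silence.
  return cuppingEar ? Math.max(0, volume - 0.02) : volume;
}
```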

Lesson: Deviate Cautiously From the Gestural Vocabulary.

If these seven gestures are already established, it is because they make intuitive sense to different sci-fi makers and/or because the creators are beginning to repeat controls seen in other properties. In either case, the meaning of these gestures is beginning to solidify, and a designer who deviates from them should do so only with good reason or else risk confusing the user.
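
One way a design team might respect that solidifying vocabulary is to encode it and check new bindings against it. The sketch below is our own framing, not anything from the films; the names and the review function are illustrative assumptions.

```typescript
// Sketch: the seven-gesture vocabulary captured as a type, so a proposed
// gesture binding can be reviewed against the established conventions.

enum HollywoodGesture {
  WaveToActivate, // wake a device or gain its attention
  PushToMove,     // palms and arms translate an object
  TurnToRotate,   // hands push opposite sides around an axis
  SwipeToDismiss, // fling away from the body to discard
  PointToSelect,  // fingertip indicates or touches an option
  ExtendToShoot,  // fingers, hand or arm extend toward a target
  PinchToScale,   // changing hand separation resizes an object
}

const CONVENTIONAL_MEANING: Record<HollywoodGesture, string> = {
  [HollywoodGesture.WaveToActivate]: "activate",
  [HollywoodGesture.PushToMove]: "move",
  [HollywoodGesture.TurnToRotate]: "rotate",
  [HollywoodGesture.SwipeToDismiss]: "dismiss",
  [HollywoodGesture.PointToSelect]: "select",
  [HollywoodGesture.ExtendToShoot]: "shoot",
  [HollywoodGesture.PinchToScale]: "scale",
};

// Flags bindings that fight user expectations; mapping a dismissive swipe
// to "save", for example, would deviate without good reason.
function deviatesFromConvention(
  gesture: HollywoodGesture,
  boundMeaning: string
): boolean {
  return CONVENTIONAL_MEANING[gesture] !== boundMeaning;
}
```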

Direct Manipulation

An important thing to note about these seven gestures is that most are transliterations of physical interactions. This brings us to a discussion of direct manipulation. When used to describe an interface, direct manipulation refers to a user interacting directly with the thing being controlled — that is, with no intermediary input devices or screen controls.

For example, to scroll through a long document in an “indirect” interface, such as the Mac OS, a user might grasp a mouse and move a cursor on the screen to a scroll button. Then, when the cursor is correctly positioned, the user clicks and holds the mouse on the button to scroll the page. This long description seems silly only because it describes something that happens so fast and that computer users have performed for so long that they forget that they once had to learn each of these conventions in turn. But they are conventions, and each step in this complex chain is a little bit of extra work to do.

But to scroll a long document in a direct interface such as the iPad, for example, users put their fingers on the “page” and push up or down. There is no mouse, no cursor and no scroll button. In total, scrolling with the gesture takes less physical and cognitive work. The main promise of these interfaces is that they are easier to learn and use. But because they require sophisticated and expensive technologies, they haven’t been widely available until the past few years.
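
The contrast is visible in code. Here is a minimal sketch of direct-manipulation scrolling in the spirit of that example: the content offset tracks the finger one to one, with no cursor or scroll button in between. The class and its event methods are simplified assumptions, not any platform’s actual touch API.

```typescript
// Sketch: direct-manipulation scrolling, in which the page follows the
// finger one to one. Event shapes are simplified for illustration.

class ScrollableDocument {
  private offsetY = 0; // current scroll position of the content
  private lastTouchY: number | null = null;

  get position(): number {
    return this.offsetY;
  }

  touchStart(y: number): void {
    this.lastTouchY = y; // the finger lands on the "page"
  }

  touchMove(y: number): void {
    if (this.lastTouchY === null) return;
    // One-to-one mapping: the content moves exactly as far as the finger.
    this.offsetY += y - this.lastTouchY;
    this.lastTouchY = y;
  }

  touchEnd(): void {
    this.lastTouchY = null; // a real version would add momentum here
  }
}
```

Note what is absent: no cursor position, no scroll-button target, no click state. Each missing convention is a piece of learning the user no longer has to do.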

In sci-fi, gestural interfaces and direct manipulation strategies are tightly coupled. That is, it’s rare to see a gestural interface that isn’t direct manipulation. Tony Stark wants to move the volumetric projection of his father’s park, so he sticks his hands under it, lifts it and walks it to its new position in his lab. In Firefly, when Dr. Tam wants to turn the projection of his sister’s brain, he grabs the “plane” that it’s resting on and pushes one corner and pulls the other as if it were a real thing. Minority Report is a rare but understandable exception because the objects Anderton manipulates are video clips, and video is a more abstract medium.

This coupling isn’t a given. It’s conceptually possible to run Microsoft Windows 7, which is not a direct interface, entirely with gestures. But the fact that gestural interfaces erase the intermediaries on the physical side of things fits well with erasing the intermediaries on the virtual side of things, too. So, gesture is often direct. Still, this coupling doesn’t work for every need a user has. As we’ve seen above, direct manipulation works well for gestures with close physical analogues in the real world. But moving, scaling and rotating aren’t the only things one might want to do with virtual objects. What about more abstract control?

As we would expect, this is where gestural interfaces need additional support. Abstractions by definition don’t have easy physical analogues, and so they require some other solution. As seen in the survey, one solution is to add a layer of graphical user interface (GUI), as we see when Anderton needs to scrub back and forth over a particular segment of video to understand what he’s seeing, or when Tony Stark drags a part of the Iron Man exosuit design to a volumetric trash can (Figure 5.16). These elements are controlled gesturally, but they are not direct manipulation.

Figure 5.16a–c: Minority Report (2002); Iron Man (2008)

Invoking and selecting from among a large set of these GUI tools can become quite complicated and place a DOS-like burden on memory. Extrapolating this chain of needs might very well lead to a complete GUI for any fully featured gestural interface, unlike the clean, sparse gestural interfaces that sci-fi likes to present. The other solution seen in the survey for handling these abstractions is the use of another channel altogether: voice.

In one scene from Iron Man 2, Tony says to the computer, “JARVIS, can you kindly vacuform a digital wireframe? I need a manipulable projection.” Immediately JARVIS begins the scan. Such a command would be much more complex to issue gesturally. Language handles abstractions very well, and humans are pretty good at using language, so this makes language a strong choice.

Other channels might also be employed: GUI, finger positions and combinations, expressions, breath, gaze and blink, and even brain interfaces that read intention and brainwave patterns. Any of these might conceptually work but may not take advantage of the one human medium especially evolved to handle abstraction — language.

Lesson: Use Gesture for Simple, Physical Manipulations, and Use Language for Abstractions.

Gestural interfaces are engaging and quick for interacting in “physical” ways, but outside of a core set of manipulations, gestures are complicated, inefficient and difficult to remember. For less concrete abstractions, designers should offer some alternative means, ideally linguistic input.
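
Read as architecture, this lesson suggests a multimodal command model: the small physical vocabulary arrives from a gesture recognizer, and everything abstract arrives as language to be parsed. A minimal sketch, with both recognizers assumed rather than real:

```typescript
// Sketch: a multimodal router. Concrete spatial manipulations come from a
// gesture recognizer; abstractions arrive as speech. Both are hypothetical.

type ManipulationCommand =
  | { kind: "move"; dx: number; dy: number }
  | { kind: "rotate"; radians: number }
  | { kind: "scale"; factor: number };

type AbstractCommand = { kind: "speech"; utterance: string };

type UserInput = ManipulationCommand | AbstractCommand;

function execute(input: UserInput): void {
  switch (input.kind) {
    // The physical vocabulary maps cleanly onto gesture input...
    case "move":
      console.log(`move by (${input.dx}, ${input.dy})`);
      break;
    case "rotate":
      console.log(`rotate by ${input.radians} radians`);
      break;
    case "scale":
      console.log(`scale by ${input.factor}`);
      break;
    // ...while anything abstract arrives as language for a parser.
    case "speech":
      console.log(`parse and run: "${input.utterance}"`);
      break;
  }
}

// Usage: a spread of the hands and a spoken request travel as peers.
execute({ kind: "scale", factor: 1.5 });
execute({ kind: "speech", utterance: "save this layout and email it to me" });
```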

Gestural Interfaces: An Emerging Language

Gestural interfaces have enjoyed a great deal of commercial success over the last several years with the popularity of gaming platforms such as Nintendo’s Wii and Microsoft’s Kinect, as well as gestural touch devices like Apple’s iPhone and iPad. The term “natural user interface” has even been bandied about to describe them. But the examples from sci-fi show us that gesturing is “natural” for only a small subset of possible actions on a computer. More complex actions require additional layers of other types of interfaces.

Gestural interfaces are highly cinemagenic, rich with action and graphical possibilities. Additionally, they fit the stories of remote interactions that are becoming more and more relevant in the real world as remote technologies proliferate. So, despite their limitations, we can expect sci-fi makers to continue to include gestural interfaces in their stories for some time, which will help to drive the adoption and evolution of these systems in the real world.

This post is an excerpt of Make It So: Interaction Design Lessons From Science Fiction by Nathan Shedroff and Christopher Noessel (Rosenfeld Media, 2012). You can read more analysis on the book’s website.

In his day job as a Managing Director at Cooper, Christopher designs products, services, and strategy for a variety of domains, including health, financial, and consumer. In prior experience he’s been a small business owner, developed kiosks for museums, helped to visualize the future of counter-terrorism, built prototypes of coming technologies for Microsoft, and designed telehealth devices to accommodate the crazy facts of modern healthcare.

His spidey sense goes off about random topics, leading him to speak about a range of things including interactive narrative, ethnographic user research, interaction design, sex-related interactive technologies, historical epochs in technology and ways to think about the coming one, free-range learning, and, most recently, the relationship between sci-fi and interface design in the book Make It So: Interaction Design Lessons from Science Fiction.
