What Leap Motion And Google Glass Mean For Future User Experience


Editor’s note: Please note that this article explores an entirely hypothetical scenario, and these are opinions, some of which you may not agree with. However, the opinions are based on current trends, statistics and existing technology. If you’re the kind of designer who is interested in developing the future, the author encourages you to read the sources that are linked throughout the article.

With the Leap Motion controller being released on July 22nd and the Google Glass Explorer program already live, it is obvious that the mouse, and perhaps even the monitor, will eventually become obsolete as our means of interacting with the Web.

The above statement seems like a given, considering that technology moves at such a rapid pace. Yet in 40 years of personal computing, our methods of controlling our machines haven’t evolved beyond using a mouse, keyboard and perhaps a stylus. Only in the last six years have we seen mainstream adoption of touchscreens.

Given that emerging control devices such as the Leap Motion controller enable us to interact with near pixel-perfect accuracy in 3-D space, our computers will become less like dynamic pages of a magazine and more like windows to another world. To make sure we’re on the same page, please take a minute to check out what the Leap Motion controller1 can do:


Introducing the Leap Motion2

Thanks to monitors becoming portable with Google Glass (and the competitors that are sure to follow), it’s easy to see that the virtual world will no longer be bound to flat two-dimensional surfaces.

In this article, we’ll travel five to ten years into the future and explore a world where Google Glass, Leap Motion and a few other technologies are as much a part of our daily lives as our smartphones and desktops are now. We’ll be discussing a new paradigm of human-computer interface.

The goal of this piece is to start a discussion with forward-thinking user experience designers, and to explore what’s possible when the mainstream starts to interact with computers in 3-D space.

Setting The Stage: A Few Things To Consider

Prior to the introduction of the iPhone in 2007, many considered the smartphone to be for techies and business folk. But in 2013, you’d be hard pressed to find someone in the developed world who isn’t checking their email or tweeting at random times.

So, it’s understandable to think that a conversation about motion control, 3-D interaction and portable monitors is premature. But if the mobile revolution has taught us anything, it’s that people crave connection without being tethered to a stationary device.

To really understand how user experience (UX) will change, we first have to consider the possibility that social and utilitarian UX will take place in different environments. In the future, people will use the desktop primarily for utilitarian purposes, while “social” UX will happen on a virtual layer overlaying the real world (thanks to Glass). Early indicators of this are that Facebook anticipates that its mobile growth will outpace its PC growth3 and that nearly one-seventh of the world’s population owns a smartphone4.

The only barrier right now is that we lack the technology to truly merge the real and virtual worlds. But I’m getting ahead of myself. Let’s start with something more familiar.

The Desktop

Right now, UX on the desktop cannot be truly immersive. Every interaction requires physically dragging a hunk of plastic across a flat surface, which approximates a position on screen. While this is accepted as commonplace, it’s quite unnatural. The desktop is the only environment where you interact with one pixel at a time.

Sure, you could create the illusion of three dimensions with drop shadows and parallax effects, but that doesn’t change the fact that the user may interact with only one portion of the screen at a time.

This is why the Leap Motion controller is revolutionary. It allows you to interact with the virtual environment using all 10 fingers and real-world tools in 3-D space. It is as important to computing as analog sticks were to video games.
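
To give a rough sense of what that 3-D input looks like to a developer, here is a minimal sketch of reading hand and fingertip positions in the browser, assuming Leap Motion’s leap.js JavaScript library is loaded on the page (the frame, hand and finger fields shown follow its documented shape, but treat the details as illustrative rather than definitive):

```typescript
// A minimal sketch of reading Leap Motion data in the browser, assuming the
// leap.js library is loaded via a <script> tag. The `declare` keeps this
// self-contained as TypeScript.
declare const Leap: { loop(callback: (frame: any) => void): void };

Leap.loop((frame) => {
  for (const hand of frame.hands) {
    const [x, y, z] = hand.palmPosition; // millimetres, relative to the device
    console.log(`palm at ${x.toFixed(0)}, ${y.toFixed(0)}, ${z.toFixed(0)}`);

    for (const finger of hand.fingers) {
      const [fx, fy, fz] = finger.tipPosition;
      console.log(`  fingertip at ${fx.toFixed(0)}, ${fy.toFixed(0)}, ${fz.toFixed(0)}`);
    }
  }
});
```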

The Shift In The Way We Interact With Machines

To wrap our heads around just how game-changing this will be, let’s go back to basics. One basic UX and artificial intelligence test for any new platform is a simple game of chess.

Virtual Chess5
(Image: Wikimedia Commons6)

In the game of chess below, thanks to motion controllers and webcams, you’ll be able to “reach in” and grab a piece, as you watch your friend stress over which move to make next.

Now you can watch your opponent sweat.7 (Image: Algernon D’Ammassa8)

In a game of The Sims, you’ll be able to rearrange furniture by moving it with your hands. CAD designers will use their hands to “physically” manipulate components (and then send their designs to the 3-D printer they bought from Staples9 for prototyping).
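
As a hedged illustration of how “reach in and grab” might be wired up, the sketch below maps a closing hand to picking up, dragging and dropping a virtual object. The grabStrength value comes from the Leap API; findObjectNear, pickUp, moveTo and release are hypothetical hooks into whatever scene code (a chess board, a Sims room, a CAD model) would actually be driven:

```typescript
// Sketch: use grab strength to pick up, drag and release a virtual object.
// Only the Leap frame data is real; the four declared functions below are
// hypothetical hooks into your own 3-D scene.
declare const Leap: { loop(callback: (frame: any) => void): void };
declare function findObjectNear(position: number[]): string | null;
declare function pickUp(id: string): void;
declare function moveTo(id: string, position: number[]): void;
declare function release(id: string): void;

let held: string | null = null; // id of the object currently being dragged

Leap.loop((frame) => {
  const hand = frame.hands[0];
  if (!hand) return;

  const grabbing = hand.grabStrength > 0.8; // 0 = open hand, 1 = closed fist

  if (grabbing && held === null) {
    held = findObjectNear(hand.palmPosition); // e.g. the nearest chess piece
    if (held !== null) pickUp(held);
  } else if (grabbing && held !== null) {
    moveTo(held, hand.palmPosition);          // follow the hand through 3-D space
  } else if (!grabbing && held !== null) {
    release(held);                            // drop it where the hand opened
    held = null;
  }
});
```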

While the lack of tactile feedback might deter mainstream adoption early on, research into haptics10 is already enabling developers to simulate physical feedback in the real world to correspond with the actions of a user’s virtual counterpart. Keep this in mind as you continue reading.

Over time, this level of 3-D interactivity will fundamentally change the way we use our desktops and laptops altogether.

Think about it: The desktop is a perfect, quiet, isolated place to do more involved work like writing, photo editing or “hands-on” training to learn something new. However, a 3-D experience like those mentioned above doesn’t make sense for social interactions such as on Facebook, or even for reading the news, which are better suited to mobile11.

With immersive, interactive experiences being available primarily via the desktop, it’s hard to imagine users wanting these two experiences to share the same screen.

So, what would a typical desktop experience look like?

Imagine A Cooking Website For People Who Can’t Cook

With this cooking website for people who can’t cook, we’re not just talking about video tutorials or recipes with unsympathetic instructions, but rather immersive simulations in which an instructor leads you through making a virtual meal from prep to presentation.

Interactions in this environment would be so natural that the real design challenge is to put the user in a kitchen that’s believable as their own.

You wouldn’t click and drag the icon that represents sugar; you would reach out with your virtual five-fingered hand and grab the life-sized “box” of Domino-branded sugar. You wouldn’t click to grease the pan; you’d mimic pushing the aerosol nozzle of a bottle of Pam.

The Tokyo Institute of Technology has already built such a simulation in the real world. So, transferring the experience to the desktop is only a matter of time.


Cooking simulator will help you cook a perfect steak every time12

UX on the future desktop will be about simulating physics and creating realistic environments, as well as tracking head, body and eyes13 to create intuitive 3-D interfaces, based on HTML5 and WebGL14.
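
As one concrete (and deliberately simplified) example, here is a sketch of a WebGL scene whose camera follows the viewer’s head to create that “window into another world” parallax. The three.js calls are real; getHeadPosition is a hypothetical placeholder for whatever webcam- or Glass-based tracker would supply the data:

```typescript
// Sketch: a three.js scene whose camera follows the viewer's head, producing a
// parallax "window" effect. getHeadPosition is a hypothetical stand-in for a
// head tracker returning normalised -1..1 offsets.
declare const THREE: any;
declare function getHeadPosition(): { x: number; y: number };

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Something to look "into the window" at.
const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

function render(): void {
  const head = getHeadPosition();
  camera.position.set(head.x * 2, head.y * 2, 5); // sign depends on the tracker's convention
  camera.lookAt(cube.position);                   // keep the scene centred as the camera shifts
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
render();
```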

Aside from the obvious hands-on applications, such as CAD and art programs, the technology will shift the paradigm of UX and user interface (UI) design in ways that are currently difficult to fathom.

The problem is that we currently lack a set of clearly defined 3-D gestures for interacting with a 3-D UI. Designing UIs will be hard without knowing what our bodies will have to do to operate them.
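
In the meantime, nothing stops us from prototyping a vocabulary of our own on top of the raw data the controller already reports. Here is a small, hedged example: a horizontal-swipe detector built on palm velocity (palmVelocity is part of the Leap frame data; the speed threshold, the cooldown and the onSwipe handler are arbitrary choices for illustration):

```typescript
// Sketch: classify a fast horizontal hand movement as a left/right "swipe".
// palmVelocity is reported by the controller in mm/s; the threshold, the
// cooldown and the onSwipe callback are illustrative choices, not a standard.
declare const Leap: { loop(callback: (frame: any) => void): void };

const SWIPE_SPEED = 800;   // mm/s, tune to taste
let cooldownUntil = 0;     // avoid firing on every frame of a single motion

function onSwipe(direction: "left" | "right"): void {
  console.log(`swipe ${direction}`); // e.g. flip a page, switch workspaces
}

Leap.loop((frame) => {
  const hand = frame.hands[0];
  if (!hand || Date.now() < cooldownUntil) return;

  const [vx] = hand.palmVelocity;
  if (Math.abs(vx) > SWIPE_SPEED) {
    onSwipe(vx > 0 ? "right" : "left");
    cooldownUntil = Date.now() + 500; // half a second between swipes
  }
});
```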

The closest we have right now to defined gestures are those created by Kinect hackers15 and by John Underkoffler of Oblong Industries16 (the team behind Minority Report’s UI).

In his TED talk from 2010, Underkoffler demonstrates probably the most advanced example of 3-D computer interaction that you’re going to see for a while. If you’ve got 15 minutes to spare, I highly recommend watching it:


John Underkoffler’s talk, “Pointing to the Future of UI”17

Now, before you start arguing, “Minority Report isn’t practical — humans aren’t designed for that!” consider two things:

  1. We won’t likely be interacting with 60-inch room-wrapping screens the way Tom Cruise does in Minority Report; therefore, our gestures won’t need to be nearly as big.
  2. The human body rapidly adapts to its environment. Between the years 2000 and 2010, a period when home computers really went mainstream, reports of Carpal Tunnel Syndrome dropped by nearly 8%18.

Graph of carpal tunnel syndrome cases19
(Image: Minnesota Department of Health20)

However, because the Leap Motion controller is less than $80 and will be available at Best Buy, this technology isn’t just hypothetical, sitting in a lab somewhere, with a bunch of geeks saying “Wouldn’t it be cool if…”

It’s real and it’s cheap, which really means we’re about to enter the Wild West of true 3-D design.

Social Gets Back To The Real World

So, where does that leave social UX? Enter Glass.

It’s easy to think that head-mounted augmented reality (AR) displays, such as Google Glass, will not be adopted by the public, and in 2013 that might be true.

But remember that we resisted the telephone when it came out, for many of the same privacy concerns21. The same goes for mobile phones22 and for smartphones23 around 2007.

So, while first-generation Glass won’t likely be met with widespread adoption, it’s the introduction of a new phase. ABI Research predicts that the wearable device market will exceed 485 million annual shipments by 2018.24

According to Steve Lee, Glass’ product director, the goal is to “allow people to have more human interactions” and to “get technology out of the way.”

First-generation Glass performs Google searches, tells time, gives turn-by-turn directions, reports the weather, snaps pictures, records video and does Hangouts — which are many of the reasons why our phones are in front of our faces now.

Moving these interactions to a heads-up display, while moving important and more heavy-duty social interactions to a wrist-mounted display, like the Pebble smartwatch25, eliminates the phone entirely and enables you to truly see what’s in front of you.

The Pebble smartwatch26
(Image: Pebble27)

Now, consider the possibility that something like the Leap Motion controller could become small enough to integrate into a wrist-mounted smartwatch. This, combined with a head-mounted display, would essentially give us the ability to create an interactive virtual layer that overlays the real world.

Add haptic wristband28 technology and a Bluetooth connection to the smartwatch, and you’ll be able to “feel” virtual objects29 as you physically manipulate them, both in the real world and on the desktop. While this might still sound like science fiction, with Glass reportedly priced between $299 and $499, Leap Motion at $80 and Pebble at $150, widespread affordability of these technologies isn’t far-fetched.
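
To make “feeling” a virtual object slightly more concrete, here is a purely hypothetical sketch: HapticBand is an invented interface standing in for some future wristband, and the only real logic is the distance test that decides when (and how hard) to pulse it:

```typescript
// Hypothetical sketch: pulse a haptic wristband when the user's hand "touches"
// a virtual object. HapticBand is an invented interface, not a shipping API;
// the distance test is the only real logic here.
interface HapticBand {
  pulse(intensity: number, durationMs: number): void;
}

type Vec3 = [number, number, number];

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function maybeGiveFeedback(band: HapticBand, hand: Vec3, centre: Vec3, radius: number): void {
  const gap = distance(hand, centre) - radius;
  if (gap <= 0) {
    // The deeper the hand is "inside" the object, the stronger the pulse.
    const intensity = Math.min(1, -gap / radius);
    band.pulse(intensity, 30);
  }
}
```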

Social UX In The Future: A Use Case

Picture yourself walking out of the mall when your close friend Jon updates his status. A red icon appears in the top right of your field of vision. Your watch displays Jon’s avatar, which says, “Sooo hungry right now.”

You say, “OK, Glass. Update status: How about lunch? What do you want?” and keep walking.

“Tacos.”

You say, “OK, Glass. Where can I get good Mexican food?” 40 friends have favorably rated Rosa’s Cafe30. Would you like directions? “Yes.” The navigation starts, and you’re en route.

You reach the cafe, but Jon is 10 minutes away. Would you like an audiobook while you wait? “No, play music.” A smart playlist compiles exactly 10 minutes of music that perfectly fits your mood.

“OK, Glass. Play Angry Birds 4.”

Across the table, 3-D versions of the little green piggies and their towers materialize.

In front of you are a red bird, a yellow bird, two blue birds and a slingshot. The red bird jumps up; you pull back on the slingshot; the trajectory beam shows you a path across the table; you let go and knock down a row of bad piggies.

Suddenly, an idea comes to you. “OK, Glass. Switch to Evernote.”

A piece of paper and a pen are projected onto the table in front of you, and a bulletin board appears to the left.

You pick up the AR pen, jot down your note, move the paper to the appropriate bulletin, and return to Angry Birds.

You could make your game visible to other Glass wearers. That way, others could play with you — or, at the very least, would know you’re not some crazy person pretending to do… whatever you’re doing across the table.

When Jon arrives, notifications are disabled. You push the menu icon on the table and select your meal. Your meal arrives; you take photos of your food; eat; publish to Instagram 7.

Before you leave, the restaurant gives a polite notification, letting you know that a coupon for 10% off will be sent to your phone if you write a review.

How Wearable Technology Interacts With Desktops

Later, having finished the cooking tutorial on the desktop, you decide it’s time to make the meal for real. You put on Glass and go to the store. The headset guides you directly to the brands that were advertised “in game.” After picking out your ingredients, you receive a notification that a manufacturer’s coupon has been sent to your phone and can be used at the check-out.

When you get home, you lay a carrot on the cutting board and an overlay projects guidelines on where to cut. You lay out the meat, and a POW graphic is overlaid, showing you where to hit for optimal tenderness:

Augmented Meat

You put the meat in the oven; Glass starts the timer. You put the veggies in the pan; Glass overlays a pattern to show where and when to stir.

While you were at the store, Glass helped you to pick out the perfect bottle of wine to pair with your meal (based on reviews, of course). So, you pour yourself a glass and relax while you wait for the timer to go off.

In the future, augmented real-world experiences will become real business. The more you enhance real life, the more successful your business will be. After all, is it really difficult to imagine this cooking experience being turned into a game?

What Can We Do About This Today?

If you’re the kind of UI designer who seeks to push boundaries, then the best thing you can do right now is think. Because the technology isn’t 100% available, the best you can do is open your imagination to what will be possible when the average person has evolved beyond the keyboard and mouse.

Draw inspiration from websites and software that simulate depth to create dynamic, layered experiences that can be easily operated without a mouse. The website of agency Black Negative31 is a good example of future-inspired “flat” interaction. It’s easy to imagine interacting with this website without needing a mouse. The new Myspace32 is another.
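
One practical exercise, sketched here under plain-DOM assumptions (elements with a hypothetical “layer” class stacked in the page), is to drive layered “depth” navigation with coarse directional input instead of precise pointer targets: arrow keys today, a wave of the hand tomorrow.

```typescript
// Sketch: step through stacked "depth" layers with coarse directional input
// (arrow keys now, hand swipes later). Assumes the page contains elements with
// a hypothetical "layer" class; the scale/opacity treatment is illustrative.
const layers = Array.from(document.querySelectorAll<HTMLElement>(".layer"));
let current = 0;

function show(index: number): void {
  current = Math.max(0, Math.min(layers.length - 1, index));
  layers.forEach((layer, i) => {
    const depth = i - current; // 0 = front layer, positive = further "behind"
    layer.style.transition = "transform 0.4s, opacity 0.4s";
    layer.style.transform = `scale(${1 - depth * 0.1})`;
    layer.style.opacity = depth < 0 ? "0" : String(Math.max(0, 1 - depth * 0.3));
  });
}

document.addEventListener("keydown", (event) => {
  if (event.key === "ArrowDown" || event.key === "ArrowRight") show(current + 1);
  if (event.key === "ArrowUp" || event.key === "ArrowLeft") show(current - 1);
});

show(0);
```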

To go really deep, look at the different Chrome Experiments33, and find a skilled HTML5 and WebGL developer to discuss what’s in store for the future. The software and interactions that come from your mind will determine whether these technologies will be useful.

Conclusion

While everything I’ve talked about here is conceptual, I’m curious to hear what you think about how (or even if) these devices will affect UIs. I’d also love to hear your vision of future UIs.

To get started, let me ask you two questions:

  1. How will the ability to reach into the screen and interact with the virtual world shape our expectations of computing?
  2. How will untethering content from flat surfaces fundamentally change the medium?

I look forward to your feedback. Please share this article if you’ve enjoyed this trip into the future.


Footnotes

  1. http://www.youtube.com/embed/_d6KuiuteIA
  2. http://www.youtube.com/embed/_d6KuiuteIA
  3. http://thenextweb.com/facebook/2012/10/24/facebook-anticipates-growth-happening-in-mobile-usage-rather-than-through-personal-computers/
  4. http://www.go-gulf.com/blog/smartphone/
  5. http://commons.wikimedia.org/wiki/File:Agat-7_Chess.png
  6. http://commons.wikimedia.org/wiki/File:Agat-7_Chess.png
  7. http://algerblog.blogspot.com/2011/12/quality-time.html
  8. http://algerblog.blogspot.com/2011/12/quality-time.html
  9. http://www.staples.com/3-D-Printing/cat_CL205651?icid=SearchResults3-D
  10. http://www.disneyresearch.com/project/surround-haptics-immersive-tactile-experiences/
  11. http://www.businessesgrow.com/2013/03/27/what-a-blog-post-will-look-like-in-2020/
  12. http://www.youtube.com/embed/F565MHCfsSo
  13. https://www.youeye.com/how-it-works
  14. http://www.awwwards.com/22-experimental-webgl-demo-examples.html
  15. http://www.youtube.com/results?search_query=kinect+ui+hacks&oq=kinect+ui+hacks&gs_l=youtube.3...903891.906571.0.906817.15.15.0.0.0.0.134.938.14j1.15.0...0.0...1ac.1.hF5nT0Br_Ik
  16. http://www.oblong.com/g-speak/
  17. http://www.youtube.com/embed/b6YTQJVzwlI
  18. http://www.health.state.mn.us/divs/hpcd/cdee/occhealth/indicators/carpal-tunnel.html
  19. http://www.health.state.mn.us/divs/hpcd/cdee/occhealth/indicators/carpal-tunnel.html
  20. http://www.health.state.mn.us/divs/hpcd/cdee/occhealth/indicators/carpal-tunnel.html
  21. http://news.cnet.com/8301-1023_3-57573966-93/google-glass-and-the-third-half-of-your-brain/
  22. http://www.maebrussell.com/Articles%20and%20Notes/Do%20cell%20phones%20cook%20cells.html
  23. http://www.computerworld.com/s/article/9014118/Ten_dangerous_claims_about_smart_phone_security
  24. http://www.abiresearch.com/press/wearable-computing-devices-like-apples-iwatch-will
  25. http://getpebble.com/
  26. http://getpebble.com
  27. http://getpebble.com/
  28. http://www.popsci.com/technology/article/2012-09/haptic-armband-improves-muscle-memory-helping-blind-athletes-train-better
  29. http://www.popsci.com/technology/article/2010-07/fingertip-mounted-haptic-interface-lets-you-feel-virtual-3-d-objects
  30. http://facebook.com/rosascafe
  31. http://blacknegative.com/
  32. http://new.myspace.com
  33. http://www.chromeexperiments.com/


Tommy Walker is the host of two YouTube shows, Inside The Mind and The Mindfire Chats, both designed to mainstream the concepts of online marketing and push the industry forward. On the next edition of The Mindfire Chats, he will be discussing "How To Spark Disruptive Innovation" with a venture capitalist, an indie filmmaker and a TechCrunch Disrupt finalist.

Comments
  1. 1

    With the Leap Motion controller, you have to consider fatigue. It is very hard to hold your arms up in the air and outwards for an extended period of time. It definitely looks like a fun experience and maybe an addition to existing technology, but I certainly wouldn’t call it a potential replacement.

    22
    • 2

      I think it’s really going to depend on the application, right?

      Think about how much your hand is on the mouse now for general browsing. If you were to translate that to a movement, are your hands really going to be up and waving around, or are you just going to be pointing at things periodically?

      I don’t think the mouse is going anywhere soon, but if you analyze how you use, I don’t think it’s going to be quite as extreme as holding your arms up all the time either.

      11
      • 3

        I accidentally down voted when meaning to up vote you and apparently can’t change it??? Sorry

        0
      • 5

        Really interesting thought. I would imagine that for many day-to-day tasks, assuming that the motion detection is accurate enough, we could simply keep our arm in a relaxed position on the desk, as if using a mouse.

        The difficulty is that we have been using the same interface for so long that, while we all want innovation, we are scared to give up our traditional mouse and keyboard setup.

If this kind of technology had been available in the early days of personal computing, there’s no way we would have opted for a mouse.

        Great article- gets me excited!

        0
      • 6

        I think the Apple Watch (Dick Tracy come in!) and Google Googles are the gayest things to ever be thought up. If average people were engaged in cyber espionage and were blowing up buildings from pay phones a la “Hackers” and rocking the Dade Murphy one lens spectacle whilst banging a teenage Angelina Jolie and attending rave parties I could get with it. That being said, however, things like http://www.chromeexperiments.com/detail/gesture-based-revealjs/ and http://bgr.com/2013/06/11/google-glass-contact-lens/ lead me to believe that this type of paradigm shift can be done right and done cool.

        -8
        • 7

          What it’s really going to come down to are the apps that are produced to go along with the tech.

          I think the Dick Tracy watch is one of the least practical use cases, for the reasons you’ve said, but think it’s much better to act as an “at a glance” tool. I would much rather glance at my watch to see a status update than dig my phone out of my pocket.

          The Gesture based experiment was exactly what I was thinking of when it came to the desktop, and now I want to see that work in multiple dimensions :-)

          3
      • 8

        I have a touch desktop that I originally purchased to enhance 3d modeling. I actually prefer using specialized mice now! The only thing that was made better was texture painting but, the only way it was better is when you tip the monitor and use it like a table.

        I think that the solution is eye tracking and abstract tools like specialized mice. I was surprised to see that google glass came with a touch surface on it. That functionality should be farmed out to a watch or phone. You know, a place that feels normal for your hands.

        Also, interacting with “depth” has no feedback. That is way more abstract than a mouse.

        1
        • 9

          I didn’t do a very good job at demonstrating the use case on that last point, but with haptics research where it’s at now, it’s not out of the realm of possibility that a smart watch would be able to stimulate the appropriate nerves to simulate touch feedback with virtual objects.

          “Add haptic wristband technology and a Bluetooth connection to the smartwatch, and you’ll be able to “feel” virtual objects as you physically manipulate them in both the real world and on the desktop.”

          0
      • 10

I totally agree; gesture-based interfaces at the moment seem to be trying to replicate the point-and-click model of interaction. The biggest culprit of this, I feel, is the Microsoft Kinect’s menu system on the Xbox. You have to reach out to elements and hover over them for them to be selected.

        A much better way (imo) would be to split the screen into regions (Top, Top-right, Right, bottom-right, Bottom, Bottom-left, Left & Top-left) that way you can gesture anywhere (beside you, or arm outstretched) and the camera can detect which region your hand/finger moves in (taking your arm as the ‘centre’ reference point) allowing for a much lower exertion from a user. This region method could also be applied to the thumbsticks and other applications outside of Xbox.

        I know that is a very specific example (and one i’m currently looking at researching further for funsies) but I think what’s holding us back is applying this technology to the point and click method rather than thinking of new ways of interacting. But gesture based control is in it’s infancy and it’s up to people like us to create these new methods, which is a pretty amazing prospect.

        0
        • 11

That sounds like a fascinating system :-) Do you have any working prototypes? I’d love to take a look!

          0
          • 12

            So sorry for the delay in reply Tommy, I assumed i’d get an email notification but apparently not haha. Unfortunately It’s not at that stage yet, that annoying old commitment of Work has put this on the backburner at the moment, hopefully i’ll have enough time over the next couple of months to get some of my ideas down into something tangible. I’ll definitely keep you in the loop.

            0
        • 13

          Yeah, to me, we have to really start throwing preconceived notions out the window. “Clicking” will be thought of in the same capacity as “Operator, can you get me…?”. We have to come up not only with new gestures, but step back and realize just exactly how we can interact in the 3D world. In time, we’ll realize that since the dawn of human existence, we’ve been interacting in the 3D world, and we’ve gotten to a place where we’re not “interacting” in an artificial sense, but simply “doing” as we do every day in our physical world.

          0
    • 14

      Totally. Tom Cruise reportedly had to take breaks while filming Minority Report as it was very tiresome. But that’s where we as designers can really think about how this technology can enhance someones life. Imagine a car audio player that you gesturally control, with voice compliment of course. So this entire interface is controlled with your arm comfortably on the arm rest and the extent of your motion is just moving your fingers and wrist around.

      Just a thought. I’m definitely excited about this.

      1
      • 15

        Something else to keep in mind is the rigors of working on a movie set, which is often made up of 14 hour – 16 hour days. Granted, I have days at work like that too, but I imagine “air selecting” a tab on my browser would involve less interaction with my hands, not more – making the “gorilla arm” argument a non issue in my mind.

        Your use case in the car is amazing, and the auto manufacturers are actually working on that right now, so you’re on the right path there!

        My question is, what do you think the 3D desktop would look like?

        0
        • 16

          Alexandre Sartini

          June 16, 2013 2:32 am

          actually I don’t think 3D Desktop alone has a real future. I have one reason for this : entropy. Creating a third dimension creates a greater disorder for retrieving information.

          At some point you always need to get back on 2D.

          In the video anytime he would represent the files or pictures in 3D it really gets messy and when it falls back on 2D, order follows naturally.

          I hate looking for a document in my file cabinet because documents hides each other for example. When viewing a 3D graph, I always take a look at 2 axes at a time to understand the mechanism.

          3D is good to get an overall view, but you need 2D to get to the details.

          2
          • 17

            Might that be because a good way of controlling a 3D workspace hasn’t truly existed yet?

            I think there’s merit to what you’re saying, but file systems & doc retrieval are only one element of computing. What if the file system were 2.5D and displayed like a HUD when it came to other interactions.

            Could it be that no one’s nailed it yet because we havent had a good measure of controlling it? Bumptop comes to mind as a 3D system, and I agree that was a mess, but mainly because it was a simulation controlled with a mouse. What if there was a hybrid model? And how could you evolve the controls?

            -1
        • 18

          Well, it is easy to test. Just sit behind your desktop for 4 hours and move your arms and hands as-if. Then see if you can keep that up… I could not. Keeping your arms up and outstretched for more than half an hour is extremely difficult.
          I see definitive advantages in some applications, but I doubt very much it will be a relaxed way of working for regular day-jobs like CAD draughtsman or 3D modelers who work hours upon hours doing constant manipulation.

One other thing I am wondering… why has nobody brought a keyboard to market that consists of a touchscreen the size of a regular keyboard? The advantages would be that the keys could be displayed differently (for different languages or programs like Photoshop shortcuts or games), as well as have areas with small OSD-like screens.

          2
          • 19

            What a cool idea behind the touchscreen keyboard! You’re right, I wonder why no one has actually done that yet.

As far as the four hour exercise is concerned, I actually did that when I was writing the article to see if this whole thing was really feasible. What I found was that instead of waving my arms around a whole lot, there were periodic lifts to push “buttons” and tabs – which was far more convenient than having to move my mouse from one corner of the screen to the next. In all fairness, I work with 3 very large monitors, but still it was much easier to glance and point than it was to move my mouse from monitor one to monitor three.

            With your point about CAD and 3D modelers, you’re right, it’s not likely that someone who’s been in the space for a long amount of time is going to get ready to give up their “tools”. What it does mean though is a skilled clay artist could more easily manipulate their virtual model as though it were in the real world. Reducing costs on a production, because you wouldn’t need a physical modeler, digital modeler, & a rigger.

            Likewise, with CAD, it’s unlikely engineers are going to give it up, but what it does do is allows the Mechanic who says “You know what I would do differently?” to take things from his imagination and bring them into reality with fewer learning curves. With 3D printers becoming more available, getting workable prototypes built will also be more accessible.

            See, it’s not about totally “replacing” the tools (at least not right away) it’s about thinning the barrier for those with ideas to make those reality. It’s what Paypal did for small vendors, what Youtube did for musicians for filmmakers, and what these new forms of control are going to do for many others, in ways that we can’t even begin to imagine right now.

            And that, to me, is exciting :-)

            -1
        • 20

          And now start thinking how long a regular PRO works on a computer nowadays. Yeah, 8+ hours are totally RL.

          My regular time sitting in front of the computer is approx 12 -14 hours per day. Most of the time I’m working.

          And I DO like HAPTIC feedback – which is the reason why I switched back from an ultrasuperdupermodern, wireless desktop (keyboard + mouse) to a Cherry G80-3000 “clicky” and a cable-connected gaming mouse.

          The missing haptical feedback combined with the stressful, UNnatural position is, what makes this gadget useless.

          cu, w0lf.

          0
      • 21

        It’s true that Spielberg specifically wanted cinematic gestures—Tom Cruise as a conductor sifting though mass amounts of data—but it’s completely false that he had to take breaks while filming. (Confirmed moments ago by the designer of the interface who was lying at his feet during filming.)

        2
        • 22

          And.. if you’re interested in the specifics of that design process, you should check out this talk by the man behind that technology, John Underkoffler: https://vimeo.com/54027470

          The Minority Report bits start at ~19:30, but the whole talk is relevant to this discussion.

          0
  2. 23

Leap Motion is going to delay again for sure. There’s less than a month left (after many push-backs), and there’s still no generic function showcase (i.e. using it as a mouse on a Win 8 or OSX computer). Better not to put high hopes on this device; by the time they finish it, the device will already have lost the hype and probably still be full of bugs.

I held back my pre-order in Feb since they refused to show how it works as a mouse; two of my friends cancelled their pre-orders in Apr. Never looked back.

    3
  3. 27

    I’ll stick with carpal tunnel over gorilla arm any day. Arm Circle exercises tend to hurt after awhile.

    4
  4. 29

    Great article but I still think that the future of interacting with a computer will be through the mind.
    http://youtu.be/Qz2XR3xcx60?t=35s

    4
    • 30

      You know, I wanted to include that in here too, because I agree. The form factor is too perfect to not be including it with the rest of this tech mix :-)

      Imagine thought controlled web browsing?!

      I’d be in trouble for sure…

      2
    • 31

      Good, then I can goto sleep and let my dreaming do my work for me!

      2
  5. 32

    Leap motion looks like a touchscreen without touching the screen. I think most users would prefer to touch the screen.

    Also, I keep hearing that the mouse is an unnatural movement to control a computer. I really disagree with that statement. The action you perform on the table translate 1 on 1 to your screen, just like pressing a key on your keyboard.

    6
    • 33

      The mouse is a great tool, but it is learned behavior. Have you ever watched someone use a mouse for the first time? The interaction is not immediately intuitive, but that’s easy to take for granted after years of daily use. Think back to the first time you used a track pad, or maybe a track ball, or a console controller. These are all great tools and young users in particular learn them quickly, but the translation is not 1 to 1. These devices and their limitations stand between our bodily intentions and the virtual things we manipulate.

      The mouse affords us precision—much in the way using a pen is more precise than finger-painting. But in just a few years touch interfaces have come a long way in offering that same precision. I use a trackpad with ease for precision design work. And the same kinds of advances are now happening in real space. The Kinect is great at understanding the body—and the kinds of full-body gestures that really can make you tired—but better algorithms for 3D cameras (Leap and Oblong) are tracking individual fingers and more subtle gestures that make precision a reality.

      The article is considering a future where interacting with the digital world aligns with the way we interact with the physical world: pointing, grabbing, moving, and gesturing to control and add meaning to those interactions. It may be hard to comprehend now, but we shouldn’t discount the potential of these kinds of natural interactions.

      0
    • 35

      regarding touching the screen, its not always as easy as one might think.
      sit in front of your desktop and try to touch your screen to move objects around or select items. after a while you’ll find that you need to rearrange your work area and get closer to the screen.
      moreover, sometimes you just cant (or dont want) to touch your screen. e.g. a doctor in a surgery or while cooking and hands are dirty…

      0
  6. 36

    As happy as it would make me if my Leap was sent out in 3 days, unfortunately the company has said July 22nd (http://blog.leapmotion.com/post/48872742284/release-date-update).

There are those who have whined about the delay as if something insidious is going on being that they’ve taken absolutely no money whatsoever from anyone, which makes no sense. I guarantee you – Leap wants that cash. Had they charged people then delayed, then you’d have a point, but being that they’re just not ready to ship something and haven’t charged anyone, it’s a safe bet to say there’s a reason. I work in software development and if I had a nickel for every deadline we had to push back, I’d own Guatemala as a second home.

    Back to the UI experience. Regarding fatigue – this is a legitimate issue, but also one I think is a little over-hyped. You can rest your arms between motions and different motions will have different exertion levels. You also do not need to keep your arms in the air the entire time for most applications the same way you generally do not need to be touching a touch-screen constantly. Yes, I realize there are games and other things where this isn’t necessarily the case, but people talk like holding up their arms is somehow an unnatural event that nobody does when many, many, many jobs require more than sitting in a chair slumped over all day.

    I’d like to see the Leap be able to see at ‘desk level’ because then you could still use the mouse, only without the mouse. Just move your hand on the desk and ‘click’ away (although click and drag would require a different paradigm).

    All in all, I’m excited about this tech. Mice will still offer precision for the time being and this is V1.0 of a new tech. I’m betting it takes off and gets integrated into tablets and laptops providing more ways of interaction rather than replacing all modes. Given time, there will be competition which will create improvements and new ideas. Hybrids of touch and gesture will arrive and the concept of interacting with data will no longer be 2D (or a poor simulation of 3D) which I have personally witnessed back in the 90′s and early VR that people *really* grok the concept of easier than anyone ever thought.

The tech could fail too. It may be too imprecise, too uncontrollable, or simply not take off like expected. These are risks that are natural to any evolution, but they should be evaluated *after* the idea has been in use for a while, not before they ship the first units.

    3
    • 37

      I agree that the “fatigue” issue is a little overhyped.

      Just considering my general browsing today, I’m thinking how much “extra” work it is to push a button or click a tab, and it makes me wonder, for general purposes, wouldn’t I actually be using my arms less?

      I’d like to see the Leap tech eventually become integrated into the borders of my computer monitor, similar to the Wii sensor bar in order to get an accurate approximation of where my hands are at, and essentially create an invisible “field” around my monitor.

The mouse shouldn’t go away, because single-pixel manipulation is still necessary, but for things like scrolling/zooming/swiping I don’t see why that couldn’t be something gestures handle pretty easily.

      1
      • 38

        I don’t think it can be argued that sliding one’s wrist and elbow across a desk is more work than lifting one’s wrist and elbow off of the desk.

        But why do we need to point directly at these (sometimes gigantic) screens? Why not a slightly inclined surface, behind the keyboard, where our motions are translated onto the screen above them?

        0
        • 39

          I think the technology would make that possible, you’d just have to position it in a different place. Not a bad idea!

          0
  7. 40

    Just a minor correction (I think) – Leap Motion’s website has July 22nd listed as the official ship date, not June 27th.

    0
    • 41

      Thank you, it looks like the ship date got pushed back while this was in editorial. I’ll forward that along right away :-)

      0
  8. 42

    Google Glass is definitely the first step towards great things, as is Leap Motion. Honestly I think the potential in the Myo armband exceeds that of Leap, though, due to its wearable nature.

    I did a little experiment trying to come up with an alternative to the Google Glass armband swiping gestures a while ago ( http://www.youtube.com/watch?v=Nsuw2t7nZwc ) and I think that there are a lot of different technologies that can come together to replace the mouse and keyboard. If they ever get anywhere with subvocal speech recognition ( http://en.wikipedia.org/wiki/Subvocal_recognition ) that will go a long way towards eliminating the need for commands to wearable computers to be spoken aloud.

    I agree that holding your arms in the air for extended periods of time might feel a bit awkward now, but there are so many places these technologies can go, what we have now are the inevitable first stumbles. We need to take them in order to learn to keep our balance in the field of human-computer interaction.

    Personally I can’t wait to see what the future holds.

    0
    • 43

      WOAAH I’ve never seen the Myo armband, though that’s almost exactly what I was thinking of when I was talking about Leap being integrated with the smartwatch. Thanks for adding that to the conversation! The subvocal stuff is really cool too!

      0
  9. 44

    As a geek, the very idea of this becoming a reality is tantalising and exciting, but as a human, I’m not so sure it’s going to be that welcome?

    The scenario with the guy waiting for his friend and playing angry birds…

    So, he’s in a social situation, but he isn’t engaging in the ‘real world’ around him, instead, he’s immersed in a fictional construct whilst gaming. Arguably, he comes back into the real world to take some notes. That’s great.

    But hold on, he isn’t actually interacting with anything in the physical world around him. Maybe chatting with the person behind the counter, or – heaven forbid – chatting with a complete stranger at the table opposite, even if it’s just to discuss the weather.

    He has this virtual world attached to his body every waking moment, when the real world is, quite honestly, so much more amazing.

    I have this horrible vision of 100 people in a room seemingly talking to themselves and making random looking gestures – yet nobody is interacting with anyone else in that social space.

    Sure, the opposite could be true – it could enable people to engage more readily, shared experiences and massively multiplayer games – but we can already do this without needing the tech. When the network fails, our batteries drain, our devices break, will we be able to resort to the social interactions we’ve had for millennia? Or will we just sit there like dumb terminals?

    Don’t get me wrong here, I’m pointing out that this grand vision of future social interaction shouldn’t be viewed through rose-tinted glasses. We need tech downtime every day, as we’re already so dependent on devices to the detriment of society as a whole. But lets wear those rose tinted glasses for a minute – could they in fact aid social interaction to bring us back to a point where we’re not isolated from each other in a crowd?

    9
    • 45

      I think that would ultimately be the idea.

In the restaurant scenario, if you were to suspend disbelief and imagine these are just as much in the wild as smartphones are now, the guy wouldn’t necessarily be the only one. And it’s possible that you could end up in the room-of-100-flailing-people situation IF everyone has their stuff set to “private” mode.

      On the social level, people are still going to fundamentally be the same. I am the type of person who would talk to a complete stranger about the weather, my wife is not.

      Technology won’t change that, unless (and it’s a big unless) the software created for that technology allows us to have a shared experience – like a public game of angry birds. If someone wanted to play with you they could sit across the table and you could be shooting birds at each other, or in co-op mode they sit on your side of the table and you fight off a horde.

      That’s really the point of this article too, it’s not really about discussing the technology, but rather the kinds of experiences we could create using it.

      Because the experiences are “flat” right now, and you have to look away from everyone else to have them, I believe that taking them off the screen is exactly what the collective “we” has been trying to do since the internet’s become mainstream.

Now that it’s becoming a reality, we’re taking a hard look at the behaviors we’ve created around the “flat” anti-social/social experience, and wondering whether we’re really willing to not escape into our phones for even just a minute. I think that’s where most concerns really are.

      2
  10. 46

    My first thought is that it looks cool, but will really suck if an EMP hits. Of course, that’s probably true about everything…

    0
    • 47

      Even now, it’s difficult when the power goes out.

      Saw a quote that said “We’re more prepared for a zombie apocalypse than we are a power outage.” too true.

      2
  11. 48

    “UX on the future desktop will be about simulating physics and creating realistic environments, as well as tracking head, body and eyes to create intuitive 3-D interfaces, based on HTML5 and WebGL.”

    It really won’t. That might be the future of some applications, but the desktop is the desktop, and has nothing to do with HTML5 or WebGL. People have been trying to develop 3D-ized, virtualized, gesturized desktops for decades, and they just don’t work.

    -3
    • 49

      It takes a long time to hone and develop technology. Google Glass comes from a *very* long line of technology – yet all the efforts before have been entirely valid.

      Technology builds on top of itself. I’m willing to bet Henry Ford would’ve loved to have rounded off the corners on the Model-T Ford to make it more ‘natural’ and appealing, but the tech wasn’t available at the time – he used what he had and instead, he was a pioneer of rapid, low cost mass production.

      It’s a no-brainer that the potential futures this article discusses will come to pass in one way or another – whether you’ll be ready for it is quite honestly the challenge.

You’re right that past experimental technology has fallen short of the ultimate goal of bringing the ‘virtual world’ in line with the ‘real world’ – but at risk of repeating myself, the attempts have been massively important. Try, fail, try, fail, try – succeed.

      It’s the story of invention. 10% inspiration, 90% perspiration – just keep hacking away at it until you end up with something that can shake the world.

      I’d love to be dictating this reply by speech – but not even ‘hearable’ speech, just by moving my mouth without sound coming out, via a simple wearable interface. You can bet there’s someone, somewhere, working on just that.

      The desktop has always been a compromise – just like Henry Ford’s Model-T, it’s the best we could do with the current technology at the time…

      … but technologies are converging at an ever rapid pace, making wearable computing, gesture based computing and speech based computing ever more viable.

      Whether these advances will benefit mankind ends up being a moot point, we’re inventive, they will happen regardless and society will have to deal with whatever consequences come to pass… OR…

      We’ll be in a post-nuclear winter, but I won’t go into that one :D

      0
      • 50

        Amazing!

        So what you’re saying is this is what we’ve been moving towards all along, and it’s inevitable.

        Love your point about constant iteration, I think that’s why we’re finally here now. This stuff has existed in labs for at least 20 years, but now we’re really starting to see what’s possible. I’m excited and scared all at the same time :-)

        0
  12. 51

    Great article! It’s not only fun, but thought provoking to think of the various potential use cases of Leap Motion.

    What about the health use case with Leap Motion? The doctor or nurse won’t need to take off their gloves to touch the screen, looking for information or using a Leap enabled application/device. If there are Leap Motion ‘like’ devices on iPads/iPhones/Androids, you won’t touch the screen to sign to pay for something, and potentially pass germs.

I think the right way to look at Leap Motion, as has been said here in the comments and replies, is that it could complement and augment user interaction, not necessarily replace touch or mouse, but be another user interaction ‘tool’ in addition to touch, mouse, keyboard, voice, accelerometer, etc.

    0
  13. 52

    I see the next big technological development being centered around interactivity and making everyday tasks easier and stuff like this is at the centre of that. Imagine being able to say “OK Glass, show bus route x” and the route shows with price, times and a GPS location of the nearest bus allowing you to decide whether you have time to go to the shop. Or browsing the menu for the Starbucks down the road, placing an order and the store being notified how far away you are and when you walk through the door.

    It’s up to us as people to decide where to draw the line between the “real” world and the virtual world. Just like you ignore a call and use the mute gesture to shut the ringtone up when you’re already talking to somebody face to face. I’m sure we can adapt to know when to take the glasses off.

    I think if this particular device doesn’t take off, it’s simply a matter of time before a similar/more improved one does.

    1
  14. 53

Well-written article, but call me a traditionalist: I still think that, at least for desktop productivity UIs, input devices (mouse, pen, etc.) will not be replaced by motion controls. I think it’ll only enhance some stuff. For instance, some quick, loose swipes in between could be useful for certain interactions.
    Instead of travelling 10 years into the future like you do, i’d rather go a couple decades further and let my mind/brain control things :)

    0
  15. 55

    Ronnie Battista

    June 17, 2013 8:51 am

Thanks Tommy, lots of cerebral food for thought. I just recently contributed to an article in UXmatters on retail UX technology trends where I talk about Frictionless Commerce (see http://uxmatters.com/mt/archives/2013/06/retail-ux-strategy-trends.php if you’re interested). Very aligned with what you’re talking about here. I strongly believe NUI to be one of the most exciting areas of UX trending.

    “The best thing you can do right now is think”. I agree. What’s always interesting is seeing how we will collectively find novel ways to use what we see coming. I will never forget working in the mobile gaming industry in 2005 – 2006 and having a colleague share Jeff Han’s 2006 TED Talk. http://www.youtube.com/watch?v=QKh1Rv0PlOQ At the time, we had already seen Minority Report so we kinda sorta knew where things were headed. A year later the iPhone changed the game. I believe that in the next 3 to 5 years Google Glass, Leap Motion and others will herald in things that we’re not remotely thinking about now.

The iPhone/touchscreen interface gave us both practical and entertainment services that literally ‘changed the game’. Consider that if I were to mention that there is a worldwide phenomenon, raking in hundreds of millions a year, built on a touchscreen game that requires angry birds to be intuitively, quite naturally launched by a slingshot at strange structures protecting egg-stealing pigs… wouldn’t that sound a little odd? Not now of course, but if I had told you that a mere 3 1/2 years ago in early 2010 (well, technically December 2009), you might wonder whether my lid was screwed on tight.

I look forward to what 3 1/2 years from June 17, 2013 will look like, and what things we’ll be taking for granted that now are only just germinating in the minds of the ‘thinkers’ out there.

    0
    • 56

      That’s really what it’s all about, isn’t it?

      Thinking? Dreaming? Taking what’s in our heads and putting it out into the world as soon as it’s able to be a reality?

      Even if this paradigm never does come to exist, it’s fun to dream!

      2
  16. 57

    I think the problem with people fully adopting 3D interaction is due to the mediums we currently have available. Think about Windows and Mac OSX file structures; these work on a strictly two dimensional interface which allows for fluid and natural interaction when combined with a mouse (also functioning on a two dimensional plane).

    If we are to be adopting these three dimensional interactive tools, we need to change the entire on-screen interface to mimic this. At the moment we’re slapping three dimensional interaction on-top of a two dimensional interface; it’s not going to work.

    When using your PC casually (ie not gaming or working on 3D modelling etc) how often do you zoom, spin, rotate or move through the content? This Leap Motion device, whilst undoubtedly cool, has no practical use because the Windows and Mac OSX interfaces are in no way designed to replicate the interaction.

It will take a company to fully adopt this 3D interaction and replicate it throughout an entire product (as Apple has done with so many technologies in the past) – from the look and feel of the physical device, to the file structure and interface we see and interact with on screen, to the movements/gestures that create this interaction – before we fully digest and adopt this technology.

    0
    • 58

      That’s exactly right, and that’s the world this article is exploring :-) Thank you for bridging that gap!

      2
      • 59

        Exactly; whilst this technology is cool (and it is) it is only the beginning of the process. It’s strange that we have moved in this order though, creating the method before the means.

        0
  17. 60

    For a technologically more traditional immersion experience into how this emerging technology would shape our expectations of computing and fundamentally change the medium, one might enjoy reading Ready Player One (http://www.amazon.com/gp/aw/d/0307887448), though I’m not subscribing to its underlying premise of a world in societal decay.

    0
  18. 61

    “I fear the day that technology will surpass our human interaction. The world will have a generation of idiots”. Einstein supposedly speculated this long ago. Wonder how social UX is set to evolve in years to come. While virtual interactions may not replace natural interactions in every case, do virtual interactions really augment natural interactions? Or at least, is it really a goal of these futuristic technologies? The question is how best we can, if we really want to, use human-computer interaction to enhance human-human interaction.

    0
  19. 62

    It seems unlikely that the mouse will be going away any time in the next decade or three. Most of the scenarios you listed seem irritating and burdensome compared with the standard computer input with keyboard and mouse.

    Also, technology fragments as often as it replaces. We still have radio, which was never replaced by television, which still hasn’t been replaced by the Internet.

    The assumptions made here about technology adoption seem unrealistic. Many people I know still don’t even have smartphones, a few don’t even have cell phones. Programmers are still buying print books, while people who barely can use their computers are happily reading away on their Nooks and Kindles. Adoption of new technologies is getting more complex as people with different needs and preferences make different decisions, and there are too many technologies and gadgets for any one person to assimilate.

    Some of the technologies you speak of will get some degree of adoption, and the world of UX will continue to fragment and complicate.

    0
    • 63

You’re right, and what I’m suggesting in the article is a division of how we use the technology, or rather how the technology will allow us to use it differently. Television didn’t replace the radio; however, television did replace the radio serial. We’re no longer crowding around our radio sets at 7:30 to listen to “War of the Worlds”; we go to the movie theater, or worse, watch it on our cell phones. And it’s Spotify that plays while I work, not WERZ.

The data for social media usage already indicates that mobile will replace the desktop as a primary access point, and it’s a statistic that more people have access to smartphones than to clean drinking water: http://www.bloomberg.com/news/2013-03-21/world-with-more-phones-than-toilets-shows-water-challenge.html

      These aren’t assumptions about tech adoption, these are data points drawn from reputable studies. The truth is, if this were the 1990′s we’d probably be arguing over why a family would need more than one computer. Yet, now the average household has 5.7 internet connected devices http://thenextweb.com/insider/2013/03/18/npd-us-homes-now-hold-over-500m-internet-connected-devices-with-apps-at-an-average-of-5-7-per-household/

      Adoption of this tech, in some form or another, is inevitable… it’s what we’ve been moving towards all along. Whether it’s 5-10 years out like I’m suggesting is yet to be seen, but in accordance with Moore’s law, this is the next big shift, and it’s following the exact patterns (arguments and all) as mobile tech wave, and the home computer wave that came before it.

      0
  20. 64

I am rather surprised no one has brought up the implications of Adobe’s Project Mighty. http://xdce.adobe.com/mighty/ It is improving upon interactions we already know and are comfortable with.

    But, to the point that using a mouse is awkward: think about the first time you learned to use a pencil or any other tool. None of these things are “natural,” but they are what led to our evolution. It is the ability to adapt and create learned behaviors that will change the future of interaction, not just the technology.

  21.

    As a designer at Omek who has been working with gesture recognition tech for the last three years, I would like to add some points to the discussion.

    1. Multi-modality is the key to future interactions. I believe that the combination of controls and interfaces will produce the most intuitive interface.
    Brain control is a little far off, but speech and gesture recognition are very close (see Intel’s Perceptual Computing SDK). A rough sketch of the idea follows this list.

    2. Google Glass vs. Vuzix AR:
    Google Glass’s UX is very 2D. There is no easy way (if any) to create the augmented Angry Birds game you described; with Vuzix’s glasses you could (but it would cost you…). Consider that to see the augmented content with Google Glass you need to look up, so it’s both an advantage and a disadvantage, depending on what you want. I would rather have the world augmented at all times, but have control over each and every UI element popping up on my screen (so they won’t be able to pop up ads without my consent).

    3. Some asked about a keyboard with a touchpad. There are actually lots of these… and there’s one that lets you program the keys themselves, which is pretty cool.

    4. As for a cool casual AR game enhancement: what about Plants vs. Zombies where two players play on a table, one versus the other? One defender and one attacker.
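    Picking up on point 1, here is a minimal sketch of multi-modal fusion, assuming a generic TypeScript setup: speech supplies the command and the most recent pointing gesture supplies the spatial target. Every name and number here (MultiModalFuser, GestureEvent, the 1,500 ms window) is a hypothetical illustration, not part of Intel’s SDK or any other real API.

        // A hypothetical sketch of multi-modal input fusion: pair a spoken
        // command with the most recent pointing gesture. None of these types
        // exist in any real speech or gesture SDK.

        type Point3D = { x: number; y: number; z: number };

        interface GestureEvent {
          kind: "point" | "grab" | "swipe"; // the gesture that was recognized
          position: Point3D;                // where the hand was pointing
          timestamp: number;                // milliseconds
        }

        interface SpeechEvent {
          phrase: string;    // e.g. "open that"
          timestamp: number; // milliseconds
        }

        class MultiModalFuser {
          private lastGesture: GestureEvent | null = null;

          // maxGapMs: how long a gesture stays fresh enough to pair with speech.
          constructor(private maxGapMs = 1500) {}

          onGesture(gesture: GestureEvent): void {
            this.lastGesture = gesture;
          }

          // Returns a fused command when speech arrives close enough in time
          // to the last gesture; otherwise null (speech alone has no target).
          onSpeech(speech: SpeechEvent): { phrase: string; target: Point3D } | null {
            const gesture = this.lastGesture;
            if (gesture && Math.abs(speech.timestamp - gesture.timestamp) <= this.maxGapMs) {
              return { phrase: speech.phrase, target: gesture.position };
            }
            return null;
          }
        }

        // Usage: point at something, then say "open that" within 1.5 seconds.
        const fuser = new MultiModalFuser();
        fuser.onGesture({ kind: "point", position: { x: 0.2, y: 0.5, z: 0.1 }, timestamp: 1000 });
        console.log(fuser.onSpeech({ phrase: "open that", timestamp: 1800 }));

    The same pattern would extend to other pairings (gaze plus speech, or gesture plus a keyboard modifier) by swapping which event supplies the target and which supplies the intent.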

  22.

    Your opening sentence says it all. Because two experimental technologies without an actual real-life application yet are (almost, in the case of Leap) out, it is obvious that the way we interact with computers, which has survived every experimental technology for 40 years so far, will become obsolete.

  23.

    Tommy, I just want to share that I am a Glass Explorer, and today I was driving and decided to make a lunch reservation. So I did a head tilt to activate my Glass display, said “OK Glass” and googled a specific restaurant. The restaurant came on screen and I tapped the frame to place a call. An employee answered and took my reservation. It was that simple. Interactions like this using Glass are becoming my normal way of life, and they are so intuitive that it is easy to adapt.

    I really appreciate what you have shared in your article and agree with your conclusions on the shift in the way we interact with machines and how wearable technology will affect our social experience in real world scenarios.

    •

      That is incredible! Thank you so much for sharing that experience.

      It seems minor, but within it there seems to be so much potential in the way we interact with everything :-)

  24.

    The fatigue issue is real.

    I have a PS3 Move setup and bought the Move rifle attachment that Sony sells for shooting games.

    Over time it gets harder and harder to hold the rifle up, because the games require specific motions and holding it in specific positions.

    Since the experience of playing an intense shooting game makes you lose track of time, eventually that equals pain.

  25.

    I’m just not really sold on this for the future.

    1. Google Glass requires speech to use. There are situations where you would want to use the technology and not want to talk, or if you talked you would look silly/crazy (cordless headsets anyone?).

    2. Of course, everyone else is saying this, but gestures are tiresome. I have a Wii, and the WiiMote is really difficult to control and gets tiring. Again, there’s also the factor of looking silly if someone else is watching you.

    3. Can you really make motion UI intuitive? I watched the TED Talk video, and it didn’t seem that intuitive to me. There’s just a black screen, and you’d have to learn what the commands are… whereas when you pick up an iPad, you instinctively know how to use it.

    Honestly, I think the future is going to be mostly touch-based, with perhaps some “instinctual” gestures. I think we’re going to see the desktop move to a table top (or perhaps a slight angle, like a drafting table), where you use your fingers, hands, objects or a pen to control it.

  26.

    I also think we’re going to see a lot of things that work and sync together… Pebble is a good example of that. Mobile platforms are the perfect way to sync all of these things together in ways we never could before. You’re seeing it with people controlling the lighting in their house, remotely, through their iPhone. The smartphone is more than just a phone; it’s integrating technology into every aspect of our lives.

  27.

    I do think fatigue will be an issue; I can’t imagine moving my arms 8 hours a day, like being at the gym.
    As UX designers, we need to put the end user’s comfort first. So Google Glass and the smartwatches out there are good while I’m walking, but while working we will be in a smart chair or at a multitouch table, because that is the position we want to be in.
    An even better option is using your thoughts to interact with your virtual environment, such as: http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html
    This is truly the future, one that will allow both typical users and those with special needs to interact with the virtual world. Brain waves and thoughts are faster than our arms and less exhausting.

  28.

    Honestly, this sounds kinda ridiculous.

    “In the game of chess below … you’ll be able to ‘reach in’ and grab a piece, as you watch your friend stress over which move to make next.”

    Really?

    If you’re actually sitting across from your friend, then um, you can do that right now.

  29.

    OK, I’m a bit late to the conversation here, but I just wanted to add a thought about the fatigue issue.

    Different users today configure their mouses (mice?) differently in terms of speed and other parameters. Similarly, a 3D interaction could be amplified or transformed. It does not need to be your hand moving in cyberspace; it could be your virtual hand. The movement of that hand could be an amplified version of any physical gesture, according to your own set-up.

    That is just the quick fix. Obviously, interacting in 3D will be done using many other technologies, including eye tracking, “thought control”, etc. So the issue of muscle fatigue, although relevant, will not stand in the way in the long run.
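    To make the amplified virtual hand above a little more concrete, here is a minimal sketch, again assuming a generic TypeScript setup rather than any real motion-tracking API: a per-user gain scales small physical movements into larger virtual ones, and a simple smoothing term damps the jitter that amplification would otherwise exaggerate. The VirtualHand class, its settings and the numbers are all illustrative assumptions.

        // A hypothetical sketch of an "amplified virtual hand": real hand
        // offsets are scaled by a per-user gain and lightly smoothed, so big
        // on-screen motions need only small, low-fatigue physical gestures.

        type Vec3 = { x: number; y: number; z: number };

        interface HandSettings {
          gain: number;      // e.g. 3 means 1 cm of real motion -> 3 cm virtual
          smoothing: number; // 0..1; higher values damp jitter more strongly
        }

        class VirtualHand {
          private smoothed: Vec3;

          constructor(private origin: Vec3, private settings: HandSettings) {
            this.smoothed = { ...origin };
          }

          // Map a physical offset (relative to a rest pose) into virtual space.
          update(physicalOffset: Vec3): Vec3 {
            const { gain, smoothing } = this.settings;
            const target: Vec3 = {
              x: this.origin.x + physicalOffset.x * gain,
              y: this.origin.y + physicalOffset.y * gain,
              z: this.origin.z + physicalOffset.z * gain,
            };
            // Exponential smoothing keeps the amplified motion from feeling twitchy.
            const a = 1 - smoothing;
            this.smoothed = {
              x: this.smoothed.x + (target.x - this.smoothed.x) * a,
              y: this.smoothed.y + (target.y - this.smoothed.y) * a,
              z: this.smoothed.z + (target.z - this.smoothed.z) * a,
            };
            return this.smoothed;
          }
        }

        // Usage: a sustained 2 cm wrist offset converges toward a 6 cm virtual offset.
        const hand = new VirtualHand({ x: 0, y: 0, z: 0 }, { gain: 3, smoothing: 0.5 });
        console.log(hand.update({ x: 0.02, y: 0, z: 0 }));

    In practice, the gain and smoothing values would live in the same kind of per-user configuration the comment describes, just as mouse speed and acceleration do today.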

