What would you do if Minority Report interfaces became real?

As designers, we are always talking about Minority Report-type 3D gestural interfaces… what would you do with the technology if it were here?

Because it is pretty damn close now:

We’d all have awesomely jacked triceps…

Seriously though, as cool as the Iron Man/Minority Report stuff is, if you haven’t played with a Kinect already you’d be surprised how physically demanding it is. Keeping your arms constantly moving around requires a lot of effort, and for those of us who are desk jockeys I’m not sure it would make as big of an impact in our daily workflow.

It’s exciting to see it develop, and I think there are specific applications where this technology makes a ton of sense, though I’m not sure we’ll be pumping CAD with all our appendages anytime soon.

I fully agree… my problem with these interfaces is that I’ve never seen a real use for them. Now that it is becoming real, between Kinect and PC interfaces like Leap Motion… what will it REALLY get used for? Which is why I am asking the question.

I like their vision of “sculpting” with it. Software that would let me mold a virtual blob of clay, or… ahem… other hands-on types of interfaces.

Somewhat related…

I had the chance to watch the Total Recall remake over the holidays. Again some very cool interface concepts - particularly around personal communication devices.
Some very Blade Runner-esque set design too.

Blade Runner - Fifth Element - Total Recall, in order of progression of that environment. Total Recall stacks twenty years of depth and effect onto it and gives you a nice long look at it; I have to watch it again just for that part. Great design team on that film.

The Leap controller looks like fun, with some definite applications for advertising and trade shows, machine controls, and passive interaction. I agree with Mike that holding arms up in the air to work is not the answer; I think the solution depicted in Minority Report and Iron Man comes more from the ease of creating that effect in a movie with overlays.

One answer is to have tables with holes cut out over the motion sensors, so we can rest our forearms on the table and manipulate our hands in the space over the void with the controller underneath. I’ll get one of those controllers and experiment with that. Alternately, two voids for right and left hands to double the control on each side of a keyboard.

SHA256 Hash of above text: 16b2e7930342de97593ff2abb6facb778c49a44cc047939ab37c3e7b3c37e714
Time stamp of above hash.

CAD I don’t see going beyond object rotation and spaceball replacement unless I learn sign language.

Virtual clay has been theorized in many forms over the years. Having worked extensively with clay I can say that the tactile feedback and material pushback is more than 70% of the development experience. When we can get to the point of bending physical splines in the air and having them imported I think that will be a big leap forward for shape development.

Interesting find…

I think a mix of interfaces may be useful for a system… like gesture + voice (Siri).

Here is an app, Flutter, to play & pause your music with a hand gesture if you have a web camera.
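For the curious, here is a toy sketch of how that kind of webcam gesture control can work at its simplest. A real app like Flutter would grab frames with OpenCV and recognize a specific hand pose; this illustrative version (all names invented) just treats frames as 2D lists of grayscale values and toggles play/pause when enough of the image changes between frames, i.e. a hand wave:

```python
# Toy webcam-gesture sketch: frame differencing over plain 2D lists.
# A real implementation would use cv2.VideoCapture and a proper hand
# detector; this only illustrates the toggle-on-motion idea.

def motion_fraction(prev, curr, pixel_thresh=30):
    """Fraction of pixels whose intensity changed by more than pixel_thresh."""
    changed = total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > pixel_thresh:
                changed += 1
    return changed / total

class GestureToggle:
    """Toggle playback whenever a large share of the frame changes."""
    def __init__(self, motion_thresh=0.2):
        self.motion_thresh = motion_thresh
        self.playing = False
        self.prev = None

    def feed(self, frame):
        # Compare against the previous frame; a big change = a wave.
        if self.prev is not None and motion_fraction(self.prev, frame) > self.motion_thresh:
            self.playing = not self.playing
        self.prev = frame
        return self.playing
```

In practice you would also debounce (ignore motion for a second after a toggle) so one wave doesn’t fire twice.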

Eyeball tracking could be a good solution for least input effort (widen eyes to select/or zoom in?).
But I have to admit the idea creeps me out if I think about the possibilities of a heartless future robotic A.I. using it to interact with, understand, or even pre-empt and influence us!?
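On the “widen eyes to select” idea: eye-tracking research commonly uses the eye aspect ratio (EAR), computed from six landmark points around the eye, to measure how open it is. A hedged sketch (a real system would get the landmarks from a face tracker such as dlib or MediaPipe, and the 0.35 threshold is an illustrative guess, not a calibrated value):

```python
# Eye aspect ratio (EAR) over six (x, y) landmarks:
# p1/p4 are the eye corners, p2/p3 the top lid, p6/p5 the bottom lid.
from math import dist  # Python 3.8+

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

def eyes_widened(ear, threshold=0.35):
    """Fire a select/zoom when the eye opens beyond a baseline (demo threshold)."""
    return ear > threshold
```

A relaxed eye sits around EAR 0.2–0.3 in the literature; blinks drop it toward 0, and deliberately widened eyes push it higher, which is what this would trigger on.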

Funny, my wife is a sign language teacher and that’s actually one of the most appropriate uses I can think of at the moment. It’s a very complex problem, but there have already been a number of schools using the Kinect and similar devices to try to develop ASL translators, which I think is pretty sweet considering how much work has gone into voice/text translation. It’s also a challenge since ASL has a lot of regional influence and also requires facial recognition, since facial expressions are part of communicating.

I think motion-capture interfaces are the future. They’re already seeded in gaming and entertainment. There are fidelity and resolution issues, but those will resolve sooner rather than later. For true productivity there are ergonomic issues, like supporting your arms. Trying to use a touchscreen monitor is exhausting. But there are solutions, like Nxakt’s desk-mounted tracker. It could even live in the display bezel; right now Samsung puts them right on top of the TV, like the Wii sensor.

I think there are some really interesting interaction possibilities though: sitting vs standing, arm waving vs finger wagging, using your feet, like in a car (a very complicated 3D interface). One current disconnect, though, is the notion of using macro and micro 3D gestures to navigate and manipulate 2D UI menus. I think a big breakthrough will come when 3D motion-captured interfaces are conceived and structured in three dimensions. We’re all trained in three-dimensional hierarchies, so we know how to create dominant, sub-dominant, and subordinate 3D form relationships. Future UI designers will need to consider information organization in three dimensions, not just two. Top to bottom, left to right, front to back may become upper-left background to lower-left foreground. Who knows, maybe in the future we’ll have to print rapid prototypes of our 3D interfaces to check the ergonomics…

I wonder about this. I would argue that the majority of the population can’t think in 3D. The idea of a complex 3D interface surely can be learned…but it is going to scare the living snot out of a LOT of people.

I’m not so sure; there are a lot of people, specifically Gen Xers and digital natives, who are already well versed in 3D manipulation simply because of 3D gaming. For them it will be almost second nature; just the input device has changed.

This is a good essay by Bret Victor (not me) from last year about current UIs being about “objects under glass”.

jon, Greenman, I think it comes down to better signals and affordances to inform usage. 2D or 3D interfaces can be really challenging. 2D and 3D input methods can also be really challenging. I think we’re at the infancy of mass adoption of these technologies, and as 3D designers we’re well positioned to influence and shape the future paradigms of input and interaction.

I considered that angle as well…the “younger generation” is surely going to drive this forward. I was thinking more in the short term uptake. Most over 40ish are not ready for this, methinks.

I would highly suggest stretching before going to use one of these interfaces, wouldn’t want to pull something.

Just stumbled upon this article: http://imprint.printmag.com/daily-heller/surgical-design-for-surgical-equipment/

It doesn’t say much about the actual technology in the article, but these two pictures tell most of the story:

The vision:

What they’re doing:

From looking through some more info on the company’s site, it seems like the purpose is to reduce the scale at which surgery can be performed, though I could also see it being useful at full scale, using digital filters to reduce any hand shaking.
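The “digital filters to reduce hand shaking” part is simpler than it sounds in its most basic form. An illustrative sketch, assuming the system streams tracked (x, y) hand positions: an exponential moving average acts as a low-pass filter, passing the slow deliberate motion and damping the fast tremor. Real surgical systems use far more sophisticated tremor filtering; `alpha` here is an arbitrary demo value.

```python
# Low-pass filter a stream of (x, y) hand positions with an
# exponential moving average. Small alpha = heavy smoothing
# (more tremor removed, but more lag behind the hand).

def smooth_positions(samples, alpha=0.2):
    out = []
    sx, sy = samples[0]  # start the filter at the first sample
    for x, y in samples:
        sx += alpha * (x - sx)  # move a fraction toward the new sample
        sy += alpha * (y - sy)
        out.append((sx, sy))
    return out
```

Fed a jittery trace oscillating around a point, the output stays much closer to that point than the raw input does, which is exactly the tremor-reduction effect.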

It seems (from the first picture) that they would like to eventually eliminate the physical input parts of the device; however, I’m sure there are force-feedback and other issues if that were the case.


Many present applications seem to be developed or extended from existing physical use.
If this is a deliberate approach to provide an exercise or play scenario for the user, then it fits. (Kinect - games)
But for new areas, they should feel as familiar as yesterday, yet look effortless & magical. (Apple - zoom)
That will be the sweet spot of acceptance.
A function presently performed by a finger movement is not going to be replaced by a hand movement.

“Sweet spot” for whom? In this age, “magical” conjures up full mind-reading ability (no wires, no halo, and 90% reliability), not arm waving.

I watched “Cloud Atlas” last night and there are a lot of ‘Minority Report’-style interfaces in it. What struck me was that these interfaces are used in movies to advance the story: the third person can see and understand what is going on, rather than have lots of text or spoken exposition. This style of interface may be best used for presentations or collaborative work rather than one person at a workstation.

I think for comfort, ergonomics and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) is against a tabletop, which is lost when a hand is waving around in front of you.

My uni is researching this:

Zippy, can you play music on the phone with full mind-reading ability? :arrow_right: The answer is NO.
These interfaces are developed for specific functionality…

There the gesture bears a direct relationship to the task.

I think these types of things (for the general public) are best when used in conjunction with the mouse and keyboard, not as replacements for them. Imagine when reading a website you just swipe your fingers to the left to go back to the previous page, or maximize a video by opening up your fingers. I use the FireGestures add-on for Firefox, and I find it really useful for internet browsing shortcuts. I just right-click drag to the left to go back a page; no need to move the cursor to the ‘back’ button or take my hand off the mouse to press ‘backspace’ on the keyboard. This could be similar to finger gestures on tablets.
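Under the hood, that kind of mouse-gesture add-on can be surprisingly simple. A minimal sketch, with invented command names, of how a right-click drag might be classified: take the net displacement of the pointer trail while the button is held and map the dominant axis to a command.

```python
# Classify a pointer trail (list of (x, y) positions recorded while
# the gesture button is held) by its net displacement. Note screen
# coordinates: y grows downward, so negative dy means "up".

def classify_gesture(trail):
    dx = trail[-1][0] - trail[0][0]
    dy = trail[-1][1] - trail[0][1]
    if abs(dx) >= abs(dy):                     # horizontal swipe dominates
        return "back" if dx < 0 else "forward"
    return "scroll_up" if dy < 0 else "scroll_down"
```

Real add-ons go further (they match multi-segment strokes like “left then up”, and ignore tiny drags below a distance threshold), but the core is just this direction test.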