What would you do if Minority Report interfaces became real?

Interesting find…

I think a mix of interfaces may be useful for a system… like gesture + voice (Siri).

Here is an app, Flutterapp, that lets you play and pause your music with a hand gesture if you have a webcam.
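
For anyone curious how that kind of thing works under the hood, here's a minimal sketch of the same idea in Python with OpenCV: treat a big burst of webcam motion as a play/pause toggle. This is not Flutterapp's actual code; the motion threshold and the cooldown are made-up starting points, and the print call stands in for whatever player you'd hook up.

```python
import time
import cv2  # pip install opencv-python

cap = cv2.VideoCapture(0)            # default webcam
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

playing = False
last_toggle = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)   # pixels that changed since last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev = gray

    # A big wave lights up a lot of pixels; the 1 s cooldown stops the
    # toggle from firing on every frame of the same wave.
    if cv2.countNonZero(mask) > 50_000 and time.time() - last_toggle > 1.0:
        playing = not playing
        last_toggle = time.time()
        print("play" if playing else "pause")  # hook your player in here

    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) == 27:         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```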

Eye tracking could be a good solution for minimal input effort (widen your eyes to select or zoom in?).
But I have to admit the idea creeps me out when I think about the possibility of a heartless future robotic A.I. using it to interact with us, understand us, or even pre-empt and influence us!
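
On the "widen your eyes to select" idea, here's a rough sketch of how it could be detected using the well-known eye aspect ratio (EAR) over six eye landmarks from any face tracker. The landmark source and the 0.35 threshold are assumptions for illustration, not values from a real product:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # p1/p4 are the eye corners; p2, p3 the top lid; p5, p6 the bottom lid.
    # A widened eye raises the ratio of lid gap to eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

WIDE_EAR = 0.35  # above this, treat the eye as deliberately widened

# dummy landmark points standing in for a face tracker's output
landmarks = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
if eye_aspect_ratio(*landmarks) > WIDE_EAR:
    print("select")  # or zoom in
```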

Funny, my wife is a sign language teacher, and that's actually one of the most appropriate uses I can think of at the moment. It's a very complex problem, but a number of schools have already been using the Kinect and similar devices to try to develop ASL translators, which I think is pretty sweet considering how much work has gone into voice/text translation. It's also a challenge since ASL has a lot of regional variation and also requires facial recognition, since facial expressions are part of the language.

I think motion-capture interfaces are the future. They're already seeded in gaming and entertainment. There are fidelity and resolution issues, but those will be resolved sooner rather than later. For true productivity there are ergonomic issues, like supporting your arms; trying to use a touchscreen monitor is exhausting. But there are solutions, like Nxakt's desk-mounted tracker. The sensor could even live in the display bezel; right now Samsung puts them right on top of the TV, like the Wii sensor.

I think there are some really interesting interaction possibilities though: sitting vs. standing, arm waving vs. finger wagging, using your feet, like in a car (a very complicated 3D interface). One current disconnect, though, is the notion of using macro and micro 3D gestures to navigate and manipulate 2D UI menus. I think a big breakthrough will come when motion-captured interfaces are conceived and structured in three dimensions. We're all trained in three-dimensional hierarchies, so we know how to create dominant, sub-dominant, and subordinate 3D form relationships. Future UI designers will need to consider information organization in three dimensions, not just two. Top to bottom, left to right, front to back may become upper-left background to lower-left foreground. Who knows, maybe in the future we'll have to print rapid prototypes of our 3D interfaces to check the ergonomics…

I wonder about this. I would argue that the majority of the population can't think in 3D. A complex 3D interface can surely be learned… but it is going to scare the living snot out of a LOT of people.

I'm not so sure. There are a lot of people, specifically Gen Xers and digital natives, who are already well versed in 3D manipulation simply because of 3D gaming. For them it will be almost second nature; only the input device has changed.

This is a good essay by Bret Victor (not me) from last year about current UIs being "pictures under glass".

jon, Greenman, I think it comes down to better signals and affordances to inform usage. 2D or 3D interfaces can be really challenging, and 2D and 3D input methods can be just as challenging. I think we're in the infancy of mass adoption of these technologies, and as 3D designers we're well positioned to influence and shape the future paradigms of input and interaction.

I considered that angle as well… the "younger generation" is surely going to drive this forward. I was thinking more about short-term uptake. Most people over 40-ish are not ready for this, methinks.

I would highly suggest stretching before using one of these interfaces; wouldn't want to pull something.

Just stumbled upon this article: http://imprint.printmag.com/daily-heller/surgical-design-for-surgical-equipment/

It doesn’t say much about the actual technology in the article, but these two pictures tell most of the story:

The vision: [image]

What they're doing: [image]

From looking through some more info on the company's site, it seems like the purpose is to reduce the scale at which surgery can be performed, though I could also see it being useful at full scale, using digital filters to reduce any hand shaking.
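
To make the "digital filters" point concrete, here's a toy sketch of the simplest version of the idea: smooth a noisy hand position with an exponential moving average before it drives the instrument. Real surgical systems use far more sophisticated filtering; the alpha value here is an illustrative guess.

```python
# Exponential moving average: each output moves only `alpha` of the way
# toward the new raw sample, so fast tremor averages out while slow,
# deliberate motion passes through.
def make_smoother(alpha=0.15):
    state = {}
    def smooth(x, y, z):
        if not state:
            state.update(x=x, y=y, z=z)
        else:
            state["x"] += alpha * (x - state["x"])
            state["y"] += alpha * (y - state["y"])
            state["z"] += alpha * (z - state["z"])
        return (state["x"], state["y"], state["z"])
    return smooth

smooth = make_smoother()
# tremor shows up as small, fast oscillations around the intended path
for raw in [(0.0, 0.0, 0.0), (0.4, -0.3, 0.1), (-0.2, 0.5, -0.1)]:
    print(smooth(*raw))  # jitters far less than the raw input
```

Scaling the smoothed motion down by a constant factor before it reaches the instrument is what would give you the "smaller than hands can do" part.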

It seems (from the first picture) that they would like to eventually eliminate the physical input parts of the device; however, I'm sure there would be force-feedback and other issues in that case.


Many present applications seem to be developed or extended from existing physical actions.
If that is a deliberate approach, to provide an exercise or play scenario for the user, then it fits right in (Kinect games).
But for new areas, interactions should shed the feeling of yesterday; they should look effortless and magical (Apple's pinch-to-zoom).
That will be the sweet spot of acceptance.
A function presently performed with a finger movement is not going to be replaced by a whole-hand movement.

"Sweet spot" for whom? In this age, "magical" conjures up full mind-reading ability (no wires, no halo, and 90% reliability), not arm waving.

I watched "Cloud Atlas" last night, and there are a lot of 'Minority Report'-style interfaces in it. What struck me was that these interfaces are used in movies to move the story forward: a third person can see and understand what is going on, rather than having lots of text or spoken exposition. This style of interface may be best used for presentations or collaborative work rather than one person at a workstation.

I think for comfort, ergonomics, and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) rests against a tabletop that is lost when a hand is waving around in front of you.

My uni is researching this:
http://wearables.unisa.edu.au/projects/digital-foam/

Zippy, can you play music on the phone with full mind-reading ability? … :arrow_right: the answer is NO.
These interfaces are developed for specific functionality…

Each one stands in a particular relationship to a task.

I think these types of things (for the general public) are best used in conjunction with the mouse and keyboard, not as a replacement for them. Imagine, when reading a website, swiping your fingers to the left to go back to the previous page, or maximizing a video by spreading your fingers apart. I use the FireGestures add-on for Firefox and find its shortcuts really useful for browsing: I just right-click-drag to the left to go back a page, with no need to move the cursor to the 'back' button or take my hand off the mouse to press Backspace on the keyboard. This could be similar to finger gestures on tablets.
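
Roughly, what an add-on like that has to do under the hood (a sketch of the idea, not FireGestures' actual code) is classify the drag vector into a named command:

```python
def classify_drag(dx, dy):
    """Map a right-button drag vector to a command by its dominant axis."""
    if abs(dx) >= abs(dy):
        return "back" if dx < 0 else "forward"
    return "scroll_up" if dy < 0 else "scroll_down"

assert classify_drag(-120, 10) == "back"    # drag left: previous page
assert classify_drag(90, -15) == "forward"  # drag right: next page
```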

I don't play music on my phone at all, I don't text on my phone, I don't surf on my phone. I use my phone to send and receive audio communications.

From what I understand, the Minority Report interface was actually designed by the people behind a company called Oblong; check out their Mezzanine system. The guy from Oblong said that is what they've done with that movie concept.

A lot of the things we would have called people out for as being too blue-sky, or not rooted in reality, now exist or are beginning to exist. I think the power of those blue-sky ideas is that they give someone who actually knows how to program a goal to shoot for. They see this stuff and think, "That's cool, and I might actually know how to figure that out, so I'll give it a shot."

Like the thing in Iron Man, where he takes control of the monitors by taking a picture of them: that's real. I saw a girl present that at a conference this summer; she made it happen.

I see the Wii, Wii U, and Kinect as mass-market versions of the Minority Report interface, and as with the Wii and Kinect, the scope of their usability is that they are great for select applications, but they don't do everything better. What they do do better, though, is a vast improvement. As long as they make things faster, easier, or more fun, they are amazing.

This is an interesting watch…mainly because it shows you what happens when things start going wrong. :slight_smile:

A mouse doesn't have any real "failure" mode… if you accidentally click something, it's usually pretty easy to avoid performing a mistaken action.

With gestures, even a slight twitch of the hand or a co-worker walking up behind you could launch your model off into space. I think figuring out an intuitive way of dealing with those exceptions still requires a lot of thought. For example, to "wake up" your Kinect you usually need to yell at it or wave.
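
One way to handle that (a sketch of the idea, not how the Kinect SDK actually does it) is a gate that ignores motion below a dead zone and only accepts commands for a few seconds after an explicit wake-up gesture. All the thresholds here are placeholders:

```python
import time

ENGAGE_TIMEOUT = 5.0  # seconds of inactivity before disengaging again
DEAD_ZONE = 0.05      # motions this small never count as commands

class GestureGate:
    def __init__(self):
        self.engaged_until = 0.0

    def on_wave(self):
        # the explicit wake-up gesture: arms the gate for a while
        self.engaged_until = time.time() + ENGAGE_TIMEOUT

    def accept(self, magnitude):
        if magnitude < DEAD_ZONE:
            return False  # a twitch, not a command
        if time.time() > self.engaged_until:
            return False  # not engaged: a passing co-worker can't fire it
        self.engaged_until = time.time() + ENGAGE_TIMEOUT  # stay engaged
        return True

gate = GestureGate()
print(gate.accept(0.4))   # False: nothing counts until you wave
gate.on_wave()
print(gate.accept(0.4))   # True: deliberate motion while engaged
print(gate.accept(0.01))  # False: below the dead zone
```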

It's not to say we won't get there, but I think it's going to be at least a few years before the gestures and technology get robust enough to use practically. I'm also a firm believer that eliminating the sensation of touch from the equation isn't a good thing. It's one of our most valuable senses, especially as designers, and eliminating it from the interface is, IMO, a step in the wrong direction.

Good observation there, Cyberdemon…

I have tried this once… I like the Kinect; it's a good direction.
But it fails because there is no feedback loop when it stops following you.
Maybe add a little indication on the screen suggesting "you are on or off the grid".
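
That feedback loop could be as simple as a watchdog that flips an on-screen indicator whenever tracking frames stop arriving. A sketch, with the 0.5 s timeout being an assumption rather than any Kinect spec:

```python
import time

class TrackingIndicator:
    TIMEOUT = 0.5  # seconds without a skeleton frame before we call it lost

    def __init__(self):
        self.last_frame = 0.0

    def on_skeleton_frame(self):
        # call this every time the sensor reports your skeleton
        self.last_frame = time.time()

    def status(self):
        on_grid = (time.time() - self.last_frame) < self.TIMEOUT
        return "you are ON the grid" if on_grid else "you are OFF the grid"

indicator = TrackingIndicator()
print(indicator.status())      # OFF the grid until a frame arrives
indicator.on_skeleton_frame()
print(indicator.status())      # ON the grid
```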