What would you do if Minority Report interfaces became real?

I wonder about this. I would argue that the majority of the population can’t think in 3D. The idea of a complex 3D interface surely can be learned…but it is going to scare the living snot out of a LOT of people.

I’m not so sure. There are a lot of people, specifically Gen Xers and Digital Natives, who are already well versed in 3D manipulation simply because of 3D gaming. For them it will be almost second nature; only the input device has changed.

This is a good essay by Bret Victor (not me) from last year about current UIs being “pictures under glass”.

jon, Greenman, I think it comes down to better signals and affordances to inform usage. Both 2D and 3D interfaces can be really challenging, and so can 2D and 3D input methods. I think we’re at the infancy of mass adoption of these technologies, and as 3D designers we’re well positioned to influence and shape the future paradigms of input and interaction.

I considered that angle as well…the “younger generation” is surely going to drive this forward. I was thinking more about short-term uptake. Most people over 40-ish are not ready for this, methinks.

I would highly suggest stretching before going to use one of these interfaces, wouldn’t want to pull something.

Just stumbled upon this article: http://imprint.printmag.com/daily-heller/surgical-design-for-surgical-equipment/

It doesn’t say much about the actual technology in the article, but these two pictures tell most of the story:

The vision:

What they’re doing:

From looking through some more info on the company’s site, it seems the purpose is to reduce the scale at which surgery can be performed, though I could also see it being useful at full scale, using digital filters to reduce any hand shaking (a rough sketch of that idea is below).

It seems (from the first picture) that they would like to eventually eliminate the physical input parts of the device, though I’m sure there would be force-feedback and other issues if that were the case.
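On the digital-filter point: smoothing out hand tremor can be as simple as a low-pass filter on the tracked position. Here is a minimal sketch using an exponential moving average, purely my own illustration in TypeScript and not the company’s actual algorithm:

```typescript
// Minimal tremor smoothing via an exponential moving average (EMA).
// Illustrative only; not the filter this surgical system actually uses.
interface Point3D { x: number; y: number; z: number; }

class TremorFilter {
  private smoothed: Point3D | null = null;

  // alpha near 0 = heavy smoothing (more lag); near 1 = light smoothing.
  constructor(private alpha: number = 0.2) {}

  // Feed in each raw tracked position; get back the smoothed one.
  update(raw: Point3D): Point3D {
    if (this.smoothed === null) {
      this.smoothed = { ...raw }; // first sample: nothing to smooth yet
    } else {
      this.smoothed = {
        x: this.smoothed.x + this.alpha * (raw.x - this.smoothed.x),
        y: this.smoothed.y + this.alpha * (raw.y - this.smoothed.y),
        z: this.smoothed.z + this.alpha * (raw.z - this.smoothed.z),
      };
    }
    return this.smoothed;
  }
}
```

The trade-off is lag: the heavier the smoothing, the more the tool trails the hand, which is presumably why a real surgical system would use something more adaptive.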


Many present applications seem to be developed or extended from an existing physical use.
If that is a deliberate approach to provide an exercise or play scenario for the user, then it fits well (Kinect: games).
But for new areas, the interactions should still feel familiar from day one. They should look effortless and magical (Apple: zoom).
That will be the sweet spot of acceptance.
A function that is presently performed with a finger movement is not going to be replaced by a whole-hand movement.

“sweet spot” for whom? In this age, “magical” conjures up full mind reading ability (no wires, no halo, and 90% reliability), not arm waving.

I watched “Cloud Atlas” last night and there are a lot of ‘Minority Report’-style interfaces in it. What struck me is that these interfaces are used in movies to advance the story: a third person can see and understand what is going on, rather than sitting through lots of text or spoken exposition. This style of interface may be best suited to presentations or collaborative work rather than one person at a workstation.

I think for comfort, ergonomics, and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) rests against a tabletop that is lost when a hand is waving around in front of you.

My uni is researching this:
http://wearables.unisa.edu.au/projects/digital-foam/

Zippy, can you play music on the phone with full mind-reading ability? :arrow_right: The answer is no.
These interfaces are developed for specific functionality; each one stands in a direct relationship with a task.

I think these types of things (for the general public) are best used in conjunction with the mouse and keyboard, not as a replacement for them. Imagine reading a website and just swiping your fingers to the left to go back to the previous page, or maximizing a video by spreading your fingers apart. I use the Fire Gesture add-on for Firefox and find its shortcuts really useful for browsing: I just right-click and drag to the left to go back a page, with no need to move the cursor to the ‘back’ button or take my hand off the mouse to press ‘backspace’ on the keyboard. This could work similarly to finger gestures on tablets.
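For what it’s worth, that kind of mouse gesture takes very little machinery. Here is a rough browser sketch of “right-drag left to go back” in TypeScript; the thresholds and structure are mine, not how the Fire Gesture add-on is actually implemented:

```typescript
// Sketch of a "right-drag left to go back" mouse gesture in a web page.
// Illustrative only; not the Fire Gesture add-on's actual implementation.
const DRAG_THRESHOLD = 80; // pixels of leftward travel before the gesture fires

let startX: number | null = null;
let gestureFired = false;

document.addEventListener("mousedown", (e: MouseEvent) => {
  if (e.button === 2) startX = e.clientX; // right button down: start tracking
});

document.addEventListener("mouseup", (e: MouseEvent) => {
  if (e.button === 2 && startX !== null) {
    gestureFired = startX - e.clientX > DRAG_THRESHOLD;
    if (gestureFired) history.back(); // dragged far enough left: go back a page
    startX = null;
  }
});

// Swallow the context menu only when a gesture actually fired,
// so an ordinary right-click still opens the menu as usual.
document.addEventListener("contextmenu", (e: MouseEvent) => {
  if (gestureFired) {
    e.preventDefault();
    gestureFired = false;
  }
});
```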

I don’t play music on my phone at all, I don’t text on my phone, and I don’t surf on my phone. I use my phone to send and receive audio communications.

From what I understand, the Minority Report interface was actually designed by a company called Oblong. Check out their Mezzanine system; the guy from Oblong said that is what they’ve done with that movie concept.

A lot of the things we would have called people out for as being too blue-sky, or not rooted in reality, now exist or are beginning to. I think the power of those blue-sky ideas is that they give someone who actually knows how to program a goal to shoot for. They see this stuff and think, “that’s cool, and I might actually know how to figure that out, so I’ll give it a shot.”

Like the thing in Iron Man where he takes control of the monitors by taking a picture of them: that’s real. I saw a girl present that at a conference this summer; she made it happen.

I see the Wii, Wii U, and Kinect as mass-market versions of the Minority Report interface, and as with the Wii and Kinect, they are usually great for select applications but don’t do everything better. What they do do better, though, is a vast improvement. As long as they make things faster, easier, or more fun, they are amazing.

This is an interesting watch…mainly because it shows you what happens when things start going wrong. :slight_smile:

A mouse doesn’t have any real “failure” mode…if you accidentally click something, it’s usually pretty easy to avoid performing a mistaken action.

With gestures, even a slight twitch of the hand or a co-worker walking up behind you could launch your model off into space. I think figuring out an intuitive way of dealing with those exceptions still requires a lot of thought (one common option is sketched below). For example, to “wake up” your Kinect you usually need to yell at it or wave.

It’s not to say we won’t get there, but I think it’s going to be at least a few years before the gestures and technology get robust enough to use practically. I’m also a firm believer that eliminating the sensation of touch from the equation isn’t a good thing. It’s one of our most valuable senses, especially as designers, and removing it from the interface is IMO a step in the wrong direction.
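The usual guard against those accidental triggers is a dwell timer: a pose has to hold steady for a beat before anything fires, so a twitch or a passer-by does nothing. A minimal TypeScript sketch; the pose flag is an assumed input from whatever tracking SDK is in use, not a real Kinect API:

```typescript
// Dwell guard: a gesture must be held continuously for HOLD_MS before it
// triggers, so a hand twitch or a passing co-worker does nothing.
// `isPoseActive` stands in for whatever the tracking SDK reports; it is
// an assumed input, not a real Kinect API call.
const HOLD_MS = 600;

class DwellGuard {
  private heldSince: number | null = null;
  private fired = false;

  constructor(private onTrigger: () => void) {}

  // Call once per tracking frame with the current pose state and clock.
  update(isPoseActive: boolean, nowMs: number): void {
    if (!isPoseActive) {
      this.heldSince = null; // pose broken: reset the timer...
      this.fired = false;    // ...and re-arm for the next hold
      return;
    }
    if (this.heldSince === null) {
      this.heldSince = nowMs; // pose just started: start timing
    } else if (!this.fired && nowMs - this.heldSince >= HOLD_MS) {
      this.onTrigger(); // held steadily long enough: fire exactly once
      this.fired = true;
    }
  }
}

// Usage, e.g. wired to a "wake up" wave:
const wakeGuard = new DwellGuard(() => console.log("system awake"));
// per frame: wakeGuard.update(trackerSaysWaving, performance.now());
```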

Good observation there, Cyberdemon…

I have tried this once…I like the Kinect; it’s a good direction.
But it fails when there is no feedback loop once it stops following you.
Maybe add a little indication on the screen suggesting whether “you are on or off the grid.”
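That kind of indicator is cheap to build: drive a small on-screen badge from whatever per-frame tracking state the sensor reports. A TypeScript sketch; `hasSkeletonLock` and the `tracking-badge` element are assumptions of mine, not part of any real Kinect API:

```typescript
// On-screen "on/off the grid" indicator, updated once per tracking frame.
// `hasSkeletonLock` is an assumed per-frame input, not a real Kinect call.
function updateTrackingBadge(hasSkeletonLock: boolean): void {
  const badge = document.getElementById("tracking-badge");
  if (!badge) return; // badge element not on this page

  badge.textContent = hasSkeletonLock
    ? "You are on the grid"
    : "Tracking lost: step back into view";
  badge.style.color = hasSkeletonLock ? "green" : "red";
}
```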

This conversation reminds me of an Artefact project:

[vimeo]http://vimeo.com/44969506[/vimeo]
[vimeo]http://vimeo.com/44954474[/vimeo]

Those videos are pretty cool; they’re really just scratching the surface of the technology. The voice commands are a little annoying, but I could imagine some nice ways to integrate menus. Some small props would go a long way in adding functionality, too. I could also envision something for advanced users, like an Alias-style marking menu.

These are good efforts…can they develop this for a seated posture?
The gesture-based applications demonstrated so far are used standing, and the time required for modelling varies from a few hours to a few days. I think it will be tiresome for anybody to stand or semi-stand in front of the screen for the whole modelling process.

Yes, that is purely a function of the software and the field of view of the camera and IR sensor. The Kinect was designed with a certain range in mind (for a living room), but they also sell other devices using the same tech that are designed for close-range use and are better at disregarding the legs of seated users.