ADD
step four
Posts: 518
Joined: April 22nd, 2007, 6:54 am
I would highly suggest stretching before using one of these interfaces; you wouldn't want to pull something.

Many current applications seem to be developed or extended from an existing physical activity.
If this is a deliberate approach to give the user an exercise or play scenario, then the gesture interface fits right in (Kinect games).
But for a new area, the interactions should feel familiar, as if they had always been there; they should look effortless and magical (Apple's pinch-to-zoom).
That will be the sweet spot for acceptance.
A function that is currently performed with a finger movement is not going to be replaced by a whole-hand movement.


zippyflounder
full self-realization
 
Posts: 1703
Joined: July 1st, 2007, 11:27 am
ADD wrote:
I would highly suggest stretching before using one of these interfaces; you wouldn't want to pull something.

Many current applications seem to be developed or extended from an existing physical activity.
If this is a deliberate approach to give the user an exercise or play scenario, then the gesture interface fits right in (Kinect games).
But for a new area, the interactions should feel familiar, as if they had always been there; they should look effortless and magical (Apple's pinch-to-zoom).
That will be the sweet spot for acceptance.
A function that is currently performed with a finger movement is not going to be replaced by a whole-hand movement.
"sweet spot" for whom? In this age, "magical" conjures up full mind reading ability (no wires, no halo, and 90% reliability), not arm waving.

sanjy009
full self-realization
Posts: 802
Joined: September 16th, 2009, 6:39 pm
Location: Adelaide, Australia
I watched "Cloud Atlas" last night, and there are a lot of 'Minority Report'-style interfaces in it. What struck me is that these interfaces are used in movies to move the story forward: a third person can see and understand what is going on, rather than needing lots of text or spoken exposition. This style of interface may be best used for presentations or collaborative work rather than for one person at a workstation.

I think for comfort, ergonomics and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) is against a tabletop, which is lost when a hand is waving around in front of you.

nxakt wrote:Virtual clay has been theorized in many forms over the years. Having worked extensively with clay I can say that the tactile feedback and material pushback is more than 70% of the development experience. When we can get to the point of bending physical splines in the air and having them imported I think that will be a big leap forward for shape development


My uni is researching this:
http://wearables.unisa.edu.au/projects/digital-foam/

ADD
step four
Posts: 518
Joined: April 22nd, 2007, 6:54 am
Zippy, can you play music on your phone with full mind-reading ability? ... :arrow: The answer is NO.
These interfaces are developed for specific functionality...
sanjy009 wrote:I think for comfort, ergonomics and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) is against a tabletop, which is lost when a hand is waving around in front of you.

There, the tabletop provides a frame of reference relative to the task.


nicanor
step four
 
Posts: 291
Joined: April 30th, 2009, 1:33 am
Location: Vancouver, BC Canada
I think these types of things (for the general public) are best when used in conjunction with the mouse and keyboard, not as a replacement for them. Imagine, when reading a website, just swiping your fingers to the left to go back to the previous page, or maximizing a video by spreading your fingers apart. I use the FireGestures add-on for Firefox and find its shortcuts really useful for browsing: I just right-click and drag to the left to go back a page, with no need to move the cursor to the 'back' button or take my hand off the mouse to press 'backspace' on the keyboard. This could be similar to finger gestures on tablets.
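
To make that concrete, here is a minimal sketch (Python; the gesture and action names are invented for illustration, this is not actual FireGestures or tablet code) of gestures acting as optional shortcuts that fall back to the mouse and keyboard when nothing matches.

Code:
# Hypothetical sketch: gestures as optional shortcuts alongside mouse/keyboard.
ACTIONS = {
    "swipe_left":  "history_back",      # go back a page
    "swipe_right": "history_forward",   # go forward a page
    "spread":      "maximize_video",    # open fingers to maximize a video
    "pinch":       "restore_video",
}

def handle_gesture(gesture):
    """Return the bound action, or None to fall through to mouse/keyboard."""
    return ACTIONS.get(gesture)

for g in ("swipe_left", "wiggle", "spread"):
    print(g, "->", handle_gesture(g) or "ignored (use mouse/keyboard)")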


zippyflounder
full self-realization
 
Posts: 1703
Joined: July 1st, 2007, 11:27 am
ADD wrote:Zippy, can you play music on your phone with full mind-reading ability? ... :arrow: The answer is NO.
These interfaces are developed for specific functionality...
sanjy009 wrote:I think for comfort, ergonomics and accuracy a tabletop is better. There is a precision when a hand (or a hand holding a pen) is against a tabletop, which is lost when a hand is waving around in front of you.

There, the tabletop provides a frame of reference relative to the task.
I don't play music on my phone at all, I don't text on my phone, and I don't surf on my phone. I use my phone to send and receive audio communications.


carton
full self-realization
 
Posts: 785
Joined: January 26th, 2005, 2:19 pm
From what I understand, the Minority Report interface was actually designed by a company called Oblong. Check out their Mezzanine system; the guy from Oblong said that's what they've done with that movie concept.

http://oblong.com/

A lot of the things we would have called people out on for being too blue-sky, or not rooted in reality, now exist or are beginning to exist. I think the power of those blue-sky ideas is that they give someone who actually knows how to program a goal to shoot for. They see this stuff and think, "That's cool, and I might actually know how to figure that out, so I'll give it a shot."

Like the thing in Iron Man, where he takes control of the monitors by taking a picture of them: that's real. I saw a girl present that at a conference this summer; she made it happen.
Just some guy, trying to figure it out too.

Cameron
full self-realization
Posts: 1021
Joined: January 26th, 2008, 12:44 am
Location: San Diego
I see the Wii, Wii U, and Kinect as mass-market versions of the Minority Report interface. As with the Wii and Kinect, the scope of their usability is usually that they are great for select applications, but they don't do everything better. What they do do better, though, is a vast improvement. As long as they make things faster, easier, or more fun, they are amazing.
http://cargocollective.com/cameron-nielsen
http://www.linkedin.com/in/cameronnielsen
"there is an inherent intelligence to beauty" - Dori Tunstall

Cyberdemon
full self-realization
Posts: 2710
Joined: February 7th, 2006, 11:51 pm
Location: New York
http://www.engadget.com/2013/01/08/inte ... computing/

This is an interesting watch...mainly because it shows you what happens when things start going wrong. :-)

A mouse doesn't have any real "Failure" mode...if you accidentally click something it's usually pretty easy to avoid performing a mistaken action.

With gestures, even a slight twitch of the hand or a co-worker walking up behind you could launch your model off into space. I think figuring out the intuitive way of dealing with those exceptions still requires a lot of thought. For example to "wake up" your Kinect you usually need to yell at it or wave.
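
Just to sketch that thought in code (Python; purely hypothetical, not Kinect SDK calls): one way to blunt the accidental-gesture problem is to require a deliberate wake gesture and a short hold before anything fires.

Code:
import time

WAKE_GESTURE = "wave"
HOLD_SECONDS = 0.5          # a twitch shorter than this is ignored

class GestureGate:
    def __init__(self):
        self.awake = False
        self.current = None
        self.since = 0.0

    def update(self, gesture, now=None):
        """Feed the latest recognized gesture (or None).
        Returns an action only when the same gesture has been held long
        enough, and only while the system is awake."""
        now = time.monotonic() if now is None else now
        if gesture != self.current:          # gesture changed: restart the clock
            self.current, self.since = gesture, now
            return None
        if gesture == WAKE_GESTURE and now - self.since >= HOLD_SECONDS:
            self.awake = True
            return "wake"
        if self.awake and gesture and now - self.since >= HOLD_SECONDS:
            return "perform:" + gesture
        return None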

It's not to say we won't get there, but it's going to be at least a few years I think before the gestures and technology start getting robust enough to use practically. I'm also a firm believer that eliminating the sensation of touch from the equation isn't a good thing. It's one of our most valuable senses, especially as designers, and eliminating it from the interface IMO is a step in the wrong direction.

ADD
step four
Posts: 518
Joined: April 22nd, 2007, 6:54 am
Good observation there, Cyberdemon.

Cyberdemon wrote:For example to "wake up" your Kinect you usually need to yell at it or wave.

I have tried this once. I like the Kinect; it's a good direction.
But it fails because there is no feedback loop when it stops following you.
Maybe add a little indication on the screen suggesting "you are on or off the grid".
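
A tiny sketch of that indicator (Python, hypothetical; a real sensor's API will differ): the point is just to surface the tracking state on screen instead of failing silently.

Code:
def tracking_banner(user_tracked):
    """Status line a UI could overlay each frame, from the sensor's tracking flag."""
    if user_tracked:
        return "ON the grid - gestures active"
    return "OFF the grid - step back into view"

print(tracking_banner(True))
print(tracking_banner(False))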

yopidjau
step two
Posts: 58
Joined: August 13th, 2009, 1:54 am
Location: Indonesia
This conversation reminds me of an Artefact project:
http://www.artefactgroup.com/#/content/ ... m-the-past



Brett_nyc
full self-realization
 
Posts: 875
Joined: May 30th, 2006, 9:57 am
Those videos are pretty cool; they really just scratch the surface of the technology. The voice commands are a little annoying, but I could imagine some nice ways to integrate menus. Some small props would go a long way too in adding functionality. I could also envision something for advanced users along the lines of Alias-style marking menus.

ADD
step four
Posts: 518
Joined: April 22nd, 2007, 6:54 am
These are good efforts... Can they develop this for a seated posture?
The gesture-based applications done or shown so far assume a standing posture, and the time required for modelling varies from a few hours to a few days. I think it will be tiresome for anybody to stand or semi-stand in front of the screen for the whole modelling process.

Cyberdemon
full self-realization
Posts: 2710
Joined: February 7th, 2006, 11:51 pm
Location: New York
ADD wrote:These are good efforts... Can they develop this for a seated posture?
The gesture-based applications done or shown so far assume a standing posture, and the time required for modelling varies from a few hours to a few days. I think it will be tiresome for anybody to stand or semi-stand in front of the screen for the whole modelling process.


Yes, that is purely a function of the software and the field of view of the camera and IR sensor. The Kinect was designed with a certain range in mind (for a living room), but they also sell other devices using the same tech that are designed for close-range use and are better at disregarding your legs for seated users.
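
As a hypothetical sketch of that software side (Python; the joint names and data shapes are made up, not the actual SDK): a seated mode mostly means tracking only upper-body joints and ignoring whatever the close-range sensor reports below the waist.

Code:
UPPER_BODY = {"head", "neck", "spine", "shoulder_l", "shoulder_r",
              "elbow_l", "elbow_r", "hand_l", "hand_r"}

def seated_skeleton(joints):
    """Keep only the joints relevant for a seated user.
    `joints` maps joint name -> (x, y, z), standing in for the sensor output."""
    return {name: pos for name, pos in joints.items() if name in UPPER_BODY}

# Example frame where the legs are occluded by a desk
frame = {"head": (0.0, 0.6, 1.2), "hand_r": (0.3, 0.2, 1.0), "knee_l": (0.1, -0.5, 1.1)}
print(seated_skeleton(frame))   # knee_l is dropped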

yopidjau
step two
Posts: 58
Joined: August 13th, 2009, 1:54 am
Location: Indonesia
Yep, the Artefact guy mentions that he is controlling it with his foot; no actual gesture recognition device is used. Basically, what he shows is just prototyping the experience of what gesture control for CAD could be.
