Here someone shows how a smartphone can track a user's face in real time:
If this isn't fake, it is somehow both pleasing and frightening to me.
The comfort of user interfaces is increasing a lot – e.g. tracking head motions and triggering automatic actions with that data, like shutting off the display.
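Just to make that idea concrete: here is a minimal sketch of such a head-presence check, assuming OpenCV and a standard webcam. The timeout value and the "display off" hook are my own assumptions for illustration, not something from the video.

```python
import time
import cv2  # pip install opencv-python

# Standard Haar cascade for frontal faces, shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default webcam
last_seen = time.time()
TIMEOUT = 5.0                      # seconds without a face before acting

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.time()
    elif time.time() - last_seen > TIMEOUT:
        # Hypothetical hook: a real device would dim or shut off
        # the display here instead of printing.
        print("No face for a while - display off")
        break

cap.release()
```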
But global corporations, for example, could log your gestures while you watch films online and use that data to infer your intimate preferences and much more…
A nice new technology combines the visual advantages with the tactile ones and remains adjustable in both respects.
So in the future one could, for example, feel a slider (a graphical element for adjusting e.g. the volume of a mobile device) or more complex elements like smileys or whole text messages without looking at the screen.
In the next video a researcher transfers human head movement data via gyro sensor to a robot and streams the robot's video data back to the human.
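The data path is simple in principle: read the head orientation rates from the gyro sensor and push them to the robot over the network. A minimal sketch, assuming a UDP link; the robot address, the packet format, and the placeholder sensor read are made up for illustration:

```python
import socket
import struct
import time

ROBOT_ADDR = ("192.168.0.42", 9000)   # hypothetical robot endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_gyro():
    """Placeholder for the gyro sensor read; a real implementation
    would query the headset's IMU. Returns (yaw, pitch, roll) in rad/s."""
    return (0.0, 0.0, 0.0)

while True:
    yaw, pitch, roll = read_gyro()
    # Pack the three angular rates as little-endian floats and send them.
    sock.sendto(struct.pack("<3f", yaw, pitch, roll), ROBOT_ADDR)
    time.sleep(0.02)                  # ~50 Hz update rate
```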
Is this another new technology (of virtualization) leading towards alienation (robots taking over your representation in the world, and thereby in the eyes of the public majority) and dissociation (from knowing yourself and your body), as one can see in the film “Surrogates”?
As I wrote some time ago, human-machine interaction is moving towards the interesting trends we saw in futuristic films years ago. Here is another example of a human-machine interface, one which is particularly reminiscent of “Minority Report”. A man gesticulates with both hands freely in space, and on a screen you can see a 3D scene responding with the corresponding actions.
That’s impressive!
—
But because I see some disadvantages with this technology, it isn't that impressive to me anymore. The disadvantages, or let's say challenges, are:
One has to hold one's arms continuously in free space; that is surely exhausting.
What if there are more than two hands / more than one user? Are the algorithms able to match the hands to the right persons, or will there be falsely recognized (inter)actions? (See the sketch after this list.)
In my opinion the gestures were adapted to the technology, not to real user needs / perceptions.
The actions are not accurate enough yet.
For deleting files, I think some other gesture would be much better than moving the files to the edges of the screen, because this action suggests to me that there will be a backup of the files and that I am able to restore them again.
And besides these details, a main goal for future investigations has to be removing the need for the user to visually recheck what he is doing with his hands (the red cross and borders in the video and so on). The user wants to, e.g., translate (move) a picture, commands that with his hands, and that's it. The extra need to visualize this destroys the / my immersion.
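Regarding the multi-user point above: at its core this is an assignment problem between detected hands and detected persons. A minimal sketch of one naive approach, greedy nearest-neighbour matching on 2D positions; the coordinates and the two-hands-per-person limit are assumptions for illustration:

```python
from itertools import product

def assign_hands_to_heads(hands, heads):
    """Greedy nearest-neighbour assignment of detected hand positions
    to detected head positions (both given as (x, y) tuples).
    A real system would use tracking and body models; this only
    illustrates the matching problem raised in the list above."""
    # All (hand, head) pairs, closest squared distance first.
    pairs = sorted(
        product(range(len(hands)), range(len(heads))),
        key=lambda ij: (hands[ij[0]][0] - heads[ij[1]][0]) ** 2
                     + (hands[ij[0]][1] - heads[ij[1]][1]) ** 2)
    assignment = {}                          # hand index -> head index
    load = {p: 0 for p in range(len(heads))}
    for h, p in pairs:
        if h not in assignment and load[p] < 2:   # max two hands per person
            assignment[h] = p
            load[p] += 1
    return assignment

# Example: three detected hands, two detected heads
print(assign_hands_to_heads([(10, 5), (40, 6), (42, 7)], [(12, 0), (41, 0)]))
```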
But the technology (free-hand gestures and 3D scenes) is a really great step into the future!
What a cute demonstration of a nice technology – acting as if you had a computer mouse, without the mouse:
Probably this gives us the opportunity to interact with a laptop almost everywhere (in bed, at the kitchen table, etc.) in the traditional style, without carrying around a (hardware) mouse.
One can feel the ‘click’, so there is tactile feedback when initiating an interaction. But what if one moves the hand out of the camera’s sight? Will there be false inputs, or none at all? How do we inform the user of the right hand position?
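One simple answer to that last question: reject inputs whenever the hand is missing or only partially visible, and tell the user why. A minimal sketch, with the margin value and the hint texts as my own assumptions:

```python
def validate_hand(hand_box, frame_w, frame_h, margin=20):
    """Return (accept_input, hint). hand_box is (x, y, w, h) from the
    hand detector, or None if nothing was detected. Inputs near the
    frame border are rejected so a half-visible hand cannot trigger
    false clicks; the hint could be shown or spoken to the user."""
    if hand_box is None:
        return False, "Hand not visible - move it in front of the camera"
    x, y, w, h = hand_box
    if (x < margin or y < margin or
            x + w > frame_w - margin or y + h > frame_h - margin):
        return False, "Hand near the edge - move it towards the centre"
    return True, ""

print(validate_hand((5, 100, 60, 60), 640, 480))    # too close to the left edge
print(validate_hand((300, 200, 60, 60), 640, 480))  # fine
```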
Besides the hardware and software products in the video, have a look at the gyroscope functionality in today’s smartphones (especially the precision). It’s amazing! Really cool things can be done with this feature, I think.
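The basic trick behind most gyroscope features is just integrating the measured angular velocity over time. A minimal one-axis sketch; real devices fuse the result with accelerometer and magnetometer data, since pure integration drifts:

```python
def integrate_gyro(samples, dt):
    """Dead-reckon an orientation angle (one axis, in radians) from
    gyroscope angular-rate samples (rad/s) taken every dt seconds."""
    angle = 0.0
    for omega in samples:
        angle += omega * dt    # numerical integration: d(angle) = omega * dt
    return angle

# 100 samples of 0.5 rad/s at 100 Hz -> about 0.5 rad of rotation
print(integrate_gyro([0.5] * 100, 0.01))
```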
Providing such robust displays combined with tough unibodies (like the MacBook Pro or HTC Legend) surely is a way to reduce the user’s need for cautiousness, and therefore a sort of constant stress, and to enhance satisfaction. Nice devices and applications can arise from this.
Besides the thought-provoking aspects concerning human life:
Can we learn anything for human-computer interaction from the workings of the human brain (right hemisphere = parallel processing & left hemisphere = serial processing)?
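As a loose illustration of that analogy in code: the same set of recognition tasks handled serially versus in parallel. The task names and the sleep standing in for real per-item work are my own placeholders:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def recognize(item):
    """Stand-in for one perceptual task, e.g. classifying a gesture."""
    time.sleep(0.1)
    return item.upper()

items = ["swipe", "pinch", "wave", "point"]

# 'Left hemisphere' style: one item after another.
t0 = time.time()
serial = [recognize(i) for i in items]
print("serial:  ", serial, round(time.time() - t0, 2), "s")

# 'Right hemisphere' style: all items at once.
t0 = time.time()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(recognize, items))
print("parallel:", parallel, round(time.time() - t0, 2), "s")
```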