The US Defense Advanced Research Projects Agency (DARPA) funds research aimed at making prostheses perform close to, or in some ways even better than, natural limbs. To achieve this, researchers connect the prostheses to the human cerebral cortex, delivering information from the brain to the prosthesis and back again. This way, people can control prostheses by thought alone and receive sensory impressions, such as the texture of a touched surface. The video (especially the first half) really impressed me:
Unfortunately, this technology could also be misused, but let’s focus on all the good things that can be done with it! Many people will be able to live a better life simply because they can do everyday things again.
Other research efforts treat the human body itself as a medium for tactile input “technology” (and therefore also as a visual output surface).
That’s a really nice approach: one no longer has to hold, say, an MP3 player or a smartphone. But it requires some bare skin, which users probably wouldn’t accept in winter.
In the video you can see Chris Harrison dialing on his hand – and then what? Does he hold his hand to his ear, and does he have a built-in body microphone? ;D
Here someone shows how a smartphone can track a user’s face in real time:
If this isn’t a fake, it is both pleasing and frightening to me.
The comfort of user interfaces is increasing a lot – e.g. tracking head motions and triggering automatic actions from them, such as switching off the display when you look away.
But global corporations could, for example, log your facial gestures while you watch films online and, from that data, infer your intimate preferences and much more…
A nice new technology combines the visual advantages with the tactile ones and remains adjustable in both respects.
So in the future one could, for example, feel a slider (a graphical element for adjusting e.g. the volume) and thus set the volume level of a mobile device – or feel more complex elements like smileys or whole text messages – without looking at the screen.
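The core of such a tactile slider is still an ordinary input mapping: a touch position along the element is translated into a value like the volume level. A minimal sketch of that mapping (all names and coordinates are hypothetical, not taken from the product in the video):

```python
def touch_to_volume(x, slider_start, slider_end, vol_min=0, vol_max=100):
    """Map a touch coordinate along a (tactile) slider to a volume level,
    clamping to the slider's physical bounds."""
    t = (x - slider_start) / (slider_end - slider_start)
    t = max(0.0, min(1.0, t))  # clamp to the slider area
    return round(vol_min + t * (vol_max - vol_min))

# A touch at 25% of the slider's length yields 25% volume.
print(touch_to_volume(150, slider_start=100, slider_end=300))  # → 25
```

The tactile layer only changes how the slider is *found and felt*; the mapping underneath stays the same as on a purely visual touchscreen.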
In the next video a researcher transfers human head-movement data via a gyro sensor to a robot and streams the robot’s video data back to the human.
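The basic pipeline behind this is simple: integrate the gyro’s angular-velocity samples into a head angle, then map that angle onto the robot’s pan range. A minimal sketch with simulated sensor data (the function names, sample rate, and pan range are my own assumptions, not details from the video):

```python
import math

def integrate_yaw(samples, dt):
    """Integrate angular-velocity samples (rad/s), taken every dt seconds,
    into an accumulated yaw angle in radians."""
    yaw = 0.0
    for omega in samples:
        yaw += omega * dt
    return yaw

def yaw_to_pan_command(yaw, max_pan=math.pi / 2):
    """Clamp the head yaw to the robot's pan range and return a
    normalized pan command in [-1, 1]."""
    clamped = max(-max_pan, min(max_pan, yaw))
    return clamped / max_pan

# Simulated gyro stream: the head turns at 0.5 rad/s for 1 s (100 Hz).
samples = [0.5] * 100
yaw = integrate_yaw(samples, dt=0.01)   # ≈ 0.5 rad of head rotation
pan = yaw_to_pan_command(yaw)           # normalized command for the robot
```

In a real system the integrated angle drifts over time, so the researcher’s setup would also need some form of recalibration or sensor fusion – pure integration is only the core idea.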
Is this yet another virtualization technology leading towards alienation (robots taking over your representation in the world, and thereby in the public) and dissociation (losing the sense of your own body), as one can see in the film “Surrogates”?
He wants to tell us that “you are absolutely not distracted in any way from the road”. Sorry, but: are you kidding us? If that were true, what would be the purpose of the “new steering wheel” – to remind the driver of his duty to pay attention to the traffic?
As I wrote some time ago, human-machine interaction is moving towards the fascinating trends we saw in futuristic films years ago. Here is another example of a human-machine interface, one that particularly reminds us of “Minority Report”: a man gesticulates with both hands free in space, and on a screen you can see a 3D scene performing the corresponding actions.
That’s impressive!
—
But since I see some disadvantages with this technology, it isn’t that impressive to me anymore. The disadvantages – or let’s say challenges – are:
One has to hold one’s arms up in free space continuously, which is certainly exhausting.
What if there are more than two hands, or more than one user? Can the algorithms match the hands to the right persons, or will there be falsely recognized (inter)actions?
In my opinion, the user gestures were adapted to the technology, not to real user needs and perceptions.
The actions are not accurate enough yet.
For deleting files, I think some other gesture would be much better than moving the files to the edges, because that action suggests to me that the files are backed up there and I will be able to restore them later.
And apart from these details, a main goal for future work has to be removing the need for the user to visually recheck what he is doing with his hands (the red cross and borders in the video, and so on). The user wants to, say, translate a picture, commands it with his hands, and that’s it. The extra visual confirmation destroys the (or at least my) sensation.
Still, the technology (free-hand gestures and 3D scenes) is a really great step into the future!
What a cute demonstration of a nice technology – acting as if you had a computer mouse, without the mouse:
This probably gives us the opportunity to interact with a laptop in the traditional style almost everywhere (in bed, at the kitchen table, etc.) without carrying around a hardware mouse.
One can feel the ‘click’, so there is tactile feedback when triggering an interaction. But what happens if one moves the hand out of the camera’s sight? Will there be false inputs, or none at all? And how do we inform the user of the correct hand position?
Here is an amusing video about “believing (or not) in UFOs” which shows the limits of our imaginative power and that optical illusions should be understood as failures of the brain.
In my opinion, we have to take these mental and perceptual quirks into account when designing human-machine interfaces, so that new technology doesn’t confuse users.
Besides the hardware and software products in the video, have a look at the gyroscope functionality in today’s smartphones (especially its precision). It’s amazing! I think really cool things can be done with this feature.
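To get a feel for why that precision matters: orientation is obtained by integrating the gyroscope’s angular rates over time, so even a tiny constant bias accumulates into a steadily growing angle error. A minimal sketch with purely simulated numbers (no real sensor API involved):

```python
def drift_after(bias_dps, seconds, rate_hz=100):
    """Angle error (in degrees) accumulated by integrating a constant
    gyro bias of `bias_dps` degrees/second for `seconds` seconds."""
    dt = 1.0 / rate_hz
    error = 0.0
    for _ in range(int(seconds * rate_hz)):
        error += bias_dps * dt
    return error

# A bias of just 0.1 deg/s drifts about 6 degrees within one minute.
print(drift_after(0.1, 60))  # ≈ 6.0
```

That linear growth is why smartphone gyroscopes need such low bias – and why, for longer measurements, their readings are usually fused with accelerometer and compass data.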