Why do we need touch screens in cars instead of knobs or buttons to control volume and so on?
On the one hand I think it’s a good idea, because the interface of a car can be changed very easily (from backgrounds and themes to the controls or even the entire arrangement). But on the other hand the display in the video above is dazzling, and haptic feedback is missing, so the driver could be more distracted than with conventional controls.
Greg Kaufman tried something new for DJs, so that they no longer have to carry their mixer console and vinyl records to clubs and set the console up in the dark. As Greg says in his video, we are now in the digital age, and that’s why he was able to digitize the console. If all clubs had such turntables, DJs would only have to carry a USB device with their music on it.
That’s a great idea for using so-called natural user interfaces, especially because the consoles are easily customizable. But that is also the critical point of this device: different DJs have different visions of the perfect multitouch turntable, I think. One wants to modify the display, another to change the controls. And these controls seem a little tricky to me. You have to use one finger for ABC, two fingers for DEF, three fingers for GH? and four for ?U?. And that with both hands, and you have to hit roughly the right locations on the table, all without haptic feedback. Looks great, but needs closer examination.
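The finger-count scheme described above could be sketched as a simple dispatch table. This is only an illustration of the concept; the action names below are made up, since the video doesn’t specify the real mapping:

```python
# Hypothetical sketch: dispatch multitouch gestures by the number of
# touch points, as the turntable in the video appears to do.
# The action names are invented for illustration.

ACTIONS = {
    1: "scratch",      # one finger
    2: "pitch_bend",   # two fingers
    3: "crossfade",    # three fingers
    4: "loop_toggle",  # four fingers
}

def dispatch(touch_points):
    """Map a list of (x, y) touch points to an action name."""
    action = ACTIONS.get(len(touch_points))
    if action is None:
        return "ignored"   # unsupported finger count
    return action

print(dispatch([(10, 20)]))            # → scratch
print(dispatch([(10, 20), (30, 40)]))  # → pitch_bend
```

A table like this also shows why the scheme is fragile without haptic feedback: if the table misses one finger of a four-finger gesture, a completely different action fires.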
It’s great to see that people with disabilities can take part in everyday life. The video shows how a human-machine interface for blind users can be built so that they are able to write software. (The video also shows how they are taught to do so.)
Research efforts look at the human body as a medium for tactile input “technology” (and therefore they also need it as a visual output surface).
That’s a really nice approach, since one doesn’t have to hold e.g. an MP3 player or a smartphone. But you need some exposed skin, and that wouldn’t be accepted by users in winter.
In the video you can see Chris Harrison dialing on his hand – and then? Does he hold his hand to his ear, and does he have a microphone built into his body? ;D
A nice new technology combines the visual advantages with the tactile ones and remains adjustable in both respects.
So in the future one could, for example, feel a slider (a graphical element for adjusting e.g. the volume) and thus set the volume of a mobile device without looking at it, or even feel more complex elements like smileys or whole text messages.
As I wrote some time ago, human-machine interaction is moving towards the interesting trends we saw in futuristic films years ago. Here is another example of a human-machine interface, one which is particularly reminiscent of “Minority Report”: a man gesticulates with both hands free in space, and on a screen you can see a 3D scene performing the appropriate actions.
That’s impressive!
—
But since I see some disadvantages with this technology, it isn’t quite so impressive to me anymore. The disadvantages, or let’s say challenges, are:
One has to hold one’s arms continuously in free space, which is surely exhausting.
What if there are more than two hands, i.e. more than one user? Can the algorithms match the hands to the right persons, or will there be falsely recognized (inter)actions?
In my opinion the gestures were adapted to the technology, not to real user needs and perceptions.
The actions are not accurate enough yet.
In the case of deleting files, I think some other gesture would be much better than moving the files to the edges, because that action suggests to me that there will be a backup of the files and I’ll be able to load them again.
And besides the other details, a main goal for future work has to be removing the need for the user to visually recheck what he is doing with his hands (the red cross and borders in the video, and so on). The user wants to, for example, translate a picture; he commands that with his hands, and that’s it. The extra need to visualize this destroys the sensation, at least for me.
But the technology (free-hand gestures and 3D scenes) is a really great step into the future!
Providing such robust displays combined with tough unibodies (like the MacBook Pro or HTC Legend) is surely a way to reduce the user’s need for cautiousness, and thereby a kind of constant stress, and to increase satisfaction. Nice devices and applications can arise from this.
Here is some interesting stuff combining pen input with multitouch (which makes it possible to realize “tools”):
It seems to have been developed with a mainly technical view. For example, it is not (initially) intuitive for the user to copy an image by holding it down with one hand and dragging with the pen in the other (starting at about 1 min into the video). So some improvement work on this good idea from the human point of view is still pending…
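The hold-plus-drag copy gesture could be sketched roughly as follows. This is only my guess at the logic, not the actual implementation from the video, and all class and event names are hypothetical:

```python
# Hypothetical sketch of the bimanual copy gesture seen in the video:
# if a finger is holding an object while the pen drags it, the drag
# produces a copy; a pen drag alone just moves the object.

class Canvas:
    def __init__(self):
        self.held_object = None   # object currently pinned by a finger
        self.objects = []

    def touch_down(self, obj):
        self.held_object = obj

    def touch_up(self):
        self.held_object = None

    def pen_drag(self, obj, new_pos):
        if obj is self.held_object:
            # The finger pins the original; the pen drags out a copy.
            copy = dict(obj, pos=new_pos)
            self.objects.append(copy)
            return copy
        # No finger holding it: a plain move.
        obj["pos"] = new_pos
        return obj
```

Written out like this, one can also see why the gesture is hard to discover: nothing on screen hints that a touch changes the meaning of the pen drag from “move” to “copy”.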