September 19, 2010

Automatically catch thoughts

This video shows the current state of brain-computer interface (BCI) use: you still need a lot of equipment and time just to gather simple text messages.

But it’s impressive, and I’m sure something like this is the human-machine interface of the future! Maybe I have seen too many films like “The Matrix” and so on, but seriously: the brain is the one body part we all have! So, for example, deaf people would be able to speak. Maybe we can make advances in communicating with sick people: think of paraplegic patients or people in a coma. Perhaps we will even be able to communicate in some way with animals!

[engr.wisc.edu…: Department of Biomedical Engineering]

September 12, 2010

Prostheses: realistic and better

The US Defense Advanced Research Projects Agency (DARPA) funds research efforts to make prostheses nearly as capable as human limbs, or in some ways even better. The idea is to connect them to the human cerebral cortex so that information flows from the brain to the prosthesis and back again! People could then control prostheses by thought alone and receive sensory impressions, such as the texture of a touched surface. The video (especially its first half) impressed me:

Unfortunately there can be negative uses, but let’s think about all the good things that can be done with this technology! A lot of people could live a better life simply because they would be able to do everyday things again.

[exhibitions.cooperhewitt.org: Modular Prosthetic-limb System]

September 5, 2010

Human body as interaction medium

Research efforts look at the human body as a medium for tactile input “technology” (and therefore they also need it as a visual output surface).

That’s a really nice approach, since one doesn’t have to hold, say, an MP3 player or a smartphone. But you need some bare skin, and users probably wouldn’t accept that in winter.

In the video you can see Chris Harrison dialing on his hand. And then? Does he hold his hand to his ear, and does he have a microphone built into his body? ;D

[chrisharrison.net: Skinput]

August 28, 2010

Mobile real-time face tracking

Here someone shows how a smartphone can track a user’s face in real time:

If this isn’t a fake, it is both pleasing and frightening to me.

The comfort of user interfaces is increasing a lot, e.g. by tracking head motions and automating actions based on them, like switching off the display when nobody is looking.
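
The display-off idea boils down to a tiny state machine: keep the screen on while a face was seen recently, switch it off after a timeout. This is a hypothetical sketch (the class and its names are my own); a real phone would feed `face_visible` from a camera-based face detector.

```python
class PresenceDisplay:
    """Switch the display off when no face has been seen for `timeout` seconds."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = None   # timestamp of the last detected face
        self.display_on = True

    def update(self, face_visible, now):
        """Feed one detector result plus a timestamp; returns display state."""
        if face_visible:
            self.last_seen = now
            self.display_on = True
        elif self.last_seen is not None and now - self.last_seen > self.timeout:
            self.display_on = False
        return self.display_on
```

Hysteresis via the timeout matters here: without it, a single missed detection frame would flicker the display off.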

But global corporations, for example, could log your facial gestures online while you watch films and use that data to infer your intimate preferences, and much more…

August 21, 2010

Tactile feedback touch interfaces

A nice new technology combines the visual advantages with the tactile ones and stays adjustable in both respects.

So in the future one could, for example, feel a slider (a graphical element for adjusting e.g. the volume), and thus set the volume level of a mobile device, or even feel more complex elements like smileys or whole text messages, without looking at the screen.
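
Eyes-free adjustment like this is often built on detents: the slider fires a haptic tick each time the finger crosses a step, so the level can be counted by feel alone. A minimal, hypothetical sketch (the class and its names are assumptions, not the technology shown in the video):

```python
class HapticSlider:
    """Volume slider with haptic detents for eyes-free use."""

    def __init__(self, steps=10):
        self.steps = steps
        self.level = 0

    def touch(self, position):
        """position in [0.0, 1.0] along the slider.

        Returns (level, tick): tick is True whenever a detent was crossed;
        real hardware would pulse an actuator instead of returning a flag.
        """
        new_level = min(self.steps, max(0, round(position * self.steps)))
        tick = new_level != self.level
        self.level = new_level
        return new_level, tick
```

Quantizing into discrete detents is what makes the feedback countable; a continuous vibration proportional to position would be much harder to read by touch.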

Until now, that has been a sorely missing feature!

August 14, 2010

Virtualization of the human body (1)

In the next video, a researcher transfers his head movements via gyro sensor to a robot and receives the robot’s video feed back in return.
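
At its core, the head-to-robot link integrates the gyro’s angular rate over time and clamps the result to the robot’s servo range. A toy single-axis sketch (function and parameter names are assumptions, not the researcher’s setup):

```python
def head_to_servo(yaw_rate_dps, dt, current_deg, min_deg=-90.0, max_deg=90.0):
    """Integrate one gyro reading (degrees/second) over dt seconds,
    then clamp the new target angle to the servo's mechanical range."""
    target = current_deg + yaw_rate_dps * dt
    return max(min_deg, min(max_deg, target))
```

In practice gyro drift accumulates through this integration, so real systems fuse the gyro with an accelerometer or magnetometer to re-anchor the absolute angle.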

Is this another new (virtualization) technology heading towards alienation (robots taking over your representation in the world and therefore in the public eye) and dissociation (losing touch with yourself and your body), as one can see in the film “Surrogates”?

August 7, 2010

Absolutely distracted from the road

Here is a guy who invented something nice:

He wants to tell us that “you are absolutely not distracted in any way from the road”. Sorry, but: are you kidding us? In that case, what is the purpose of the “new steering wheel”? To remind the driver of his duty to pay attention to the traffic?

July 24, 2010

Smart displays

I think these transparent displays are very cool

and when they are flexible and indestructible,

they will offer great potential for interesting applications in the future.

July 17, 2010

Touchless multi-touch in 3D

As I wrote some time ago, human-machine interaction is moving towards the interesting trends we saw in futuristic films years ago. Here is another example of a human-machine interface, one particularly reminiscent of “Minority Report”: a man gesticulates with both hands free in space, while a 3D scene on a screen performs the corresponding actions.

That’s impressive!

But because I see some disadvantages with this technology, it isn’t quite as impressive to me anymore. The disadvantages, or let’s say challenges, are:

  • One has to hold one’s arms up in free space continuously, which is certainly exhausting.
  • What if there are more than two hands, or more than one user? Can the algorithms match the hands to the right persons, or will there be falsely recognized (inter)actions?
  • In my opinion, the gestures were adapted to the technology, not to real user needs and perceptions.
  • The actions are not accurate enough yet.
  • For deleting files, I think some other gesture would be much better than moving the files to the edges of the screen, because that action suggests to me that a backup is kept and I can restore them again.
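
The multi-user question from the list above could, for instance, be handled by greedily pairing each detected hand with the nearest still-unassigned user, leaving hands that are too far from everyone unmatched. A hypothetical sketch, not the algorithm used in the video:

```python
def match_hands(hands, users, max_dist=0.5):
    """Greedy nearest-neighbour matching of hand positions to user positions.

    hands, users: lists of (x, y) tuples in the same coordinate space.
    Returns a list of (hand_index, user_index) pairs; hands farther than
    max_dist from every free user stay unassigned.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # All hand-user distances, closest pairs first.
    candidates = sorted(
        (dist(h, u), hi, ui)
        for hi, h in enumerate(hands)
        for ui, u in enumerate(users)
    )
    taken_h, taken_u, pairs = set(), set(), []
    for d, hi, ui in candidates:
        if d > max_dist:
            break  # everything beyond this point is too far away
        if hi in taken_h or ui in taken_u:
            continue
        pairs.append((hi, ui))
        taken_h.add(hi)
        taken_u.add(ui)
    return pairs
```

Greedy matching can be fooled when users stand close together; a globally optimal assignment (e.g. the Hungarian algorithm) would be more robust, at higher cost.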

And besides these details, a main goal for future work has to be removing the need for the user to visually recheck what he is doing with his hands (the red cross, the borders, and so on in the video). The user wants to, say, translate (move) a picture, commands that with his hands, and that’s it. The extra need to visualize the action destroys the experience, at least for me.

But the technology (free-hand gestures and 3D scenes) is a really great step into the future!

July 10, 2010

And here comes the (invisible) mouse

What a cute demonstration of a nice technology: acting as if you had a computer mouse, without the mouse:

This could give us the opportunity to interact with a laptop almost everywhere in the traditional style (in bed, at the kitchen table, etc.) without carrying a hardware mouse around.

One can feel the ‘click’, so you get tactile feedback when triggering an interaction. But what happens if the hand moves out of the camera’s sight? Will there be false inputs, or none at all? How do you inform the user of the correct hand position?
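
One simple answer to the out-of-sight question is to freeze the cursor and report the loss to the user instead of producing false inputs. A hypothetical sketch (my own names, not the prototype’s actual implementation):

```python
def hand_to_cursor(hand_xy, screen_w, screen_h, last_cursor):
    """Map a normalised camera-space hand position to screen coordinates.

    hand_xy: (x, y) with both values in [0, 1], or None when the camera
    has lost the hand. Returns (cursor, tracked): when the hand is out of
    sight, the cursor is frozen at its last position and tracked is False,
    so the UI can warn the user instead of acting on false input.
    """
    if hand_xy is None:
        return last_cursor, False
    x, y = hand_xy
    cursor = (int(x * (screen_w - 1)), int(y * (screen_h - 1)))
    return cursor, True
```

The `tracked` flag is where the “how to inform the user” question plugs in: the UI could grey out the cursor or show an edge indicator pointing back towards the camera’s field of view.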