It would be nice to test this technology, which lets you feel the surfaces of displayed images. I would like to know how realistic it can be.
via GIZMODO.de
A person walks through different virtual rooms while, in reality, he is moving within a space of only nine by nine meters without noticing it. To him, the virtual space seems much larger.
via heise.de/iX, see also tuwien.ac.at
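The report doesn't spell out how the illusion is created; one common redirected-walking trick is to apply a small rotation gain, so the virtual view turns slightly more than the user's head does and the user can be steered back into the tracked area. A minimal Python sketch of that general idea, purely as an assumption on my part (the TU Wien system may well work differently, e.g. by rearranging the rooms themselves):

```python
ROTATION_GAIN = 1.3  # assumed value: virtual turns are 30% larger than real turns

def redirected_heading(virtual_heading_deg, real_turn_deg):
    """Apply a rotation gain to the user's real head turn.

    Because the virtual world turns a bit more than the user's body does,
    the user keeps re-orienting physically and can be kept inside a small
    tracked area (e.g. nine by nine meters) while the virtual path feels
    much longer.
    """
    return (virtual_heading_deg + real_turn_deg * ROTATION_GAIN) % 360.0

# Example: the user physically turns 90 degrees to the right,
# but in the virtual rooms the view turns 117 degrees.
print(redirected_heading(0.0, 90.0))  # -> 117.0
```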
It turns out that many people have wondered whether their color perception is the same as that of others. Say as a child you saw something and your mom told you it is a red strawberry, or that some other thing is a yellow banana. That is how you got your idea of red and yellow.
But that does not mean your brain interprets this visual information the same way your mom's brain does. And no one knows…
via GIZMODO.de
It seems that TV is going to be extended to stimulate our peripheral vision.
via GIZMODO.de
That’s funny!
But can this be true? Triggering an eye blink with mini 'electro-shocks' to the temple is, in my opinion, possible. You would need at least 25 blinks / electro impulses per second for each eye. That also seems feasible, I think. (I don't know exactly whether this is really enough, but it has to be the minimum!) But when I try to blink as quickly as possible, everything gets dark: the time my eyes are closed is longer than the time they are open, and the 'open time' isn't long enough to process the content / images. So from my point of view this video is funny, but nothing more.
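To make that argument about open vs. closed time a bit more concrete, here is a rough back-of-the-envelope sketch in Python. The 25 per second is the figure from above; the 150 ms blink duration is my own assumption for illustration, not a measured value.

```python
# Back-of-the-envelope check of the blink-display idea.
# All numbers are assumptions chosen for illustration, not measurements.

frames_per_second = 25                  # assumed minimum rate per eye
cycle = 1.0 / frames_per_second         # time budget per blink cycle: 40 ms

blink_duration = 0.15                   # assumed: even a fast blink takes ~150 ms
open_time = cycle - blink_duration      # time left to actually see the image

print(f"budget per cycle: {cycle * 1000:.0f} ms")
print(f"time eye is open: {open_time * 1000:.0f} ms")

# With these numbers the 'open time' is negative, i.e. the eye never fully
# reopens within the budget - which matches the impression that everything
# just goes dark when you blink as fast as you can.
```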
In the name of art, New York University professor Wafaa Bilal had a titanium base implanted in the back of his head, onto which a camera attaches magnetically.
That opens up some really nice new use cases. For example, think about hacking away at the computer in front of you while watching TV behind you. *fun* But seriously, you would have a better overview while walking down dark streets. I'm sure this is very interesting for military purposes.
The questions are: Is the human brain able to process this third eye? At the moment, W. Bilal looks at the video data with his own two eyes via a laptop. But what if one connected the camera directly to the mind via a brain-computer interface? Why didn't nature evolve such a thing? Would a human go crazy from not knowing which way he is going (forward or backward)? Is this a new step towards cyborgs?
As I wrote some time ago, human-machine interaction is moving towards the interesting trends we saw in futuristic films years ago. Here is another example of a human-machine interface, one which particularly reminds us of "Minority Report": a man gesticulates with both hands freely in space, and on a screen you can see a 3D scene responding with the appropriate actions.
That’s impressive!
—
But because I see some disadvantages with this technology, it isn't that impressive to me anymore. The disadvantages, or let's say challenges, are:
And besides the other details, a main goal for future investigations has to be removing the user's need to visually recheck what he is doing with his hands (the red cross and borders in the video and so on). The user wants to, e.g., translate a picture, commands that with his hands, and that's it. The extra need to visualize this spoils the / my experience.
But the technology (free-hand gestures and 3D scenes) is a really great step into the future!
Here is an amusing video about "believing (or not) in UFOs", which shows the limits of our imaginative power and that optical illusions should be recognized as failures of the brain.
In my opinion, we have to take these "mental / perception problems" into account when designing human-machine interfaces, so that new technology doesn't confuse the users.