Just in case you thought otherwise: the developer is not the user, so get to know your users!
Keyboard with motion sensing
Five Microsoft researchers recently published a paper presenting a new technology that combines a normal keyboard with motion sensing. I like this idea because I use the gestures on my MacBook touchpad so fluently that I no longer need a mouse. If the keyboard itself can recognise my gestures, we won't need touchpads any more, which could influence laptop designs and more, I guess.
via windowsdeveloper.de, see also microsoft.com
Try Windows 1.01 (1985)
Try it yourself: Windows 1.01 emulated in the browser.
It's interesting to see some HCI designs that haven't become extinct to this day, e.g. menus and the cursor. Sure, we live in a rapidly changing HCI world right now, but we shouldn't forget the origins.
[this blog] Introducing new content type ‘link’
I’m introducing a new category ‘link’, which I’ll use to share articles worth reading and the like. I’ve also organized my existing articles into the categories ‘text’ and ‘video’, so you can pick whatever you prefer (reading or viewing).
[this blog] Ongoing with appetizers (only)
I don’t have the time to discuss every piece of information I find in detail. So I’ll start posting short snippets instead. At least this ‘blog’ will keep going…
Which cloud belongs to me?
Today there are several cloud services (from Google, Amazon, Dropbox, Box.net, Mendeley, and so on) that offer some free storage plus extra space you can pay for. And as time goes by, more and more companies are joining this trend.
That’s not what users want or need, I think. Again, it’s technology- and feature-driven service design. All the cool new features are thrown at users in the hope of binding them to the company, but nobody asks what users actually want or what they are struggling with. After a while, these services just confuse them!
For example, a user has service A with 2 GB of free storage, of which he uses half to store private images. With another service B he can synchronize all his digital files across different devices. But to do so he has to pay, because the free 5 GB won’t hold his 20 GB of music. So he pays for 50 GB, of which 30 GB sit there unused (on top of the 5 GB he gets for free anyway). And for his research he uses a nice service C that synchronizes research papers together with his notes across his devices. Because the free 500 MB aren’t enough, he pays for another 5 GB of cloud space, of which he only needs 2 GB.
As this specialization and fragmentation continues, the user is left alone with countless accounts and paid subscriptions. He pays for far more storage than he really needs, is in danger of losing track of it all, and ends up frustrated.
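To make the waste concrete, here is a back-of-the-envelope calculation using the made-up numbers from the example above; the service names and quotas are purely illustrative:

```python
# Rough back-of-the-envelope: paid storage vs. actually used storage,
# using the illustrative numbers from the example above (all values in GB).
services = {
    #            (free, paid, used)
    "Service A": (2,    0,    1),    # photos: uses half of the 2 GB free tier
    "Service B": (5,    50,   20),   # music: 20 GB of data, pays for 50 GB
    "Service C": (0.5,  5,    2),    # papers & notes: needs 2 GB, pays for 5 GB
}

total_paid = sum(paid for _, paid, _ in services.values())
total_used = sum(used for _, _, used in services.values())
total_available = sum(free + paid for free, paid, _ in services.values())

print(f"Paid for:      {total_paid} GB")                     # 55 GB
print(f"Actually used: {total_used} GB")                     # 23 GB
print(f"Sitting idle:  {total_available - total_used} GB")   # 39.5 GB
```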
Many clouds in the sky. Which one is mine?
That’s why I’m proposing a different perspective on the problem of cloud services. My concept: a user gets some free and, if needed, some paid storage, all from a single provider. The amount should be almost freely adjustable, so that every user can pick what fits his need for space and what he is willing to pay. On top of that single space per user, every cloud service company can then offer its own product or service (built on the user space mentioned above). Maybe the user has to pay extra for those services, but in my opinion he will be choosier, and the companies will have to build even better services, because the service itself, and not the amount of space, will be the unique selling point.
That’s user-centered cloud service design.
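As a rough illustration of what I mean (all names and numbers below are hypothetical placeholders, not a real API), the idea boils down to one resizable storage account per user that third-party services plug into instead of bringing their own:

```python
# Conceptual sketch of the "one user space" idea: the user owns a single,
# resizable storage quota, and third-party services request a slice of it
# instead of selling their own separate storage. Names are hypothetical.

class UserSpace:
    def __init__(self, quota_gb: float):
        self.quota_gb = quota_gb     # freely adjustable by the user
        self.allocations = {}        # service name -> reserved GB

    def resize(self, new_quota_gb: float):
        """The user scales the space (and the price) to his actual needs."""
        if new_quota_gb < sum(self.allocations.values()):
            raise ValueError("New quota is smaller than current allocations")
        self.quota_gb = new_quota_gb

    def grant(self, service: str, gb: float):
        """A service asks for room inside the user's space; the user decides."""
        free = self.quota_gb - sum(self.allocations.values())
        if gb > free:
            raise ValueError(f"Only {free} GB left; resize first or decline")
        self.allocations[service] = gb

# The user buys exactly what he needs once, then lets services use it:
space = UserSpace(quota_gb=25)
space.grant("photo service", 1)
space.grant("music sync", 20)
space.grant("paper sync", 2)
```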
Emotions and our brain
The following video can be a bit unsettling. It shows real human brains being dissected so that their structure can be analyzed, down to the genomic level.
What interests me is whether we can analyze genes and how they work together to produce a mood or emotion. What are the preconditions for an emotion like happiness? Would that even be possible? Or do people differ too much in their genomic preconditions? Or are there other influencing factors we can’t measure (yet)?
Design then develop HMIs
Recently I changed the title of this blog because I want to broaden its spectrum from mainly technology-oriented articles (HMI – Human Machine Interface) to include a more psychological point of view (UX – User Experience). These are topics I’m interested in, I read a lot about them, and I want to write about them.
—
When we take UX seriously, we have to design: for web pages, in the industrial sector, and so on. Cameron Koczon wrote an interesting article at alistapart.com in which he argues that it is An Important Time for Design. As he says, too few startups have designers in a partner role; they are treated as a required appendix or left out entirely. From my point of view the output is then primarily driven by developers and their technology, and yet “The products that take design seriously and incorporate it from the start are going to be the ones that connect with people in a way that really makes an impact in the world.”
We need designers on board who know how people perceive and interpret information and act on it. Designers have to analyze user groups, for example by building personas and extracting user needs from them, or by doing user interviews to find out what users really need or want to change. To find most of the usability problems in your product you only need five test users, as Jakob Nielsen says. But “zero users give zero insights”. (There may actually be a distinction between a user experience researcher and a designer, but I’m treating both roles as ‘designer’ in this article.)

Furthermore, startups (like all other companies) need designers to turn this knowledge into real prototypes in an iterative way, so that the products become more useful and, hopefully, more emotionally touching. Emotions are a big part of UX; they connect people to technology, to products. Developers, by contrast, often think in functions, which can cause bad user emotions that in turn disconnect user and product. So let’s make an impact in the world, with designers and developers at eye level.
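For the curious: Nielsen’s five-user claim rests on a simple model (Nielsen & Landauer) in which each additional test user uncovers a fixed share of the remaining problems, typically estimated at about 31%. A minimal sketch of that model:

```python
# Nielsen/Landauer model: the share of usability problems found with n test
# users is 1 - (1 - L)^n, where L is the share a single user uncovers
# (roughly 31% on average in their data).

def problems_found(n_users: int, l_single: float = 0.31) -> float:
    """Estimated proportion of usability problems found with n_users testers."""
    return 1 - (1 - l_single) ** n_users

for n in range(1, 8):
    print(f"{n} users: {problems_found(n):.0%} of problems found")

# With 5 users the model predicts roughly 85%, which is the basis of the
# 'five users are enough' rule of thumb.
```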
Future mobile interaction
Samsung is showing its version of future mobile interaction, with live (video) chat, augmented reality, holograms, and live translation for better international communication.
So, how long until this isn’t the future any more? Can’t wait.
But in some respects it doesn’t seem all that useful, and there is room for improvement. Because the display is transparent, other people can see your content. Do you really want others to see, for example, your chat partners? Where does the sound come from? How do you charge it, or will it never need an external power source? Where is all the internal (hardware) technology that enables this user experience? And how do I get my photos and everything else onto my private computer at home?
In my opinion this is only a vision that points the way from today’s tablets and smartphones to the next technology step. I think that a few years from now all of us will wear a little device like a wrist watch. It will contain all the hardware to connect us with the real and the virtual world. It will mainly use holograms for our visual sense, and it could include a camera and a projector to enable augmented reality experiences. This little device could monitor our vital signs permanently and maybe even draw its power from our own body (temperature?). To ensure data privacy, we would have to place it near the eyes. Maybe that’s why we will look like cyborgs in the future (but with our hands free).