Ford (in cooperation with Microsoft) provides an interesting human-machine interface for their cars called Ford SYNC. Drivers control this interface mostly with their voice (apparently everything except the volume level), which lets them call people from a connected smartphone's address book, get through radio news, navigate to an unfamiliar business location, and so on.
Ford SYNC offers little visual output of navigational data and no real tactile interface. I suspect that's because they don't want to distract the driver that way. But as a consequence, a lot of the communication has to happen auditorily, and the driver has to remember many things (menu options, etc.). Thinking through those options is mentally demanding, and in my view that distracts the driver from the traffic just as much. The same applies when the driver has the 'car' read the options aloud, because they have to concentrate on listening.
I don't like this interface very much. In my view, the ideal interface is multimodal, balancing the auditory channel with more visual and tactile elements.
[ford.com: Ford SYNC]
PS: Why are most auditory machine outputs female voices?