Nova Spivack points me to an interesting piece from TheFeature.com about new technology under development at NASA that will allow people to "speak silently". Nova explains succinctly:
Their system intercepts nerve signals to the vocal cords before the speaker makes a sound and then figures out what words they signify. This technology will enable people to speak silently on the phone or to their computers, without moving their lips or making a sound. It's almost telepathy.
"Almost telepathy" because the system intercepts motor signals from the brain to the vocal area, and not the thoughts that precede the motor signals. Even so, it's an immensely impressive achievement just to be able to do this.
But there is one thing I'd like to know: could this technique work with Chinese and Vietnamese? These are languages that depend on tonal variations to convey meaning. My (completely uneducated) guess is that tonal variations aren't conveyed in the nerve signals before the word is actually spoken.
For that matter, even when it comes to English, tonal variations in speech convey tons of contextual information.
Polite: "Sit down."
Exasperated: "Sit down!"
Loving: "Sit down."
Commanding: "Sit down."
Were you to read that simple phrase without the prefatory descriptions, you'd have no idea what the intended tone was.
So I think sound is essential before the electronic ventriloquist can be bundled with tomorrow's cellphones. Of course, there will be countless other applications that require limited or no audible context. Or I could be dead wrong in the first place about the system's inability to detect tonal variations!