Sub-vocalize and hear in tongues with subvocal translations
Let's talk! The computer can translate (subvocal speech)
30 Oct 2005 13:11:09 -0800
Friday, October 28, 2005
By Byron Spice
Stan Jou's lips were moving, but no sound was coming out.

Mr. Jou, a graduate student in language technologies at Carnegie Mellon University, was simply mouthing words in his native Mandarin Chinese. But 11 electrodes attached to his face and neck detected his muscle movements, enabling a computer program to figure out what he was trying to say and then translate his Mandarin into English.
The result boomed out of a loudspeaker a few seconds later: "Let me introduce our new prototype," a computerized voice announced. "You can speak in Mandarin and it translates into English."
"This is a bit of
science fiction," said Alex Waibel, director of the
Center for Advanced Communications Technologies, "but
it is a vision
that we think is very exciting." And where it once
seemed a distant
dream, it now is being actively developed thanks to
in machine translation.
This particular gadget, when fully developed, might allow anyone to speak in any number of languages or, as Dr. Waibel put it, "to switch your mouth to a foreign language."
It was one of several translation devices his research group demonstrated publicly for the first time yesterday in a videoconference with reporters in Pittsburgh and at the University of Karlsruhe in Germany.
"We want to make language translation transparent,"
Waibel, a computer scientist who holds joint
appointments at Carnegie
Mellon and Karlsruhe.
The centerpiece of the demonstration was the videoconference itself: as Dr. Waibel spoke, computer software translated his speech into Spanish and German.
Previous computer systems have translated the spoken word in limited contexts, or "domains," such as travel or medical information. But yesterday's demonstration was of so-called "open domain" speech-to-speech translation, a technically difficult feat to pull off because the spoken word is often ungrammatical and filled with colloquialisms.
"This is definitely a new frontier,"
said Kevin Knight, director of
the University of Southern
California's Information Sciences
Institute. "If you look in the
scientific literature, you couldn't
find too much today on open
domain speech translation."
What has made this possible has been a dramatic change in how computer translation programs are written. In the past, most translation software has been based on sets of rules -- dictionary definitions, grammatical rules and such. In other words, programmers tried to make a computer think like a human.

But increasingly, the trend in artificial intelligence is to let the computers think like computers, using statistical methods to draw meaning out of masses of information, said Randall Bryant, dean of Carnegie Mellon's School of Computer Science. Speech recognition programs began using these statistical methods 15 years ago, Dr. Knight said. Only recently have they been applied to language translation, "and that's why things have been improving a lot." The availability on the Internet of large amounts of translated text has been a major boon, said Dr. Waibel.
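The statistical approach described above can be sketched with a toy example. The snippet below implements IBM Model 1 word alignment, a classic 1990s statistical machine-translation technique; the article does not say which algorithms the CMU team actually used, so this is purely illustrative. It estimates word-translation probabilities from a three-sentence parallel corpus using expectation-maximization, with no dictionary or grammar rules at all.

```python
# Illustrative sketch: IBM Model 1, a classic statistical-MT technique.
# (An assumption for illustration -- the article does not name CMU's algorithm.)
# EM learns word-translation probabilities t(f|e) from raw sentence pairs.
from collections import defaultdict

corpus = [
    ("the house".split(), "la casa".split()),
    ("the book".split(), "el libro".split()),
    ("a book".split(), "un libro".split()),
]

src_vocab = {w for e, _ in corpus for w in e}
tgt_vocab = {w for _, f in corpus for w in f}

# Uniform initialization: every translation equally likely.
t = {(f, e): 1.0 / len(tgt_vocab) for e in src_vocab for f in tgt_vocab}

for _ in range(20):  # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for e_sent, f_sent in corpus:
        for f in f_sent:
            # E-step: distribute each target word's "mass" over possible sources.
            norm = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                c = t[(f, e)] / norm
                count[(f, e)] += c
                total[e] += c
    # M-step: re-estimate t(f|e) from the expected counts.
    for (f, e) in t:
        t[(f, e)] = count[(f, e)] / total[e]

best = max(tgt_vocab, key=lambda f: t[(f, "book")])
print(best)  # co-occurrence statistics alone pick out "libro"
```

Systems of the era trained models like this on millions of sentence pairs, which is why the Internet's growing supply of translated text was such a boon.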
The results aren't perfect. When Dr. Waibel announced he would take questions from reporters in Germany and America, the computer heard it as "so we glycogen it alternating questions between Germany and America." And the systems don't really understand what they are translating, so they may have trouble sometimes when a speaker tries to be humorous or ironic.
Even so, he predicted open domain systems could be ready for use soon.
"As we make contact, people will be more likely to learn
languages," Dr. Waibel said. U.S. soldiers in Iraq, for
have handheld devices that repeat foreign phrases,
learned to speak those phrases themselves and
discard the machines.
For more information on subvocal speech, visit:
http://groups.yahoo.com/group/cia_tradecraft/links/Biotech_00110954255