Humans have officially given their voice to machines: a machine can now speak almost as naturally and accurately as a human.
A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which claims near-human accuracy at imitating audio of a person speaking from text.
The system is Google’s second official generation of the technology, and it consists of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio.
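To make that intermediate representation concrete, here is a minimal sketch of computing a mel spectrogram from a waveform using the librosa library. The file path and parameter values below are illustrative assumptions, not the exact settings from the paper:

```python
import numpy as np
import librosa

# Load a short speech clip (the file path here is hypothetical).
waveform, sample_rate = librosa.load("speech_sample.wav", sr=22050)

# Short-time Fourier transform followed by a mel-scale filterbank.
# The frame size, hop length, and 80 mel bands are common TTS choices,
# not necessarily the exact values Tacotron 2 uses.
mel_spec = librosa.feature.melspectrogram(
    y=waveform,
    sr=sample_rate,
    n_fft=1024,      # samples per analysis window
    hop_length=256,  # step between successive windows
    n_mels=80,       # number of mel frequency bands
)

# Log compression, as is typical for spectrogram features.
log_mel = np.log(np.clip(mel_spec, 1e-5, None))
print(log_mel.shape)  # (80, n_frames): frequency bands over time
```

Each column of the resulting matrix describes the energy in 80 frequency bands for one short slice of time, which is the "chart" the vocoder reads.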
Below are sample voices, one from the AI and one from a human. It's hard to tell which is which.
Saying: “George Washington was the first President of the United States.”
The published paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. The model achieves a mean opinion score (MOS) of 4.53, comparable to a MOS of 4.58 for professionally recorded speech. To validate their design choices, the authors present ablation studies of key components of the system and evaluate the impact of using mel spectrograms as the input to WaveNet instead of linguistic, duration, and F0 features. They further demonstrate that using a compact acoustic intermediate representation enables significant simplification of the WaveNet architecture.
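As a rough illustration of the data flow described above, the sketch below wires up toy stand-in modules for both stages in PyTorch. This is not the actual Tacotron 2 code; the module names, layer sizes, and toy vocabulary are invented here purely to show the character-ids → mel-spectrogram → waveform pipeline:

```python
import torch
import torch.nn as nn

class ToyFeaturePredictor(nn.Module):
    """Stand-in for stage 1: maps character embeddings to mel frames.
    The real system uses an attention-based recurrent seq-to-seq
    network; a single GRU here just illustrates the shapes."""
    def __init__(self, vocab_size=64, embed_dim=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, 256, batch_first=True)
        self.to_mel = nn.Linear(256, n_mels)

    def forward(self, char_ids):
        hidden, _ = self.rnn(self.embed(char_ids))
        return self.to_mel(hidden)  # (batch, frames, n_mels)

class ToyVocoder(nn.Module):
    """Stand-in for stage 2: the real system uses a modified WaveNet;
    a linear upsampler here just turns mel frames into samples."""
    def __init__(self, n_mels=80, samples_per_frame=256):
        super().__init__()
        self.upsample = nn.Linear(n_mels, samples_per_frame)

    def forward(self, mel):
        return self.upsample(mel).flatten(1)  # (batch, n_samples)

# Encode text as character ids over a toy vocabulary, then run both stages.
text = "george washington was the first president"
char_ids = torch.tensor([[ord(c) % 64 for c in text]])
mel = ToyFeaturePredictor()(char_ids)  # stage 1: text -> spectrogram
audio = ToyVocoder()(mel)              # stage 2: spectrogram -> audio
print(mel.shape, audio.shape)
```

The key design point the paper highlights is exactly this split: because stage 1 produces a compact mel spectrogram rather than raw linguistic features, the stage 2 vocoder can be much simpler.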
More sample voices, from Google's demo page:
All of the phrases below were unseen by Tacotron 2 during training.
Tacotron 2 works well on out-of-domain and complex words.
“Generative adversarial network or variational auto-encoder.”
“Basilar membrane and otolaryngology are not auto-correlations.”
Tacotron 2 learns pronunciations based on phrase semantics.
(Note how Tacotron 2 pronounces "read" in the first two phrases.)
“He has read the whole thing.”
“He reads books.”
“Don't desert me here in the desert!”
“He thought it was time to present the present.”
Tacotron 2 is somewhat robust to spelling errors.
“Thisss isrealy awhsome.”
Tacotron 2 is sensitive to punctuation.
(Note how the comma in the first phrase changes prosody.)
“This is your personal assistant, Google Home.”
“This is your personal assistant Google Home.”
Tacotron 2 learns stress and intonation.
(The speaker was instructed to stress capitalized words in the training set, so simply capitalizing some words changes the overall intonation.)
“The buses aren't the problem, they actually provide a solution.”
“The buses aren't the PROBLEM, they actually provide a SOLUTION.”
Tacotron 2's prosody changes when turning a statement into a question.
“The quick brown fox jumps over the lazy dog.”
“Does the quick brown fox jump over the lazy dog?”
Tacotron 2 is good at tongue twisters.
“Peter Piper picked a peck of pickled peppers. How many pickled peppers did Peter Piper pick?”
“She sells sea-shells on the sea-shore. The shells she sells are sea-shells I'm sure.”
Additional Samples
“Talib Kweli confirmed to AllHipHop that he will be releasing an album in the next year.”
“The blue lagoon is a nineteen eighty American romance adventure film.”
“Tajima Airport serves Toyooka.”
Tacotron 2 or Human?
In each of the following examples, one clip was generated by Tacotron 2 and the other is a recording of a human, but which is which?
“That girl did a video about Star Wars lipstick.” (clip 1 | clip 2)
“She earned a doctorate in sociology at Columbia University.” (clip 1 | clip 2)
“George Washington was the first President of the United States.” (clip 1 | clip 2)
“I'm too busy for romance.” (clip 1 | clip 2)