ID 57781
FullText URL
Author
Matsumoto, Kento (Okayama University)
Hara, Sunao (Okayama University)
Abe, Masanobu (Okayama University)
Abstract
In this paper, we propose a new algorithm to generate Speech-like Emotional Sound (SES). Emotional information plays an important role in human communication, and speech is one of the most useful media for expressing emotions. Although speech generally conveys emotional information together with linguistic information, we take on the challenge of generating sounds that convey emotional information without any linguistic information, which can make human-machine conversations more natural in some situations by providing non-verbal emotional vocalizations. We call the generated sounds “speech-like” because they do not contain any linguistic information. For this purpose, we propose to employ WaveNet as a sound generator conditioned only on emotion IDs. This idea differs fundamentally from the WaveNet vocoder, which synthesizes speech using spectral information as auxiliary features. Its biggest advantage is that it reduces the amount of emotional speech data required for training. The proposed algorithm consists of two steps. In the first step, WaveNet is trained on a large speech database to learn phonetic features; in the second step, WaveNet is re-trained on a small amount of emotional speech. Subjective listening evaluations showed that the SES could convey emotional information and was judged to sound like a human voice.
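The two-step scheme in the abstract (pretrain on a large neutral corpus, then re-train on a small emotional corpus, conditioning only on an emotion ID) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy linear autoregressive predictor stands in for WaveNet, the emotion labels, corpus sizes, and all hyperparameters are assumptions, and the "corpora" are synthetic signals.

```python
import numpy as np

RNG = np.random.default_rng(0)
N_EMOTIONS = 4        # assumed label set, e.g. neutral/happy/sad/angry
CONTEXT = 8           # receptive field of the toy predictor

class ToyARModel:
    """Predict the next sample from CONTEXT past samples plus an
    emotion embedding -- a stand-in for a WaveNet conditioned only
    on emotion IDs (no spectral auxiliary features)."""

    def __init__(self):
        self.w_ctx = RNG.normal(scale=0.1, size=CONTEXT)
        self.w_emo = RNG.normal(scale=0.1, size=N_EMOTIONS)

    def predict(self, context, emotion_id):
        onehot = np.eye(N_EMOTIONS)[emotion_id]
        return context @ self.w_ctx + onehot @ self.w_emo

    def train(self, signal, emotion_id, lr=1e-3, epochs=5):
        # Plain SGD on squared next-sample prediction error.
        onehot = np.eye(N_EMOTIONS)[emotion_id]
        for _ in range(epochs):
            for t in range(CONTEXT, len(signal)):
                ctx = signal[t - CONTEXT:t]
                err = self.predict(ctx, emotion_id) - signal[t]
                self.w_ctx -= lr * err * ctx
                self.w_emo -= lr * err * onehot

model = ToyARModel()

# Step 1: pretrain on a large "neutral speech" corpus (synthetic here)
# so the model acquires generic waveform (phonetic) structure.
big_corpus = np.sin(np.linspace(0, 40 * np.pi, 4000))
model.train(big_corpus, emotion_id=0, epochs=2)

# Step 2: re-train on a small emotional corpus; only a little
# emotion-labelled data is needed because the waveform model is reused.
small_emotional = 1.5 * np.sin(np.linspace(0, 4 * np.pi, 400))
model.train(small_emotional, emotion_id=2, lr=1e-3, epochs=10)
```

Because the emotion embedding is the sole conditioning input, switching the emotion ID at generation time is what changes the character of the output; the waveform-modelling capacity comes from the large pretraining corpus.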
Published Date
2019-11
Publication Title
Proceedings of APSIPA Annual Summit and Conference
Volume
2019
Publisher
IEEE
Start Page
143
End Page
147
ISSN
2640-009X
Content Type
Conference Paper
language
English
Copyright Holders
© Copyright APSIPA
Event Title
APSIPA Annual Summit and Conference
Event Location
Lanzhou, China
Event Dates
18-21 Nov. 2019
File Version
publisher
DOI