dc.contributor.author | James, Jesin | en
dc.contributor.author | Tian, L | en
dc.contributor.author | Watson, Catherine | en
dc.coverage.spatial | Hyderabad, India | en
dc.date.accessioned | 2019-11-01T01:12:40Z | en
dc.date.issued | 2018-09-06 | en
dc.identifier.citation | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. 2768-2772. 06 Sep 2018 | en
dc.identifier.issn | 2308-457X | en
dc.identifier.uri | http://hdl.handle.net/2292/48788 | en
dc.description.abstract | To further the understanding of the wide array of emotions embedded in human speech, we introduce a strictly guided simulated emotional speech corpus. In contrast to existing speech corpora, it was constructed by maintaining an equal distribution of 4 long vowels in New Zealand English. This balance is intended to facilitate studies comparing emotion-related formant and glottal source features. The corpus also contains 5 primary emotions and 5 secondary emotions. Secondary emotions are important in Human-Robot Interaction (HRI) for modelling natural conversations between humans and robots, yet few existing speech resources support their study, which motivated the creation of this corpus. A large-scale perception test with 120 participants showed that the corpus has approximately 70% and 40% accuracy in the correct classification of primary and secondary emotions respectively. The reasons behind the differences in perception accuracy between the two emotion types are further investigated. A preliminary prosodic analysis of the corpus shows significant differences among the emotions. The corpus is made public at: github.com/tli725/JL-Corpus. | en
dc.relation.ispartof | Interspeech 2018 | en
dc.relation.ispartofseries | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | en
dc.rights | Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. | en
dc.rights.uri | https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm | en
dc.rights.uri | https://www.isca-speech.org/iscaweb/index.php/archive/online-archive#faq | en
dc.title | An Open Source Emotional Speech Corpus for Human Robot Interaction Applications | en
dc.type | Conference Item | en
dc.identifier.doi | 10.21437/Interspeech.2018-1349 | en
pubs.begin-page | 2768 | en
dc.rights.holder | Copyright: ISCA | en
pubs.end-page | 2772 | en
pubs.finish-date | 2018-09-06 | en
pubs.start-date | 2018-09-02 | en
dc.rights.accessrights | http://purl.org/eprint/accessRights/OpenAccess | en
pubs.subtype | Proceedings | en
pubs.elements-id | 753573 | en
pubs.org-id | Engineering | en
pubs.org-id | Department of Electrical, Computer and Software Engineering | en
pubs.record-created-at-source-date | 2018-09-24 | en