dc.contributor.author | Niwa, K | en
dc.contributor.author | Hioka, Yusuke | en
dc.contributor.author | Uematsu, H | en
dc.date.accessioned | 2018-10-17T01:44:02Z | en
dc.date.issued | 2018-11 | en
dc.identifier.citation | IEEE Transactions on Multimedia 20(11) 2871-2881 2018 | en
dc.identifier.issn | 1520-9210 | en
dc.identifier.uri | http://hdl.handle.net/2292/42413 | en
dc.description.abstract | In virtual reality, 360° video services delivered through head-mounted displays or smartphones are widely available. Some state-of-the-art devices can vary the auditory location of an object perceived by the user as the object's visual location in the video moves with changes in the user's looking direction. Nevertheless, an acoustic immersion technology that generates binaural sound to maintain a good match between the auditory and visual localization of an object in 360° video has not been studied sufficiently. This study focuses on an approach that synthesizes semibinaural sound composed of virtual sources located in each angular region and the representative head-related transfer functions of each angular region. To minimize the computational cost of audio rendering and to reduce the latency of downloading data from servers, the number of angular regions should be reduced while maintaining a good match between the auditory and visual localization of an object. In this paper, we investigate the minimum number of angular regions at which a good match can be maintained by conducting subjective tests using a 360° video viewing system composed of virtual images and sound sources. The subjective tests confirmed that the acoustic field should be divided into more than six equispaced angular regions to achieve natural auditory localization that matches an object's location in 360° video. | en
dc.publisher | Institute of Electrical and Electronics Engineers | en
dc.relation.ispartofseries | IEEE Transactions on Multimedia | en
dc.rights | Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. | en
dc.rights.uri | https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm | en
dc.rights.uri | https://www.ieee.org/publications/rights/author-posting-policy.html | en
dc.title | Efficient audio rendering using angular region-wise source enhancement for 360° video | en
dc.type | Journal Article | en
dc.identifier.doi | 10.1109/TMM.2018.2829187 | en
pubs.issue | 11 | en
pubs.begin-page | 2871 | en
pubs.volume | 20 | en
dc.rights.holder | Copyright: IEEE | en
pubs.end-page | 2881 | en
dc.rights.accessrights | http://purl.org/eprint/accessRights/OpenAccess | en
pubs.subtype | Article | en
pubs.elements-id | 734357 | en
pubs.org-id | Engineering | en
pubs.org-id | Mechanical Engineering | en
pubs.record-created-at-source-date | 2018-03-28 | en