Spatial sound rendering using measured room impulse responses

dc.contributor.author: Li, Yan
dc.contributor.supervisor: Driessen, Peter F.
dc.contributor.supervisor: Tzanetakis, George
dc.date.accessioned: 2010-08-24T20:01:45Z
dc.date.available: 2010-08-24T20:01:45Z
dc.date.copyright: 2010
dc.date.issued: 2010-08-24T20:01:45Z
dc.degree.department: Dept. of Electrical and Computer Engineering
dc.degree.level: Master of Applied Science (M.A.Sc.)
dc.description.abstract: This thesis presents a spatial sound rendering system for use in immersive virtual environments. Spatial sound rendering aims to artificially reproduce the acoustics of a space. It has many applications, such as music production, film, electronic gaming, and teleconferencing. Conventionally, spatial sound rendering is implemented with digital signal processing algorithms derived from perceptual models or simplified physical models. While flexible and/or efficient, these models cannot capture the acoustic impression of a space faithfully. On the other hand, convolving the sound sources with properly measured impulse responses produces the highest possible fidelity, but it is impractical for many applications because each impulse response corresponds to a single source/listener configuration, so the sources and listeners cannot be relocated. In this thesis, techniques for measuring multichannel room impulse responses (MMRIR) are reviewed. Methods for analyzing measured MMRIR and rendering virtual acoustic environments based on that analysis are then presented and evaluated. The analysis can be performed off-line. During this stage, a set of filters representing the characteristics of the air and walls inside the acoustic space is obtained. Based on the assumption that the MMRIR acquired at one "good" position in the target space can be used to simulate the late reverb at other positions in the same space, appropriate segments that can serve as reverb tails are extracted from the measured MMRIR. The rendering system first constructs an early reflection model from the positions of the listener-source pair and the derived filters, then combines it with the late reverb segments to form a complete listener-source-room acoustic model that can synthesize high-quality multichannel audio for arbitrary listener-source positions. Another merit of the proposed framework is its scalability: at the expense of slightly degraded rendering quality, the computational complexity can be greatly reduced, which makes the framework suitable for a wide range of applications with different quality and complexity requirements. The proposed framework has been evaluated with formal listening tests, which demonstrate its effectiveness in preserving spatial quality while positioning the listener-source pair accurately, and validate the key assumptions made by the proposed system.
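The hybrid structure described in the abstract (a parametric early-reflection model combined with a late reverb tail cut from a measured impulse response) can be conveyed with a minimal sketch. The function name `render_spatial`, the simple delay/gain list for early reflections, and all numeric values below are illustrative assumptions, not the thesis implementation, which derives its early-reflection filters and reverb segments from the measured MMRIR.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_spatial(dry, early_delays_s, early_gains, late_tail, fs=48000):
    """Sketch of hybrid rendering: a sparse early-reflection model
    combined with a measured late-reverb tail (assumption-laden demo)."""
    # Build a sparse early-reflection impulse response from delays and gains,
    # e.g. produced by a simple geometric model of the listener-source pair.
    n_early = int(max(early_delays_s) * fs) + 1
    h_early = np.zeros(n_early)
    for t, g in zip(early_delays_s, early_gains):
        h_early[int(round(t * fs))] += g

    # Convolve the dry source with both parts and sum; the late tail is
    # assumed to be position-independent within the same room.
    early = fftconvolve(dry, h_early)
    late = fftconvolve(dry, late_tail)
    out = np.zeros(max(len(early), len(late)))
    out[:len(early)] += early
    out[:len(late)] += late
    return out

# Illustrative usage with synthetic data:
fs = 48000
dry = np.random.randn(fs)                      # one second of test signal
delays = [0.003, 0.011, 0.017, 0.023]          # direct sound + 3 reflections
gains = [1.0, 0.5, 0.35, 0.25]
late_tail = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))
wet = render_spatial(dry, delays, gains, late_tail, fs)
```

Truncating the reflection list or shortening the late tail is one plausible way the quality/complexity trade-off mentioned in the abstract could be realized.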
dc.identifier.bibliographicCitation: Y. Li, P. F. Driessen, and G. Tzanetakis (2006). Spatial sound rendering using measured room impulse responses. Proc. IEEE Int. Symposium on Signal Processing and Information Technology, 2006.
dc.identifier.uri: http://hdl.handle.net/1828/2961
dc.language: English
dc.language.iso: en
dc.rights: Available to the World Wide Web
dc.subject: Spatial Sound Rendering
dc.subject: Room Impulse Responses
dc.subject.lcsh: UVic Subject Index::Sciences and Engineering::Engineering::Electrical engineering
dc.title: Spatial sound rendering using measured room impulse responses
dc.type: Thesis

Files

Original bundle
Name: yli_thesis.pdf
Size: 2.36 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.81 KB
Description: Item-specific license agreed upon to submission