A Model for Data-Driven Sonification Using Soundscapes
A sonification is a rendering of audio in response to data, used where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge across multiple domains as well as an understanding of how end users will use the system. This makes sonification an ideal candidate for end-user development, in which the user plays a role in creating the design. We present a model for sonification that uses user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution, we use soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.
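To make the idea of a data-to-sound mapping concrete, the sketch below shows one minimal, hypothetical form such a mapping could take: a linear map from a data range onto a sound-parameter range (for instance, the level of a user-chosen soundscape layer). This is an illustrative assumption, not the mapping-generation method described in the paper; all names here are invented for the example.

```python
# Illustrative sketch (not the paper's method): a simple linear
# mapping from a data range onto a sound-parameter range, of the
# kind a data-to-sound mapping might produce.

def map_value(x, data_min, data_max, param_min, param_max):
    """Linearly map a data value into a sound-parameter range."""
    if data_max == data_min:
        return param_min
    t = (x - data_min) / (data_max - data_min)
    t = max(0.0, min(1.0, t))  # clamp out-of-range data values
    return param_min + t * (param_max - param_min)

# Hypothetical example: map temperature readings (0-40 degrees C)
# onto the playback level (0.0-1.0) of a "rain" soundscape layer.
readings = [5.0, 20.0, 35.0]
levels = [map_value(r, 0.0, 40.0, 0.0, 1.0) for r in readings]
print(levels)  # [0.125, 0.5, 0.875]
```

In a soundscape-based sonification, several such mappings would typically run in parallel, one per data stream and sound layer, so that the overall acoustic scene reflects the state of the data.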
Item Type | Conference or Workshop Item (Poster)
---|---
Keywords | Multimedia UIs; Soundscapes; HCI; End-User Development; Cross-Domain Mappings
Departments, Centres and Research Units | Computing > Embodied AudioVisual Interaction Group (EAVI)
Date Deposited | 22 Mar 2016 09:16
Last Modified | 05 Mar 2025 20:08
Download: WolfGlinerFiebrink_IUI2015 (1).pdf