Interactive Sound Texture Synthesis through Semi-Automatic User Annotations
We present a way to make environmental recordings controllable again through continuous annotations of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound using canonical time warping (CTW). The annotations then serve as a descriptor for lookup in corpus-based concatenative synthesis, thereby inverting the sound/annotation relationship. The workflow has been evaluated in a preliminary subject test: results from canonical correlation analysis (CCA) show high consistency between annotations, and a small set of audio descriptors is well correlated with them. An experiment on the propagation of annotations shows the superior performance of CTW over CCA with as little as 20 s of annotated material.
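As a rough illustration of the analysis and synthesis steps the abstract describes, the sketch below correlates a continuous annotation with per-frame audio descriptors via CCA, then inverts the relationship by nearest-neighbour grain lookup, the basic selection step of corpus-based concatenative synthesis. This is not the authors' implementation: the file name `texture.wav`, the descriptor choice (RMS energy and spectral centroid via librosa), the random stand-in annotation, and the use of scikit-learn's CCA are all assumptions, and the CTW temporal-alignment step used for propagation is omitted.

```python
# Minimal sketch (not the paper's code) of annotation/descriptor CCA and
# annotation-driven grain lookup for concatenative synthesis.
import numpy as np
import librosa
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import NearestNeighbors

y, sr = librosa.load("texture.wav", sr=None)  # hypothetical recording
hop = 512

# Per-frame audio descriptors: RMS energy (loudness proxy) and spectral
# centroid (brightness). Descriptor choice is illustrative.
rms = librosa.feature.rms(y=y, hop_length=hop)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
descriptors = np.stack([rms, centroid], axis=1)  # shape (frames, 2)

# Continuous annotation resampled to one value per frame, e.g. wind
# strength drawn by the user (random here, as a stand-in).
annotation = np.random.rand(descriptors.shape[0], 1)

# (1) CCA: find linear projections of annotation and descriptors that are
# maximally correlated; the canonical correlation indicates how
# consistently the annotation tracks the audio.
cca = CCA(n_components=1)
a_proj, d_proj = cca.fit_transform(annotation, descriptors)
r = np.corrcoef(a_proj[:, 0], d_proj[:, 0])[0, 1]
print(f"canonical correlation: {r:.2f}")

# (2) Inversion: treat the annotation as a lookup descriptor and select,
# for a target control value, the frame (grain) whose annotation value
# is closest.
index = NearestNeighbors(n_neighbors=1).fit(annotation)
target = np.array([[0.8]])  # desired wind strength on the annotation scale
_, frame = index.kneighbors(target)
print(f"play grain starting at sample {frame[0, 0] * hop}")
```

In the paper's workflow, CTW additionally warps the two sequences in time before correlating them, which is what allows a short partial annotation to be propagated across the whole recording.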
| Field | Value |
|---|---|
| Item Type | Book Section |
| Keywords | sound textures, audio descriptors, corpus-based synthesis, canonical correlation analysis, canonical time warping |
| Departments, Centres and Research Units | Computing > Embodied AudioVisual Interaction Group (EAVI) |
| Date Deposited | 23 Jan 2015 12:17 |
| Last Modified | 29 Apr 2020 16:05 |
Download: schwarz2014interactive.pdf (Accepted Version)