Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning & Concatenative Synthesis

Zbyszynski, Michael; Di Donato, Balandino; and Tanaka, Atau. 2019. 'Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning & Concatenative Synthesis'. In: 14th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseille, France, 14-18 October 2019. [Conference or Workshop Item]

This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting corpus density to enable expressive variation on the original gesture-timbre space.
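
To make the mapping pipeline concrete, the following Python sketch illustrates the general shape of such a system. It is not the authors' implementation: a generic neural-network regression (scikit-learn's MLPRegressor) stands in for the paper's machine-learning stage, and a nearest-neighbour lookup over per-unit timbral features stands in for concatenative unit selection. All data shapes, feature dimensions, and names (gesture_features, select_unit, and so on) are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

# Hypothetical training data: gesture feature vectors (e.g. EMG amplitudes
# plus motion-sensor values) paired with target timbral coordinates, as
# might be recorded during a sound-tracing session. Shapes are illustrative.
gesture_features = np.random.rand(500, 12)  # 12-D gesture descriptor per frame
timbre_targets = np.random.rand(500, 3)     # 3-D timbre space (e.g. centroid,
                                            # flatness, loudness)

# Regression stage: learn a continuous mapping from gesture to timbre space.
mapper = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
mapper.fit(gesture_features, timbre_targets)

# Corpus stage: each small sound unit is indexed by its derived timbral
# features; a nearest-neighbour lookup stands in for concatenative selection.
corpus_timbre = np.random.rand(2000, 3)     # one row per sound unit
index = NearestNeighbors(n_neighbors=1).fit(corpus_timbre)

def select_unit(live_gesture):
    """Map one incoming gesture frame to the index of the closest sound unit."""
    target = mapper.predict(np.asarray(live_gesture).reshape(1, -1))
    _, unit = index.kneighbors(target)
    return int(unit[0, 0])

In this framing, densifying the corpus (adding more sound units per region of timbre space) gives the nearest-neighbour stage finer-grained choices, which is one way to read the paper's proposal for expressive variation.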


zbyszynski_gestureTimbre_final.pdf (Accepted Version)
Available under Creative Commons: Attribution-NonCommercial 3.0
