ABSTRACT

This article presents ESCHER, a sound synthesis environment based on Ircam's real-time audio environment. ESCHER is a modular system providing synthesis-independent prototyping of gesturally-controlled instruments by means of parameter interpolation. The system divides into two components: gestural controller and synthesis engine. Mapping between the components takes place on two independent levels, coupled by an intermediate abstract parameter layer. This separation allows a flexible choice of controllers and/or sound synthesis methods.

1. INTRODUCTION

High-quality sound synthesis methods are currently widely available, reflecting massive research over the last several decades. Today, however, the emphasis is shifting onto expressive control of these methods. Traditionally, real-time control of sound synthesis has been performed by instrument-like gestural controllers such as piano keyboards, wind controllers, guitars, etc., usually based on the MIDI protocol.

Nevertheless, gestural controllers may or may not follow the design guidelines of acoustic instruments, leading to two basic gestural controller families: instrument-like controllers, modeled after acoustic instruments and allowing the application of a previously learned gestural vocabulary; and alternate controllers, which allow the use of non-traditional gestural vocabularies. Instrument-like controllers have been designed in order to extend the performance possibilities of acoustic instruments, allowing performers to apply their expert gestural control techniques to synthesis. Alternate controllers, on the other hand, provide a means of escaping from the idiosyncrasies of acoustic instrument playing techniques. They may therefore allow the use of any gesture or posture, depending on the sensing technologies employed. Thus, for the same synthesis result, one type of controller may be better suited than another, depending on the performer's background and specific musical context.

Furthermore, the facile control of sound synthesis depends heavily on how the controller outputs are related to the available synthesis parameters. Since the outputs of the controller are not necessarily analogous to the inputs of the synthesis engine, an intermediate "mapping" stage is required, where controller variables are related to available synthesis variables [4] [5] [6]. Such mapping formalisms need to be devised in order to simplify the simultaneous control of many variables; an example would be the control of additive synthesis, where one needs to control the real-time evolution of hundreds of partials.

An ideal system designed for real-time sound synthesis should therefore provide an environment where a performer may experiment with different gestural controllers and select the appropriate mapping strategies for the chosen controller and specific sound synthesis method. Our approach to this question is a system called ESCHER.

2. SYSTEM OVERVIEW

The ESCHER system is an audio environment designed to provide intuitive control of sound synthesis in real time. ESCHER's modular design permits easy adaptation to a wide range of different gestural controllers and/or different sound synthesis methods (for more information on gestural controllers, see [1], [2] and [3]).

In ESCHER the notion of a generic "composed instrument" is used to describe an instrument where the gestural controller is independent from the sound synthesis model, both related by intermediate mapping strategies. "Composed instruments" typically use two layers of parameter mapping on top of a more-or-less arbitrary synthesis engine to match various controller devices played by the performer to the parameters of the synthesis engine.

An abstract gestural model is used, which differentiates between continuous features of sound development and transitions between them. This model derives from acoustic instruments, where continuous modulations of the sound by the player (pitch variations, dynamics, etc.) alternate with abrupt changes (note attacks, bow changes, etc.).
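As an illustration of this split between continuous parameters and discrete events, the following minimal Python sketch models the abstract parameter interface of a composed instrument. It is a hypothetical sketch with invented names, not ESCHER's actual implementation (which runs in Ircam's real-time environment):

```python
# Hypothetical sketch of the abstract parameter layer described above.
# Only an illustration of the continuous/event split, not ESCHER's API.

class ComposedInstrument:
    def __init__(self, continuous_names, on_event):
        # Continuous abstract parameters, e.g. ("pitch", "dynamics")
        self.values = {name: 0.0 for name in continuous_names}
        # Callback handling discrete events (note beginnings/ends, etc.)
        self.on_event = on_event

    def set_continuous(self, name, value):
        """The first mapping layer writes continuous abstract parameters here."""
        self.values[name] = value

    def event(self, kind, **attributes):
        """Discrete events trigger transitions; their attributes qualify them
        (e.g. attack strength, legato flag)."""
        self.on_event(kind, attributes, dict(self.values))


# Usage: a wind-controller mapping could update "dynamics" continuously
# and send a "note_on" event carrying an attack-strength attribute.
instrument = ComposedInstrument(
    ("pitch", "dynamics"),
    on_event=lambda kind, attrs, values: print(kind, attrs, values),
)
instrument.set_continuous("dynamics", 0.7)
instrument.event("note_on", attack_strength=0.9, legato=False)
```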
The modular concept of composed instruments proves useful in choosing the interaction level desired by the user. Depending on the user's technical skills and on the controller being used (number of available output parameters, nature of the available parameters - continuous/discrete, range, etc.), the mapping can be configured either in a micro mode, where the performer has access to each individual parameter in detail, or in a macro mode, where some parameters may be kept constant at default values, allowing the user to concentrate on higher-level sound features. Examples of composed instruments implemented in ESCHER are described in section 3.

A controller is connected to the ESCHER system through an initial mapping layer. This layer can consist of simple one-to-one relationships, instrument-like mappings, where one considers parameter dependencies specific to an instrument model, or any other metaphor for simplifying the manipulation of simultaneous parameters, such as the one presented in [6]. In general, this mapping involves the transformation and conjunction of the raw controller data via functions (typically implemented by table lookup).

This first mapping represents an adapter between a particular controller's outputs and a set of abstract parameters defined by the user during the composed instrument definition. These abstract parameters can be understood as the interface to the composed instrument's synthesis module. While continuous values control the development of sound features according to continuous controller input, events trigger transition processes and abrupt changes. Events are often accompanied by abrupt changes of the continuous parameters as well.

In the context of a simple musical instrument model, continuous parameters could be pitch, dynamics, inharmonicity, density, formant specifications, etc. The beginning and end of a note would be discrete events, each having individual attributes in order to differentiate various transition types such as legato or staccato.

Timbre evolution in ESCHER is accomplished by navigating through different spaces by means of continuous movements and transitions. The second mapping layer thus consists of a linear interpolation of the parameter sets determining the synthesized sound for continuous movements. A set of D continuous parameters forms a D-dimensional space, where the nodes of a grid each represent a particular quasi-stationary synthesized sound. When the vector of values of the continuous abstract parameters points to a position in the D-dimensional parameter space, the actual synthesized sound is determined via linear interpolation of the synthesis parameters associated with the nearest nodes of the grid.

While continuous parameters of the composed instrument control the continuous movement within one parameter space in this manner, discrete events are handled by a "road-map" that governs navigation between multiple spaces, possibly triggering appropriate transitions. Thus a complete definition of a composed instrument for a particular synthesis engine consists of the abstract parameters, the distribution of the synthesis parameter sets over the nodes of the parameter spaces, and the road-map between the spaces.

Various sound synthesis methods can be used for composed instruments. Whatever the method, however, the synthesis should be defined in such a way that continuous linear interpolation of its parameters produces a continuous development of sound features.
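The second mapping layer can be illustrated with a short sketch of grid interpolation for D = 2. The code below is a hypothetical example, assuming the synthesis parameter sets stored at the grid nodes are plain numeric vectors of equal length (for instance, partial amplitudes); ESCHER generalizes the same idea to D dimensions and to parameter segments:

```python
# Hypothetical sketch of the second mapping layer for D = 2.
# Not taken from ESCHER's implementation.
import numpy as np

def interpolate_2d(grid, x, y):
    """Bilinearly interpolate the parameter sets at the four nearest nodes.

    grid -- array of shape (nx, ny, n_params), one parameter set per node
    x, y -- continuous abstract parameters, in grid units (0 <= x <= nx-1)
    """
    nx, ny, _ = grid.shape
    x = min(max(x, 0.0), nx - 1.0)
    y = min(max(y, 0.0), ny - 1.0)
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    i1, j1 = min(i0 + 1, nx - 1), min(j0 + 1, ny - 1)
    fx, fy = x - i0, y - j0
    return ((1 - fx) * (1 - fy) * grid[i0, j0]
            + fx * (1 - fy) * grid[i1, j0]
            + (1 - fx) * fy * grid[i0, j1]
            + fx * fy * grid[i1, j1])

# Example: a 7 x 3 grid (seven pitches, three dynamics) of 50-partial
# amplitude sets, queried between nodes.
grid = np.random.rand(7, 3, 50)
blended = interpolate_2d(grid, x=2.3, y=1.6)
```

In use, the interpolated vector would be sent to the synthesis engine at control rate, so that moving through the space produces a continuous development of sound features.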
Furthermore, the synthesis engine must implement the transitions triggered by the events.

Because of the intuitive relationship between their synthesis parameters and the produced sound, synthesis methods using prerecorded and pre-analysed sound segments -- like additive re-synthesis and granular synthesis [7] -- are particularly interesting for building composed instruments. Additive synthesis is well suited because of its attributes concerning time/frequency scaling as well as transformations and modulations in the frequency domain. In the context of a composed instrument controlled by a performer, additive re-synthesis can be used to sculpt prerecorded sounds.

For additive synthesis the synthesis parameters are the sets of partials (frequencies and amplitudes) corresponding to a certain sound considered as quasi-stationary. The partial sets are obtained by prior analysis of recorded sound segments.

As we already mentioned, ESCHER implements the concept of "timbral spaces" [8], where additive sound models are distributed in a virtual orthogonal D-dimensional space, each axis corresponding to an abstract continuous parameter, the sound at each node being defined via the synthesis parameter values hard-coded in that node.

Examples of similar systems using the concept of timbral spaces are ISEE [9], where four predefined parameters define a timbral space in FM synthesis, and the system developed by L. Haken and his colleagues at the University of Illinois [10]. The latter is based on the Kyma environment and uses traditional additive analysis/synthesis extended by interpolation between the analyzed sounds.

Since the parameterized segments after the analysis stage still carry unextracted features of the sound image -- including temporal modulations -- it is interesting to maintain the synthesis parameters as segments with an embodied temporal development. In this case the interpolation of the synthesis parameter segments must already take into account the temporal treatment -- "time-warping" -- of the segments. Thus at least one more value is added to the continuous parameters entering the composed instrument to control the time-warping of the segments. This might be, for example, the duration of a loop in which the segments in a particular space are read before the interpolation, or a position within the segment to control "scratching" through the recorded material.

(Figure: interpolation of time-warped synthesis parameter segments -- weights, active segments, distribution, time parameters.)

3. APPLICATIONS

One application based on the ESCHER composed instrument model, an additive re-synthesis model of a clarinet, was built using a YAMAHA WX-7 wind controller. The controller provided MIDI data for breath pressure, lip pressure and fingering, which were connected via various mappings to loudness, dynamic spectrum and pitch. Events determined the beginning of a note, parameterized by the strength of its attack and by whether or not it was played legato. Two parameter spaces were defined, corresponding to the attack and sustain parts of the clarinet sound. Twenty-one appropriate pre-analysed segments -- three different dynamics for each of seven different pitches -- were placed on a seven-by-three grid to cover the two-dimensional parameter spaces. While the segments in the attack space were read straight through, with a duration scaled inversely proportional to the strength of the note attack, the segments associated with the sustain space were read in loops in order to keep the embodied modulations of the analyzed sound.

Different transitions were defined between a sustaining note and a new attack (legato note), for newly attacked notes, and to smoothly join the attack to the sustain part (transition from attack space to sustain space).
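The different reading behaviors of the attack and sustain spaces can be sketched as follows. This is a hypothetical illustration, assuming each pre-analysed segment is stored as an array of analysis frames (one row of partial amplitudes per frame); it is not ESCHER's implementation:

```python
# Hypothetical sketch of segment reading in the clarinet example.
# Assumes a segment is an array of shape (n_frames, n_partials).
import numpy as np

def read_attack(segment, attack_strength, frame_rate=100.0):
    """Read an attack segment straight through; stronger attacks play faster
    (duration scaled inversely proportional to attack strength)."""
    duration = len(segment) / frame_rate / max(attack_strength, 1e-3)
    n_out = max(1, int(round(duration * frame_rate)))
    # Resample frame indices over the scaled duration.
    idx = np.linspace(0, len(segment) - 1, n_out)
    return segment[np.round(idx).astype(int)]

def read_sustain(segment, n_frames):
    """Read a sustain segment in a loop, keeping its embodied modulations."""
    idx = np.arange(n_frames) % len(segment)
    return segment[idx]

# Example: one sustain segment of 200 frames with 40 partial amplitudes,
# read as a loop, and the same data read attack-style.
sustain_segment = np.random.rand(200, 40)
looped = read_sustain(sustain_segment, n_frames=1000)
attacked = read_attack(sustain_segment, attack_strength=0.8)
```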
The road-map in this case basically consists of the mutual switching and initial parameterization of the two parameter spaces, driven by the incoming note events. In addition, the sustain segments are automatically joined after the attack, and each switch is associated with the triggering of an appropriate transition.

(Figure: WX-7 dynamics mapping.)

Additional instrument modeling has been developed to simulate amplitude variations in each partial of a clarinet spectrum due to variations in distance between the instrument and a fixed microphone. A module in ESCHER simulated the effect of the first reflection (from the floor, in this case) on the sound, such as occurs when an instrument moves in front of a microphone. The module was added to the basic ESCHER environment and was controlled by a position sensor placed on the controller. The "virtual" microphone position could be chosen by the user.

A second application implementing the paradigm of composed instruments in ESCHER is a granular synthesizer processing segments of speech recordings. The granular synthesis engine is controlled by four parameters -- grain period, pitch, duration and output channel -- plus a statistical variation for each one, making a total of eight synthesis parameters.

The interface of the instrument consists of continuous abstract parameters, which are mapped to the eight synthesis parameters via parameter spaces. Note-like events trigger a syllable; each syllable is treated like a musical note, with its own attributes. Finally, a parameter space consists of a speech segment and an individual mapping for the continuous parameters.
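The parameter scheme of the granular engine -- four base values, each paired with a statistical variation -- can be pictured with the following hypothetical sketch, which assumes the variation is applied as a random offset drawn once per grain (the exact distribution used in ESCHER is not specified here):

```python
# Hypothetical sketch of the granular engine's parameter scheme described
# above: four base parameters, each with a statistical variation, sampled
# once per grain. Not taken from ESCHER's implementation.
import random

BASE_PARAMS = ("grain_period", "pitch", "duration", "output_channel")

def sample_grain(params):
    """params maps each base name to (value, variation); returns one grain's
    concrete parameter values with the variation applied as a random offset."""
    grain = {}
    for name in BASE_PARAMS:
        value, variation = params[name]
        grain[name] = value + random.uniform(-variation, variation)
    return grain

# Example: eight numbers in total (a value and a variation per parameter).
params = {
    "grain_period":   (0.05, 0.01),   # seconds
    "pitch":          (220.0, 15.0),  # Hz
    "duration":       (0.08, 0.02),   # seconds
    "output_channel": (0.5, 0.5),     # panning position in [0, 1]
}
print(sample_grain(params))
```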
4. CONCLUSIONS

In this paper we presented ESCHER, a modular sound synthesis environment designed for intuitive control of different sound synthesis methods by various gestural controllers. It has been designed to allow the control of user-defined composed instruments, where the user is able to select the synthesis method, the controller device and the desired level of control. The system has been devised with the goal of providing a flexible tool for human-computer interaction in a real-time musical context. In its current state, ESCHER runs natively on SGI workstations, and a port to Linux is being prepared. Future additions to the system will include the integration of analysis and re-synthesis of noise components in additive synthesis, the use of spectral envelopes for partial transformation and manipulation, and the integration of further synthesis methods.

5. ACKNOWLEDGMENTS

The authors would like to thank Shlomo Dubnov, Xavier Rodet and Diemo Schwarz for their comments.

6. REFERENCES

[1] J. Paradiso, "New Ways to Play: Electronic Music Interfaces," IEEE Spectrum, vol. 34, no. 12, 1997.
[2] A. Mulder, "Getting a Grip on Alternate Controllers: Addressing the Variability of Gestural Expression in Musical Instrument Design," Leonardo Music Journal, vol. 6, 1996.
[3] C. Roads, The Computer Music Tutorial, chapter 14, MIT Press, 1996.
[4] T. Winkler, "Making Motion Musical: Gestural Mapping Strategies for Interactive Computer Music," Proceedings of the International Conference on Computer Music (ICMC'95).
[5] J. Rovan, M. M. Wanderley, S. Dubnov, and P. Depalle, "Instrumental Gesture Mapping Strategies as Expressivity Determinants in Computer Music Performance," Proceedings of the KANSEI - The Technology of Emotion Workshop, 1997.
[6] A. Mulder, S. Fels, and K. Mase, "Empty-Handed Gesture Analysis in MAX/FTS," Proceedings of the KANSEI - The Technology of Emotion Workshop, 1997.
[7] N. Schnell, GRAINY - Granularsynthese in Echtzeit, Beiträge zur Elektronischen Musik, No. 4, Institut für Elektronische Musik, Graz.
[8] D. Wessel, "Timbre Space as a Musical Control Structure," in C. Roads and J. Strawn (eds.), Foundations of Computer Music, MIT Press.
[9] R. Vertegaal and E. Bonis, "ISEE: An Intuitive Sound Editing Environment," Computer Music Journal, Vol. 18, No. 2, Summer 1994.
[10] L. Haken, E. Tellman, and P. Wolfe, "An Indiscrete Music Keyboard," Computer Music Journal, Vol. 22, No. 1, Spring 1998.
[11] G. Peeters, "Analyse et Synthèse des Sons Musicaux par la Méthode PSOLA," Proceedings of the Journées d'Informatique Musicale.