The red kinematic object represents a vector field whose coverage can be adjusted to a specific area. By controlling the path of this object the user can perform a range of gestures that bring the dynamic spherical objects in and out of its vector field. Outside of this field, the dynamic objects respond predictably to user control; once they enter it, their behaviour becomes more chaotic. As a consequence, the audio produced by the yellow object is not only unique to each gesture but can also shift dynamically between random and deterministic control.
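This shift between deterministic and chaotic control might be sketched as follows. The class and function names (DynamicSphere, VectorFieldRegion, update) are illustrative assumptions, not the actual Mhyusics implementation: a body inside the field receives a randomised perturbation, while one outside follows the user's input.

```python
import random
from dataclasses import dataclass

@dataclass
class DynamicSphere:
    position: list   # [x, y, z]
    velocity: list   # [x, y, z]

@dataclass
class VectorFieldRegion:
    centre: list          # the field follows the red kinematic object
    radius: float = 2.0   # adjustable coverage
    strength: float = 1.5

    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.centre)) <= self.radius ** 2

def update(sphere, fld, user_force, dt=1 / 60):
    if fld.contains(sphere.position):
        # Inside the field: randomised forces dominate, giving the
        # chaotic behaviour described above.
        force = [random.uniform(-fld.strength, fld.strength) for _ in range(3)]
    else:
        # Outside the field: motion follows the user's input predictably.
        force = user_force
    for i in range(3):
        sphere.velocity[i] += force[i] * dt
        sphere.position[i] += sphere.velocity[i] * dt

s = DynamicSphere(position=[0.0, 0.0, 0.0], velocity=[0.0, 0.0, 0.0])
f = VectorFieldRegion(centre=[1.0, 0.0, 0.0])
update(s, f, user_force=[0.5, 0.0, 0.0])
```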
Hunt and Hermann (2011) suggested that ‘it is good practice to render short, looped sonifications, so that the effect of mapping and parameter changes become clear within the next few seconds at most.’ (pp. 290-291). A kinematic spawn point provides a fixed location at which to spawn an object with a specified configuration. Introducing scripting to this scenario permits the user to define regular intervals at which this respawn event takes place. Provided both the environmental parameters remain constant and any obstacles it encounters remain unresponsive, the dynamic object will follow the same predetermined path each time it appears. This creates a looping scenario in which the visuals and underlying data remain constant. The user is then free to adjust the mapping configuration at any point during the simulation, and the audio will immediately reflect the effect of their changes.
Hunt, A. & Hermann, T. (2011) Interactive sonification. In: Hermann, T., Hunt, A. & Neuhoff, J. G. (eds.). The Sonification Handbook. Ch. 11. Berlin, Germany: Logos Publishing House, pp. 273-298.
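A minimal sketch of such a scripted respawn loop is given below. The per-frame callback, the spawn_point object and its spawn() and current_object members are assumptions standing in for the actual Mhyusics scripting interface; the point is simply that a fixed interval plus a fixed configuration yields an identical pass each time, so only the user's mapping changes alter the audio.

```python
LOOP_INTERVAL = 4.0   # seconds between respawns, i.e. the loop length
_elapsed = 0.0

def on_frame(dt, spawn_point, scene):
    """Called once per simulation step with the frame delta time."""
    global _elapsed
    _elapsed += dt
    if _elapsed >= LOOP_INTERVAL:
        _elapsed -= LOOP_INTERVAL
        if spawn_point.current_object is not None:
            scene.remove(spawn_point.current_object)   # clear the last pass
        # Spawn with an identical configuration: with constant environment
        # parameters the object retraces the same predetermined path.
        spawn_point.current_object = spawn_point.spawn()
```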
A voxel is a volumetric pixel which takes the form of a box, typically with uniform dimensions. Any number of voxels can be combined to form a hollow mesh which can then contain other objects. This provides a visually defined constant for quantifying virtual Euclidean space, making it easier for the viewer to assess relative distance under various environmental conditions.
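As a sketch of the idea, a hollow container can be built by keeping only the boundary cells of a regular grid, leaving an empty interior for other objects; the uniform cell size is the visual unit for judging distance. The function below is illustrative, not taken from the Mhyusics source.

```python
VOXEL_SIZE = 1.0   # uniform dimension of every cell

def hollow_voxel_shell(w, h, d):
    """Return the world positions of a box-shaped shell of voxels."""
    shell = []
    for x in range(w):
        for y in range(h):
            for z in range(d):
                on_boundary = (x in (0, w - 1) or
                               y in (0, h - 1) or
                               z in (0, d - 1))
                if on_boundary:   # interior cells are skipped, so the mesh is hollow
                    shell.append((x * VOXEL_SIZE,
                                  y * VOXEL_SIZE,
                                  z * VOXEL_SIZE))
    return shell

# e.g. a 4 x 4 x 8 container: 4*4*8 - 2*2*6 = 104 boundary voxels
print(len(hollow_voxel_shell(4, 4, 8)))
```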
It can be argued that tighter rhythmic scenarios can be achieved by using this uniform approach to determine object movement. If we align a number of kinematic meshes of various lengths and place a single dynamic object inside each one, we establish the maximum relative distance over which each object can travel. By creating corresponding gravitational field objects, and restricting their field volumes to envelop a particular mesh, the system grants individual control over each dynamic object. Changing the strength and polarity of each field induces motion in the dynamic object, causing it to travel towards the boundaries of its container, whereupon a collision event is generated. With the ability to script this parameter the user can command the timing of each collision and, consequently, the rhythm of the corresponding sound.
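One way this scripting could look is sketched below; the GravityField class and on_beat callback are hypothetical names, not the display's actual API. Flipping a field's polarity on a beat grid drives its sphere toward the opposite end of its container, so the collision, and hence its sound, lands on the beat.

```python
from dataclasses import dataclass

@dataclass
class GravityField:
    strength: float       # magnitude of the attraction
    polarity: int = 1     # +1 pulls toward one end of the mesh, -1 the other

    def acceleration(self):
        return self.strength * self.polarity

def on_beat(fields, pattern, beat_index):
    """Flip each field listed in this beat's pattern entry.

    pattern is a list of sets, one per beat, naming which field indices
    reverse polarity on that beat; longer container meshes need stronger
    fields (or earlier flips) for their collisions to stay on the grid.
    """
    for i in pattern[beat_index % len(pattern)]:
        fields[i].polarity *= -1

fields = [GravityField(strength=9.8), GravityField(strength=4.9)]
pattern = [{0}, {1}, {0}, {0, 1}]   # a simple two-voice rhythm
for beat in range(8):
    on_beat(fields, pattern, beat)
```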
The second demonstration used the spawn point object to realise repeatable loops. This demonstration shows that the user can create a script for each kinematic obstacle which randomises its position whenever the audible yellow sphere comes into contact with the green trigger volume. The result is a virtual space populated with the same objects as in the second demonstration, but the yellow dynamic object now behaves uniquely each time it is spawned.
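A minimal sketch of such a randomising trigger is shown below; the on_trigger_enter callback, the Obstacle class and the body identifier are illustrative assumptions. Each contact between the yellow sphere and the trigger volume relocates every kinematic obstacle, so no two passes through the space are identical.

```python
import random

class Obstacle:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)

BOUNDS = 5.0   # half-extent of the region obstacles may occupy

def on_trigger_enter(body, obstacles):
    """Called when a dynamic body enters the green trigger volume."""
    if body == "yellow_sphere":   # only the audible sphere re-rolls the scene
        for obs in obstacles:
            obs.position = tuple(random.uniform(-BOUNDS, BOUNDS)
                                 for _ in range(3))

obstacles = [Obstacle() for _ in range(4)]
on_trigger_enter("yellow_sphere", obstacles)
```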
Cadoz (2009) proposes the notion of an energy continuum arising from our interaction with traditional instruments, where ‘the energy of the sound is a transformation of the energy of the gesture.’ (p. 218). He describes the two functions of gesture that influence this continuum as production and modification, which he illustrates through the playing of a violin: the right hand produces the energy by bowing or plucking the string, while the left hand modifies that energy by adjusting the length of the vibrating string. Demonstrating bimanual camera control in conjunction with the available objects shows how this continuum can be realised within this auditory display. Here, the left hand controls the volume level by translating the camera towards and away from an object, while the right hand adjusts the orientation of the camera to pan the audio based on the object’s relative location.
Cadoz, C. (2009) Supra-instrumental interactions and gestures. Journal of New Music Research, 38 (3), pp. 215-230.
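The two mappings could be sketched as below. The function names, the inverse-distance volume law and the sine-of-bearing pan curve are illustrative assumptions rather than the display's actual response curves: translation (left hand) changes distance and therefore level, while yaw (right hand) changes the object's bearing and therefore its stereo position.

```python
import math

def volume_from_distance(cam_pos, obj_pos, ref=1.0):
    """Left hand: translating the camera changes distance, hence level."""
    d = math.dist(cam_pos, obj_pos)
    return min(1.0, ref / max(d, ref))   # full level inside the reference distance

def pan_from_orientation(cam_pos, cam_yaw, obj_pos):
    """Right hand: camera yaw pans the source by its bearing, -1..+1."""
    dx = obj_pos[0] - cam_pos[0]
    dz = obj_pos[2] - cam_pos[2]
    bearing = math.atan2(dx, dz) - cam_yaw   # angle of the object off-axis
    return max(-1.0, min(1.0, math.sin(bearing)))

# e.g. an object two units ahead and slightly right of a level camera
cam, obj = (0.0, 0.0, 0.0), (0.5, 0.0, 2.0)
print(volume_from_distance(cam, obj), pan_from_orientation(cam, 0.0, obj))
```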
Mhyusics Auditory Display was developed using: