Historically, deliberate play with the spatial has had a minor role in music. It is only during the last century that the compositional use of musical space has seen its major achievements. Through an almost unrestrained exploration of the concept of musical sound, which altered our perspective on the sound phenomenon, the dynamic and complex interrelationships of its different attributes opened up to our apprehension. Time, spectrum and space entered into the concept of sound, and compositional activity came to operate on all aspects constituting its basic matter.

The introduction of the electronic medium into the musical domain turned out to be of crucial importance both for the extension of the concept of sound and for the integration of space into the musical flow. As with so many other aspects of the electronic music world, from two pioneering centres rose two distinct, and to an extent opposed, general attitudes towards the spatial: the "elektronische", which embedded space into the structure of the composition, and the "concrète" – or, more generally, the "acousmatic" – which assigned a performing role to spatialization.

With its related notions of "sound fixed on a support" [Chion 1991] and of "diffusion" with its associated "orchestra of speakers of different sizes and characteristics" [Bayle 1986], the acousmatic approach, embracing a large number of artists and institutions [Bayle 1993; Harrison 1998; Vande Gorne 2002; Clozier 1997 etc.], has achieved a richness of auditory scene comparable to that of an ensemble of acoustic instruments, opening up the electronic music flow to interpretation (see, for example, [Vaggione 1994]). Let us underline that it is the expansion from a compact container (e.g. a stereo support) to a flexible, extended, heterogeneous transducer array that permits this subtle adaptation to different spaces and this interpretation through diffusion. For this reason, acousmatic music has rested mainly on the stereo format, in contrast to other approaches, notably the "structural" one, which needed a multichannel support and ad hoc, relatively precise speaker setups.

Beyond this performative dimension, spatialization has mainly been concerned with the virtual positioning of the sound source at some point in the space delimited by the speakers. Chowning's contribution [Chowning 1971], later extended by Moore [Moore 1983], introduced a robust strategy to control other spatial aspects as well. Since then, a number of algorithmic and technological strategies have been developed to control virtual positioning and to add artificial spaces. Ambisonics [Gerzon 1973; Malham 2003], VBAP (Vector Base Amplitude Panning [Pulkki 1997]) and DBAP (Distance-Based Amplitude Panning [Lossius 2007]) address only the question of localization, whilst DirAC [Pulkki 2006], Wave Field Synthesis [Berkhout 1988], ViMiC (Virtual Microphone Control [Braasch 2005]) and Binaural Rendering [Blauert 1983; Moller 1992] – the latter only for listening via headphones – also include the possibility of creating a virtual space (see [Lossius 2007] for a discussion of some of these approaches).
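To give a concrete flavour of the simplest of these localization strategies, the following sketch implements distance-based amplitude panning in the spirit of DBAP; it is a simplification for illustration (the function name, default rolloff and normalization are our assumptions, not Lossius's reference implementation): each speaker receives a gain that falls off with its distance from the virtual source, and the gains are normalized to constant total power.

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0):
    """Per-speaker amplitude gains for one virtual source,
    in a simplified DBAP-style scheme: gains decrease with
    distance to each speaker and are power-normalized."""
    # Exponent derived from the rolloff in dB per doubling of distance.
    a = rolloff_db / (20.0 * math.log10(2.0))
    eps = 1e-9  # avoid division by zero when the source sits on a speaker
    dists = [math.dist(source, spk) + eps for spk in speakers]
    raw = [1.0 / d ** a for d in dists]
    # Normalize so the total radiated power (sum of squared gains) is 1.
    k = 1.0 / math.sqrt(sum(g * g for g in raw))
    return [k * g for g in raw]

# A source halfway between two of four speakers in a unit square layout:
speakers = [(0, 0), (1, 0), (1, 1), (0, 1)]
gains = dbap_gains((0.5, 0.0), speakers)
```

As expected, the two nearest speakers receive equal, dominant gains, while the far pair still radiates a little energy – the characteristic behaviour that makes DBAP independent of any precise speaker geometry.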

Each of these approaches has its peculiarities and limitations; some depend on a very precise speaker setup, others are more flexible. The composer should be free to combine different spaces and make them interact, and for each of them one or another of the above spatialization strategies may be better suited. In the piece Streams, Extremes and Dreams [Gonzalez-Arroyo 2002], for 24 channels distributed non-uniformly along a virtual cube around the audience, the composer combined different spatialization techniques not only to achieve the desired artistic results, but also because each was better suited to express certain ideas. Few specialized environments take this aspect into consideration in their design; the Spat [Jot 1992], a refined library of tools to spatialize sound, allows this possibility to an extent, whilst Jamoma [Place 2006] explicitly seeks to combine approaches, its authors being fully conscious of the potential of such combinations to increase expressive power.

As a general rule, the sound object has been considered a source-point in space, possibly describing trajectories. To an extent, SSP (Sound Surface Panning), based on VBAP and implemented in the Zirkonium platform [Ramakrishnan 2006], addresses the question of sound objects as surfaces, and Scatter [McLeran 2008], because of its granular, dictionary-based approach, also takes this question into consideration. The increase of spatial dimensionality is not only a way to enrich the spatial aura of the sound object, but a first step towards a conception of sound that opens up a perspective on its plasticity. Toiles en l'air [González-Arroyo 2008] uses 12 speakers on 4 height levels around and inside the audience space to give form to sound surfaces of different qualities and behaviours, stimulating a plastic perspective. There is increasing interest in this direction: the Sounding Object project [Rocchesso & Fontana 2003], though technically oriented, is a milestone in this process, Truax [1999] addresses the question directly, and Smalley [1997, 2007] has built a refined system of concepts relating quality and space.

It is, however, of utmost importance to integrate spatialization into the compositional process, not to leave it as an "external" procedure to be applied afterwards. Spatialization tools should merge seamlessly into the compositional software environment so that they can be subjected to the governance of formal laws. It was not until computer languages developed to a certain standard and processor speeds reached a proper magnitude that the more general question of integrating synthesis and composition could be addressed. Certain sound synthesis software allowed a gradual advance towards a true formalization of musical ideas: Csound [Vercoe 1991], Cmusic [Moore 1990], Chant-Formes [Rodet 1984a,b], and later Chroma [Stroppa 2002] or Stella-CLM [Taube 1993; Schottstaedt 1994] are amongst the most significant. Foo [Eckel, González-Arroyo 1994; Rumori 2003], conceived as a model environment for the composition of music using synthetic sound, married in a single platform a fluent language of signal-processing modules and a set of abstractions for the composition of music. Amongst its aims were to offer the possibility of an expressive formalization of musical ideas and to take a first step towards the integration of the spatial in composition.

Performance – or, better said, interpretation through performance1 – has been a difficult question for our musical domain. So-called real-time music had to partly sacrifice richness and control over the synthesis of sound in order to produce pieces with an important performative side; the technology simply did not allow it. It is only relatively recently that the processing power of computers has made it possible for certain musical software platforms (Pure Data [Puckette 1996], Max/MSP [Puckette 1991; Zicarelli 1998], SuperCollider [McCartney 1996]) to allow powerful, structurally controlled, real-time synthesis. This development has made possible the idea of Generative Music: an algorithmic process generating in real time a variant of its model. In Gehäuse [Eckel 2001a], a model created with Max/MSP generated in real time a 16-channel flow of sound, which used the physical properties of four connected rooms in a building to create a plenitude of phantom sources moving from room to room.

The idea of translating body movement into electronic sound – as old as the first applications of electronics in music, but with the sensitivity and accuracy achievable nowadays – directs us to a kind of performance which may potentially control all aspects of the musical flow with the refinement typical of traditional musical instruments. A new dimension of the composition of electronic music has been opened. From 2001 to 2003, Eckel and González-Arroyo, with an international team, designed and developed the compositional environment for the LISTEN2 platform [Diehl 2003]; this environment permitted the authoring of immersive audio-augmented environments, including the geometrical description of the site and the set (i.e. added physical bodies of relevance to the piece). Raumfaltung [2003] ([Adolphs 2003; González-Arroyo 2005]), realized by these same two authors in cooperation with the poet O. Egger and the artist B. Zoderer using the LISTEN technology, was a walk-in installation controlled via radio headphones, where the movement patterns performed by each of the visitors/auditors would yield a different variant of a composed musical world.

Concerning performance, in this project we are focusing on a collaboration with the medium of dance and choreography, since we will concentrate mainly, though not exclusively, on bodily movement and its possible relationships with sound and space as a source of artistic and conceptual inspiration. As in the EGM project, dance and music are expected to enrich each other mutually. EGM has shown that dancers – through an advanced approach to motion mapping – can take the role of musicians, performing and interpreting electroacoustic music. An example of this approach can be seen in the intermedial dance and music piece Bodyscapes [Eckel 2009a], a case study of a (still very rudimentary) form of embodied generative music. Another approach closely related to this proposal has been developed at IEM recently: Motion-Enabled Live Electronics (MELE) [Eckel 2009b]. MELE is a special approach to live electronic music aiming at increasing the degree of the performers' embodiment in shaping the sound processing. This approach is also characterized by the combination of a high-resolution, fully 3D motion-tracking system with a tracking-data processing system tailored towards articulating the relationship between bodily movement and sound processing. The technological results and the artistic experiences gained in the MELE project will also serve as valuable input to this project.

This brief presentation of the vast research context in which this proposal is situated, as well as of the distinguished cooperation partners committed to investing in this project, documents the relevance of the proposed topic within our artistic and academic community and its potential for artistic innovation (for the expected impact, please refer also to section Impact).

1 There is much of performance in the composition/realization of electroacoustic music, and, besides, its presentation will always be a performance, though not necessarily an interpretation.

2 LISTEN – Augmenting everyday environments through interactive soundscapes; European Commission's IST Programme (IST-1999-20646); [Eckel 2001b]