DESERT RAIN: DEAF Discussion Notes/ Outline
Matt Adams and Scott deLahunta (18/11/00)
The following are some starting/ trigger points for an upcoming discussion based on the Desert Rain project. The discussion will take place with the workshop participants as part of the FM3-TT workshop at DEAF [http://www.v2.nl/deaf/00/time_tracking -- 14-19 November 2000] on Saturday, 18 November. There will be a wrap-up discussion/ presentation for the public at mid-day on Sunday, 19 November. These notes will be produced as part of a reader for participants. Other reader material will include materials from the contributors below.
Some Links to Workshop Contributor and Desert Rain sites:
Desert Rain: Finding One's Way
Desert Rain is a large-scale event installation and the result of a collaboration between the performance group Blast Theory and eRENA partners the University of Nottingham's Communications Research Group/ Mixed Reality Lab and ZKM, Karlsruhe. Nominated for a BAFTA in Interactive Arts last month, the piece involved the creative implementation of MASSIVE, a multi-user distributed virtual reality system developed at the Mixed Reality Lab, in combination with specially designed interface technologies developed at ZKM. In this presentation, Matt Adams (co-director of Blast Theory) and Scott deLahunta (researcher into new media and performance, and Desert Rain audience member) will present a short description of the piece with the intent of providing as vivid a portrayal as possible. An ambitious cross-disciplinary collaboration across a diverse base of knowledge and expertise, Desert Rain sustains at its core a clear understanding and manifestation of the processes of making performance. Some of these processes will be articulated and analyzed in a discussion of the making and producing of Desert Rain, touching on details such as: 1) testing of the work on groups and the development of sonic and visual cues in the virtual world; 2) additional development of the MASSIVE virtual reality software, including collision detection and terrain following; 3) the development and evolution of different layers of interactivity and their effectiveness within the context of the work, e.g. audience member to audience member, audience member to virtual entity, audience member to 'off-screen' performer, etc.
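Terrain following, one of the MASSIVE extensions mentioned above, can be illustrated with a minimal sketch: the avatar's viewpoint is clamped to a fixed height above the ground surface beneath it. The heightfield, function names and eye-height value here are invented for the example; this is not Desert Rain's actual code.

```python
import math

def terrain_height(x, z):
    """Toy heightfield standing in for the virtual desert's ground surface."""
    return 2.0 * math.sin(x * 0.1) * math.cos(z * 0.1)

def follow_terrain(position, eye_height=1.7):
    """Keep the avatar's viewpoint a fixed height above the ground,
    whatever vertical position the navigation input produced."""
    x, _, z = position
    return (x, terrain_height(x, z) + eye_height, z)
```

On flat ground at the origin this returns an eye level of 1.7, regardless of the height the input supplied; the same clamp, applied every frame, is what keeps an avatar walking over dunes rather than flying through them.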
This is what it is (from a press packet): Desert Rain sends six participants on a mission into a virtual world. Each player is zipped into a cubicle and stands on a moveable footpad that controls their journey through this world. Together, they explore motels, deserts and underground bunkers, communicating with each other through a live audio link. The world itself is projected onto a screen of falling water, creating a 'traversable interface' through which performers can visit the player at certain key moments. Players have thirty minutes to find the target, complete the mission, and get to the final room, where others may have a very different idea of what actually happened there.
A few thoughts on how it works and why…
Dramaturgy of Instruction: [Scott] I attend Desert Rain in Bristol by entering a large warehouse beside the water and waiting in a receiving area where we are given our first set of basic instructions. Desert Rain unfolds in stages, each carefully scripted to give us just enough instruction each time to enable us to get through. One set of instructions lies at the core of the experience: how to move in the virtual world. How to move forward and back and, crucially, how to turn. Technically (in the sense of Marcel Mauss' Techniques of the Body), this is accomplished with the same set of skills one might develop to use a skateboard, to surf or to ski: shifting the centre of gravity forward, back, to the right and to the left. Other instructions give information as to the significance of various objects, virtual as well as actual. Others come later from the performers who, for the most part, remain unseen and are only heard giving me personalized instructions over my headset. Instructions also come to me from the other audience members. A further and final instruction comes in the shape of a performer who materialises through the water screen and ushers me into the final chamber.
[Matt] In total there are six distinct 'pedagogical phases': a laminated instruction card while participants are waiting outside, a briefing from a performer, a lightbox containing graphics, a magnetic swipe card containing instructions, a performer who leads participants into the virtual environment and a third performer giving audio support via headphones.
The Lowest Tech Principle: [Scott] Sensors under the moveable navigation footpads send a data signal to the MASSIVE-2 software, committing it to the usual calculation overdrive in order to feed back to the user the impression that he or she is 'moving' through this virtual space. The original version of these sensors was developed and tested in collaboration with, and at, the Centre for Art and Media (ZKM) in Karlsruhe. These original sensors sent a continuous (or analog) data signal to the MASSIVE-2 software. One could imagine that this would allow the user much more control of their movement within the virtual environment; in particular, the illusion of speed could be accomplished through variable application of weight in any direction. This would seem to be the optimal and preferred technology for a fully immersive experience. In the case of Desert Rain, however, these analog sensors failed before the premiere of the show (the testing had not been tough enough). A solution was quickly devised by Ian Taylor of the Mixed Reality team, who dismantled a joystick and built sensors that send a simple digital, ON/OFF signal to the VR software. These sensors were more reliable and more than adequate to deliver the experience of the work. While the result of a technical failure, the lesson to be extrapolated from this is that technical sophistication can easily be mistaken for necessity without a fuller understanding of the context for its use.
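The joystick-derived design can be sketched in a few lines: four ON/OFF contacts, one per edge of the footpad, are combined into a movement command. All names and the mapping itself are a hypothetical illustration of the principle, not the Desert Rain code.

```python
def footpad_to_motion(front, back, left, right):
    """Convert four boolean sensor states (weight shifted onto each edge
    of the footpad) into a (forward, turn) command for the VR software.

    forward: +1 move forward, -1 move back, 0 stand still
    turn:    +1 turn right,  -1 turn left,  0 no turn
    """
    forward = int(front) - int(back)   # opposing pads cancel out
    turn = int(right) - int(left)
    return forward, turn
```

Note what is lost relative to the analog design: there is no variable speed, only on or off in each direction. What remains, as the anecdote suggests, is everything the piece actually needed.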
Metaphors and Cues: [Matt] The piece went through three major iterations in the last nine months of the 27-month development process. One of the key areas of focus during our testing (combining focus groups, written questionnaires and direct observation) was: how do participants orientate themselves within Desert Rain, both spatially and conceptually? What level of information is required for participants to make sense of the world, to feel immersed within it, while also keeping the pace as dynamic as possible (especially given the restricted polygon budgets)? We questioned people closely about their interpretations of the environment. While acknowledging the inevitable subjectivity and thus diversity of responses, this process allowed us to tweak the design of, for example, a bunker until it had sufficient correlation to a bunker, or held a sufficient number of bunker-like properties: does it have any military connotations (which might be an artistic concern)? Or is it an object that would have an inside and therefore an entrance (which might be a spatial line of enquiry)?
The Time of Things: [Scott] Desert Rain time is consistent with everything from the need to make something that could be managed as a touring performance event to the development of its artistic content associated with the mediatized circumstances of the Gulf War. There are no holes in Desert Rain time: no places to take a detour, or sit out and watch beyond the watchers. There is no waiting in the airport for your delayed flight, no time for nostalgia to creep in. Desert Rain skirts the perimeters of conventional theatre viewing time (whilst keeping it in the frame) and overlays gaming time (including ready-set-go, stopwatch time, last chance, decision-making time), waiting room time, travel time (including lost and wandering time), task and countdown time, walking-through-sand time, amusement park ride time, narrative time (including documentary-making and TV-watching time) and unfolding interactive time.
Polygon Budgets: [Scott] From the beginning, Blast Theory had to accept that there could be relatively few adjustments to the basic architecture of the VR software, largely because of the human time involved in making them. They could work with the built-in scripting language to design the virtual environment, but had to accept the polygon budget. This was limited partly as a result of dedicating more than 80% of the network traffic to sound, which means MASSIVE has a built-in limitation on the number of polygons making up the changing visual imagery it can generate in real time. The simple visual systems and the look of the landscape are therefore part of adhering to this principle of working with what was there (bricolage) and finding the appropriate vehicle, form or context for it. However, there were still constant negotiations over what forms and functions could be added to the software. When negotiating for something additional, Blast Theory worked on the principle of defining its absolute necessity to the audience experience, and the Mixed Reality Lab worked on the combined principles of 1) what could be done within a limited time frame and 2) what would be technically interesting to do.
Sonic Spatiality: [Matt] Audio creates a sense of place, an atmosphere, an orientation tool. Because of its power to inflect otherwise neutral spaces, it provides cues about everything from the time of day to the level of urgency required. In recognition of this, MASSIVE-2 - the software created by the Communications Research Group at the University of Nottingham - devotes 80% of network traffic to audio. It uses three concepts to deliver a complex, immersive sense of sound. Firstly, it attributes an aura to each avatar (a circular zone of generated sound that diminishes in volume in concentric rings). Secondly, it attributes a focus to each avatar (a conical zone extending forwards which enhances any audio source falling within it), so that as you turn to face a sound, you hear it more clearly. Finally, MASSIVE-2 generates a nimbus from the intersections of aura and focus as two avatars meet. Building on this sensitive treatment of sound, Blast Theory constructed soundtracks for every area in Desert Rain (3 real spaces and 7 virtual spaces).
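The aura/focus model described above can be sketched as a simple volume calculation: the aura attenuates in discrete rings with distance, the focus boosts sources inside a forward cone, and their product stands in for the nimbus. The ring count, cone angle and gain values are invented for this illustration; none of it is taken from MASSIVE-2 itself.

```python
import math

def aura_gain(distance, aura_radius=10.0, rings=4):
    """Volume falls off in discrete concentric rings out to the aura edge."""
    if distance >= aura_radius:
        return 0.0
    ring = int(distance / (aura_radius / rings))   # 0 = innermost ring
    return 1.0 - ring / rings

def focus_boost(listener_heading, bearing_to_source, cone_half_angle=math.pi / 6):
    """Sources inside the listener's forward cone are enhanced (angles in radians)."""
    offset = abs((bearing_to_source - listener_heading + math.pi) % (2 * math.pi) - math.pi)
    return 1.5 if offset <= cone_half_angle else 1.0

def heard_volume(distance, listener_heading, bearing_to_source):
    """Stand-in for the nimbus: the level heard where aura meets focus."""
    return aura_gain(distance) * focus_boost(listener_heading, bearing_to_source)
```

At the same distance, a source you are facing comes through louder than one behind you, which is the behaviour the text describes: turning toward a sound makes it clearer.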
Audience, Players, Team: [Scott] “You have 20 minutes.” The game in Desert Rain has given me an overall goal: to find my way out of this virtual world within which I am currently ‘trapped’. This condition of entrapment has already begun forming in my mind as a result of the information received so far, the instructions on the way into these individual cubicles. The imaginary condition is further heightened by the reality of the hooded coat I have been given to wear, the dark, murky and pixelated quality of the VR imagery being generated by MASSIVE-2, the water on the floor surrounding the navigation footpad I am standing on, and the atmospheric ambient music coming over my headset. A further layering of experience occurs in the purposive construction of a social dynamic between myself and the other five audience members, one that makes it clear it is my choice either to find the exit on my own or with the help of and/ or by helping the others in the audience. In the end, I play the helpful one and go back to rescue the others as the time counts down. I do not escape; I assume I have perished. In the final room, I meet the other members of my team … one or two I have saved, but the hero sensation is fast fading.
Thanks to the Communications Research Group, University of Nottingham [Steve Benford, Chris Greenhalgh, Boriana Koleva and Ian Taylor] for providing many of the ideas and evidence/ illustrations for this presentation.