
Posted by Andrew - January 16, 2007 - 16:52

Inspect the Unexpected

This morning for our second presentation we had a surprise: the System behaved in a totally unaccustomed way, degenerating into a most infernal rusty-flames-and-shadows thing we had never seen before -- and which we liked better than anything we'd achieved deliberately to date! This is actually encouraging, since the whole idea of the project is a system that does the unexpected and evolves.

Recent Cosmosis images can be seen here, where they will continue to accumulate.

To recap on some of the points raised in discussion:
  • [Paul] try to give participant the ability to grab preferred phenomena and keep them from dissipating
  • [Alan] maybe force the participant to interact in unusual ways, for instance swimming, or kneeling
  • [Sheelagh] recommends trying separate systems that blend at the seams, rather than depending on Chromium/WireGL; this would probably also increase the system's diversity
  • [Amy] wonders if we're planning to record interactions, and what kind/depth of recording? (there's natural video, but it's also quite easy to record the input history)
  • [David O.] consider making our circular agent deformable under collision
  • [Sheelagh] the spiky world-eaters could present their effects in a more obvious way, for instance growing or zapping ...
  • [Amy] ... or be impaled on the thorns, forming clusters which roll and transform
  • [Mary and Alan] ask ourselves what we would like people to notice about our system (with a critical eye, ear, and language); also, what do we like and dislike about the system?
  • [Amy] what would a 12-year-old say after a Cosmosis experience? they would probably love it -- but also, what about it might detract from their experience?
  • [Paul] you might be able to introduce other media, like text, without violating your artistic aim of a system which doesn't depend on external references
  • [Alan] have you considered stereo (3D viewing)?
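On Amy's recording question above: capturing the input history really is straightforward. Here's a minimal sketch of what such a recorder might look like -- hypothetical names, not the actual EyesWeb pipeline -- logging timestamped events that can later be replayed or saved for analysis:

```python
import json
import time


class InputRecorder:
    """Minimal sketch: log timestamped input events for later replay.

    Events are stored with their time offset from the start of the
    session, so a recorded interaction can be played back with its
    original relative timing.
    """

    def __init__(self):
        self.events = []
        self.t0 = time.monotonic()

    def record(self, event):
        # Store the event with its offset (in seconds) from session start.
        self.events.append({"t": time.monotonic() - self.t0, "event": event})

    def save(self, path):
        # One JSON object per line, so logs can be appended and streamed.
        with open(path, "w") as f:
            for e in self.events:
                f.write(json.dumps(e) + "\n")

    def replay(self):
        # Yield (offset, event) pairs in the order they were recorded.
        for e in self.events:
            yield e["t"], e["event"]
```

A session wrapper would just call `record()` for every sensed gesture; pairing this log with the natural video would give both the "what the system saw" and "what the participant did" views.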
Also discussed: the performance aspect of our presentation, and how it suggests possibilities for VJs (video jockeys), as well as "wizard of oz" (a.k.a. "evil genius", a.k.a. "man behind a curtain") enhancement for the public participants. Speaking of wizards, Alan has generously salvaged two impressively heavy IR filters which should be of great use given an IR-sensitive camera. (Using IR carries the dual benefits of being non-distracting and of subverting feedback noise from video illumination: noise in the EyesWeb input causes annoying aberrations in the sensed gestures.)

Although we felt much better about this presentation than the first one at Banff, we were still at a considerable disadvantage: we don't have input working, we don't have the tiled display working, and the sound needs to be custom (cellular automata and audio texturing) and to exploit surround (OpenAL). Our goal is to have a decent solution to these problems by our third presentation.

A shout out to my man David for awesome particle-physical code and a solid grasp on the future of the project, yaar.
