Cosmosis



Posted by Andrew - April 14, 2007 - 19:08

Morphing Colourmaps from Natural Source

Notice that the cat doesn't appear to be at all mindful of the encroachment of the spline. Isn't it fun to create documents? Almost as fun as implementing algorithms, which in turn has to defer to dreaming and scheming. Where do you suppose all this text will end up? Does it serve as mortar, or does it look like it was on the receiving end of one!


Posted by David - April 1, 2007 - 23:08

Lighten up, it's just IR!

The IR arrays

I read the 'schematics' of the IR arrays more closely, and it seems that a certain less-than sign is a typo and ought to read greater-than to match the textual description of the device. I took this as permission to greatly exceed what I feared were the limits of the IR arrays' capacity, and so I present some operational - perhaps not optimal - arrays hooked up to some extra domestic transformers I had lying around.

Leftmost image: The makeshift IR arrays hooked up to power supplies of alarmingly varied volt- and amp-age.

Center image: Here the IR arrays are seen through the IR-enabled webcam. The spotlight effect is quite pronounced when they irradiate a surface this close; it works much better to cast the IR onto a faraway surface so the light is much more diffuse. Part of the problem is, of course, the webcam's stupid software adjusting itself to the relative brightness of the image rather than sticking to objective levels. (Many devices marketed to consumers do this, like when birthday candles flood the rest of an image with darkness on a home video. How annoying!)

Rightmost image: And just to show that I'm telling the truth, here's the same scene from a slightly different angle through a non-IR enabled webcam. What's interesting about this is that you can just see where the IR arrays light up the carpet in front of them, but looking with my eyes I see nothing. Clearly the IR filter on the webcam is not completely effective!

I stress tested them for a bit and none melted; their heat stabilized in a range from "somewhat warm" to "quite warm". Next step is sticking them in the VR lab to see how they work; perhaps it will reduce input noise (but realistically, only some reprogramming can do that, I think).

Fortune favours the who?

So at the Art + Science symposium thing at ACAD, along with Alan I talked rather too briefly about Cosmosis, my collaboration with Andrew generally, and ASTecs very generally. We pretty much needed about twice the time we had to talk to really get the point across, but the deed is done.

I blew up the important part of the event's poster here:

See where my name is above the David Hoffos? That's just great! Anyway, at least we did a good deed and plugged the ASTecs program.

Preparing for this event made very clear in my mind our need to better document our work. Regardless of how good it actually turns out to be, people love hearing about hip new techno-art stuff like this, so we need overview shots of people interacting with it (we had only a couple!), and a video thereof, shots of the code perhaps, and above all more flowcharts! So I'm going to be bringing my camera around from now on, let me tell you.

(Also I liked my line in the presentation above regarding Cosmosis as a "meta art object", a system we built in which a piece of art is created when the participant undergoes the act of viewing. Again, perhaps it makes it sound better than it is, but I shouldn't let humility cause me to forgo a good line.)



Posted by David - March 21, 2007 - 21:08

The Aesthetic of the "Default"

Upon reflection it appears that we have unquestioningly worked within a number of aesthetic assumptions. Even if we do not, ah, rise to the occasion and break new ground in generated aesthetics, I may as well comment on what we have(n't) done.

  • The black background:

    The default "canvas" of computer graphics is black (though sometimes a 50% gray). This is natural enough, because blackness is the absence of light: if we have not commanded a pixel to activate, it does not. But what if it were otherwise? Why should we constrain our graphics generation to default assumptions?

    To compare, a painter's canvas is white, though historically the gesso on a canvas was not necessarily as blindingly white as it can be. Some artists will mix pigments with their gesso to give a different color to the 'field', and perhaps this allows them to approach their painting with an entirely different mood than that of the "blank white paper". The walls of art galleries are also painted white. Sure, it's a neutral color, but so are gray and black. White has a high albedo, and therefore makes for a lot of ambient light from relatively little illumination (though I would argue that most rooms are far over-illuminated). Compare this with the lack of ambient light in the black-painted VR lab, designed to maximize the relative brightness of the projected image.

    If we drew our agents on white instead of black, the default image in our space would essentially be a large, bright rectangle. I don't think this would lend itself to a "tactful" use of the space, but perhaps we could use a very dark color for the background:

    The light color is way too washed out. I think the dark purple is nice; it gives things a night-time feeling. I shall have to try it on the big screen (a minimal clear-color sketch follows this list). And you can also see in these shots a new agent rendering function, which brings us to...

  • The "OpenGL look"
  • There is a particular "look" of OpenGL primitives in graphics demos that I really hope we're avoiding. Sheelagh did say something positive to that effect, but ... still, anyone really familiar with OpenGL can probably name the calls we used for each visualization, though Andrew's "Starhoods" may pass the test best as he -was- asked how they were done. Still, there is very much a fireworks show feeling to the whole thing, you know, bright points of light glittering and moving around. I'd like to get away from this "thousand points of light" look as much as we can. But it's so easy, and so .. pretty.
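Back to the background-color bullet above: in OpenGL the "default canvas" is literally just the clear color, so trying the dark purple is a one-liner. A minimal sketch, assuming PyOpenGL; the actual RGB values here are invented, not the ones in the shots above:

from OpenGL.GL import glClearColor, glClear, GL_COLOR_BUFFER_BIT

# Hypothetical dark-purple background -- the values are made up here,
# not the ones used in the screenshots above.
NIGHT_PURPLE = (0.08, 0.02, 0.12, 1.0)

def begin_frame(background=NIGHT_PURPLE):
    """Clear the frame to the chosen background before drawing agents."""
    glClearColor(*background)
    glClear(GL_COLOR_BUFFER_BIT)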

And with the project deadline fast approaching, I fear that we don't have nearly as much time as we'd like to split between implementing new features and polishing old ones. Suffice it to say, it feels like the technical demands (read: sci/tech) are taking precedence over conceptual development (art). But then the first semester was easily dominated by conceptual matters, with a good deal of technical effort siphoned off onto the mini-projects.

Modulating Input

The signal to noise ratio with the webcam input is such that I don't think that the expression of the interactor's movement much matters at all - yes, technical demands over conceptual. Still, we've had many suggestions about how participants can be given a context for their interaction with the system.

  • Lines could be put on the floor with tape designating where a person should and shouldn't move. Such authoritarianism may put people into a restrictive mindset, though, or maybe they could use the 'bounds' as an area where permission is given to allow expression. I smell a user-study.
  • It's been said that a dancer would think of movement in entirely different ways than geeks like us, and to see how a dancer would interface with the system could tell us something about how it should see -- unfortunately, with regard to the previously stated problem, I don't think the system would make a distinction between a dancer and one of us jumping around like a fool.
  • We could use props, like IR-reflective/emitting balls or sticks. This evokes the "magic phallus" idea I recall bringing up in our original proposal presentation at Banff. And it'd probably be a bad idea to throw balls around in the VR lab; still, interesting.

And there's no damn power supplies for the IR lights

Looks like it's a trip to Radioshack and/or Home Despot. Enough said.

Performance?

The mob still wants a performance out of us because watching us talk and get into our project is apparently irresistibly amusing. I'm not sure what a good time would be to do this, though. And in a way, every time we present the project it is like a performance, and perhaps we could play up ourselves in the roles.



Posted by Andrew - March 19, 2007 - 0:00


Posted by David - March 6, 2007 - 23:16

Groovy Blob Detection

or: "Come rain or snow...?"

IR Lighting:

The IR LED arrays arrived in the mail. They're really cool-looking; it's just too bad the power supplies aren't here, so we can't see 'em in action.

Blob Detection

Ya'know, image analysis can be a lot of fun. I won't go into great detail, but once I figured out how to work a particular recursive function to propagate a Blob's identity through a group of attached pixels, it all came together.

See - for this visualization, separate groups of pixels from my scowling mug are assigned random colors:

There's a threshold to disregard cells without a certain number of neighbours, to keep the number of blobs sane. The image below shows how the raw input (1) is interpreted as distinct blobs (2) and some simple information about these blobs is displayed (3), where the radius of the circle refers to the total number of pixels, the vertical line to the min and max Y range, and the horizontal line to the min and max X range.
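For the curious, the recursive labelling idea looks roughly like this. This is a sketch only -- it assumes a plain 2D list of 0/1 pixels rather than our actual Numeric arrays, the names and the minimum-pixel threshold are invented, and the real code differs in its details:

import sys
sys.setrecursionlimit(100000)  # big blobs recurse deeply; an explicit stack would be safer

def label_blobs(image):
    """image: 2D list of 0/1 pixels. Returns {blob_id: stats} for each blob found."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    blobs = {}

    def spread(x, y, blob_id):
        # Propagate the blob's identity through attached 'on' pixels (8-connected).
        if x < 0 or y < 0 or x >= w or y >= h:
            return
        if image[y][x] == 0 or labels[y][x] != 0:
            return
        labels[y][x] = blob_id
        b = blobs[blob_id]
        b["count"] += 1
        b["min_x"], b["max_x"] = min(b["min_x"], x), max(b["max_x"], x)
        b["min_y"], b["max_y"] = min(b["min_y"], y), max(b["max_y"], y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    spread(x + dx, y + dy, blob_id)

    next_id = 1
    for y in range(h):
        for x in range(w):
            if image[y][x] and labels[y][x] == 0:
                blobs[next_id] = {"count": 0, "min_x": x, "max_x": x,
                                  "min_y": y, "max_y": y}
                spread(x, y, next_id)
                next_id += 1

    MIN_PIXELS = 10  # invented threshold, in the spirit of the neighbour check above
    return dict((k, v) for k, v in blobs.items() if v["count"] >= MIN_PIXELS)

The circle radius in the debug view then comes from "count", and the two lines from the min/max ranges.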

The interaction doesn't really "feel" like anything right now, unfortunately. More information must be extracted from the blobs because, as-is, these are ambiguous blobs, not meaningful shapes. For example, I'd like to check the ratio of the X and Y bounds as well as the percentage of area which is 'active' and use those to decide whether to sub-divide the blob for more accuracy and create a, ah, "quad-tree".

I think some of the instructors urged us to scan some of our "analogue" notes, and I now recall Andrew saying I should do the same, so I finally did. Click on the helpful button below to see the full page of unreadable handwriting and unclear diagrams I scrawled today.

I have some very early notes around here somewhere as well. It may be fun to scan those to see how much our concept has changed. (And oh, has it ever - as observed last semester by instructors, the full scope of our project is incredibly ambitious. To get everything polished would take a long, long time, so necessity dictates that the scope of our project must be rather more limited than we may have hoped & dreamed.)
Where n = number of features and t = time: t = n^2.

A quick list of things to implement:

  • Sub/Mega Agents (tree structures of agents-within-agents)
  • Proper collision grid (per cell, not per agent!) + rendering of agent interactions, ie: bolt2a()
  • Render-lists to reduce unnecessary GL calls + gain framerate (see the display-list sketch after this list)
  • More variety w/ particles + fun particle calls for visualizing input & agent events
  • Get more information from Input!: velocities, trees of sub/mega-agents
  • which requires: Better Input Parsing (ie: 8 ints, first? of which is for setting a tree structure?)
  • make a Video-In "lite" w/o PyGame visualization & update it to use new Numpy rather than outdated Numeric (so it works on linux)
  • Perhaps eliminate need for EyesWeb entirely-- do image processing w/ video direct from WebCam, so, again, it works on linux
  • Make GravWell(s) more interesting - link to agents? as well as input?
  • Sound render-lists?

And so on.
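On the "render-lists" item above: assuming that ends up meaning old-style OpenGL display lists (a guess on my part), the pattern is just compile-once, call-many. A quick sketch with PyOpenGL, names invented:

import math
from OpenGL.GL import (glGenLists, glNewList, glEndList, glCallList,
                       glBegin, glEnd, glVertex2f, GL_COMPILE, GL_LINE_LOOP)

def compile_circle(radius=1.0, segments=24):
    """Compile a circle outline into a display list once, at startup."""
    list_id = glGenLists(1)
    glNewList(list_id, GL_COMPILE)
    glBegin(GL_LINE_LOOP)
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        glVertex2f(radius * math.cos(a), radius * math.sin(a))
    glEnd()
    glEndList()
    return list_id

# Per frame, per agent: glCallList(circle_list) -- one call instead of
# 'segments' vertex calls.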



Posted by David - February 15, 2007 - 17:05

To err; to input.

Input from Eyesweb:

Andrew and I met and got the EyesWeb-to-Python input working.

And it ALL works! We could get full color video input if we really wanted - but 8 bit will do just fine. We were just interpreting the data poorly by basing our video input (fewer, large packets) on our previous data input protocol (lots and lots of tiny packets). So we'd stick -all- the data into a big list and interpret it as we could, which worked for that, but, as written about a couple entries ago, the video images we were getting were enormously screwy.

What we're doing now is checking whether the packet is the correct size (6340 bytes); if it isn't, we ignore it -- the first six packets EyesWeb sends are always wrong, and there are intermittent glitches as well. If the packet is 6340 bytes, read it in! As mentioned before, Pygame has some display functions that take Python Numeric classes, so I'm using Pygame rendering to 'debug' input.

(A sidenote: Pygame only renders 8 bit images with indexed color, so we made a quick-and-dirty grayscale palette by iterating through 256 positions with 256 levels. Easy! We're receiving 8 bit images from EyesWeb that, strictly speaking, only contain bit information, white-or-black, so for the purpose of blob-analysis it may be fun to change the image array to boolean, True/False. Or whatever.) - Next step: From arrays to agents!
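Stuck together, the size check and the indexed-grayscale debug rendering look roughly like this. A sketch only: I'm assuming UDP, the port number and frame geometry are placeholders (the real packet layout is EyesWeb's business), and it's written against numpy even though we're still on Numeric:

import socket
import numpy as np
import pygame

GOOD_PACKET = 6340            # the only packet size we trust
WIDTH, HEIGHT = 80, 79        # hypothetical frame geometry -- placeholders only
PORT = 7000                   # invented port number

def debug_view():
    pygame.init()
    screen = pygame.display.set_mode((WIDTH, HEIGHT), 0, 8)   # 8-bit, indexed colour
    screen.set_palette([(i, i, i) for i in range(256)])       # quick-and-dirty grayscale
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _addr = sock.recvfrom(65536)
        if len(data) != GOOD_PACKET:
            continue          # first few packets / intermittent glitches: ignore them
        # Assume the pixel payload sits at the end of the packet (another guess).
        pixels = np.frombuffer(data[-WIDTH * HEIGHT:], dtype=np.uint8)
        frame = pixels.reshape((WIDTH, HEIGHT))
        # Strictly the values are only black-or-white, so frame.astype(bool)
        # would do for blob analysis.
        pygame.surfarray.blit_array(screen, frame)
        pygame.display.flip()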

Go to the light!

I started poking around with one of the webcams, one thing led to another, and I ended up removing the IR filter and picking up IR light!!!

  1. The silly ball-shaped camera is a-cloven in twain!
  2. Removed the lens casing thingy - turns out the good stuff is actually in the lens (meta) casing mounting bit, circled here
  3. Lens sub-casing removed from meta-casing. I wasted a good deal of time poking around with this.
  4. I delicately smashed the IR filter in the lens-meta-casing mounting and got little pieces of IR-filter glass all over my desk.

The above images are vaguely cannibalistic (or is it incestuous?) because I took them using the other webcam. Best not to think too much about it.

Success!

For lack of another IR source, this is my TV remote flashing into the now-modified webcam. The image is strange because the TV remote is flashing faster than the camera's framerate. Once I get some film to make a visible light filter, I'll get some really cool pictures.

For the MADT Festival:
  1. working motion input (I hope our IR LED arrays arrive soon!)
  2. sound, any sound
  3. more interesting agents


Posted by David - February 8, 2007 - 0:23

Shopping Spree!

Grant me the grace of infrared video

I applied for a grant for our project. Whether or not we get that fat wad of cash, we're going to need some hardware for this project and we're going to need it soon. That which separates Cosmosis from a glorified screensaver is interactivity (through vision hardware of one sort or another), so our goal is to get input working.

I'll quote my own grant application:

"...ACAD's VR Lab has a good setup of computers, projectors, and a sound system. We have mounted a webcam over the screen at the longest string of USB extension cables that the signal can support. When we attempt to run the system with the ceiling lights off (so that the projected images can be seen clearly) the webcam cannot pick up enough light to make any kind of image. Minimal lighting directed away from the screen but kind-of [did I really write "kind-of" in a grant application!?] on the viewers works poorly because only highlighted edges of people can be seen, and the light from a directed lamp is blindingly bright in a dark space. ...Also, software image enhancement[/interpretation] of the very dark images creates far too much video noise to be of use.

We need to illuminate the viewers so that their motion can be sampled, but we need to do it without visible light."

The solution is light that cannot be seen, infra-red!

The sensors of digital cameras of all types, including webcams, can actually pick up IR, but IR is unwanted for most applications, which aim to replicate images in visible light. So webcams and digital cameras are built with IR filters to block IR light. For our purposes we need to remove the IR filter and replace it with a filter that does the opposite: one that allows IR to pass and blocks visible light.

Webcam modification

A good site on modifying webcams for infrared use is the aptly-titled 'How to make a webcam work in infra red'. Choosing from the list of working webcams on the above page, I ordered two "Logitech quickcam messenger" cameras from Memory Express. They were especially cheap and I wouldn't want to saw apart an expensive webcam; too much pressure.

Reading IR light is only half the story - the space needs to be illuminated with IR light for anything to be "seen".

IR Illumination: dark light?

The market for IR illumination is geared toward security, outdoors-adventurism, and a few scientific applications. People who have something important enough to protect with infrared security cameras seem to be rich enough to pay outrageous prices for pre-built LED arrays with nice mountings. We are students; artists; scientists! And can afford no such luxury.

IR LEDs can be bought per-piece quite cheaply but they must be soldered together with resistors and other little bits of electronic hardware that are lost on me. I'm no electrical engineer and we've got quite enough tangential subject-areas on this project to last years, so I found some small pre-built yet relatively inexpensive IR LED arrays here; I hope five of them will produce sufficient illumination for our purposes. If you look closely at the picture you may note that the array's power cable ends in bare wires. After some freaking out then a brief online chat with a friend who has built electronics, I found some appropriate AC/DC power supplies and jacks from his favorite parts-supplier that will work quite nicely with the IR LED arrays.

Everything should come in over the next couple weeks, and any other of sundry cables, hubs, connectors, and mountings we can surely purchase locally.

Render me this

There are two new forms above which I've called "circles" (poetic, no?) and "urchin". Maybe the first should be "crystal spheres"; that'd be nice. The complex lines produce a very different aesthetic look than the cloudy-blobs do, and we shall want to combine all of these rendering functions with others, dynamically, to produce yet more novel effects. Especially pertinent is re-structuring the program to allow for Andrew's "megagents". I do worry about looking like an OpenGL graphics demo, though. And there's only so much we can do, I guess, and the appeal of this project shall lie in the behaviour and interaction, not the graphics themselves. Enough said.

Grid Collision Detection

While discussing the problem of collision detection in Banff we threw a lot of seemingly crazy ideas back and forth, and for a time at least dismissed most and settled for filtering the set of agents which would attempt collisions at all. Then, over our discussion-over-tea two weeks ago, Andrew pointed out that Kaye Mason had used a collision method that we had dismissed as too ridiculous to use.

From Kaye Mason's paper, Negotiating Gestalt: Artistic Expression by Coalition Formation Between Agents:

"When an agent is placed or moves, it registers with the grid. Each grid cell keeps a list of registered agents in its territory. An agent can query its own cell or neighbouring cells for a list of registered agents."

As Sheelagh pointed out during our presentation, this grid collision method keeps the size of collision detection process more-or-less uniform over a space/set filled with any number of agents. A conceptual illustration of this process appears below: The viewport is divided into a 20x20 matrix of cells with which agents register. We then consider agents which share a cell to have collided - the "collision radius" is the (adjustable) cell size. This actually works, contrary to our expectations, because it gives the impression of physical simulation, but doesn't require the processing expense of performing a complex and more "realistic" collision detection.
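A minimal sketch of the idea (not our actual code -- the names and cell size are invented), iterating over the grid cells the way we eventually should:

from collections import defaultdict
from itertools import combinations

CELL_SIZE = 0.05   # the adjustable "collision radius"; value invented

def build_grid(agents):
    """Register every agent with the cell its position falls in."""
    grid = defaultdict(list)
    for agent in agents:
        cell = (int(agent.x // CELL_SIZE), int(agent.y // CELL_SIZE))
        grid[cell].append(agent)
    return grid

def collisions(grid):
    """Yield pairs of agents that share a cell (i.e. have 'collided')."""
    for cell_agents in grid.values():
        for a, b in combinations(cell_agents, 2):
            yield a, b

# Querying a cell's neighbours as well, as in Mason's description, would
# smooth out the hard boundaries between cells.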

It still needs working-on and optimization, of course; would you believe that we're iterating through the agent list rather than the grid? Madness! And surely it can act as more than just a grid (to address Paul's concern). David O. suggested we use a quad tree, which is roughly equivalent to an adjustable grid, but is much more clever, and he kindly sent us a lengthy explanation. We shall see how far we need to take this for our application. (I'll say it again: this project has so many applicable areas that we could work for a long time on just collision detection, just agent rendering, or just video input. As is, we've got to jump from place to place to try to put together a coherent and operable whole that works well enough.)

Image Analysis

It isn't working yet. It's hard work, especially dealing with the large image-data arrays efficiently (and getting the stupid Numeric module to work properly with everything). And we're going to work on it.

I show this next image because I labored far too long to code a random "video" generator that would give ever-changing shapes for interpretation by our image-analyzer in lieu of input from EyesWeb (curse its inscrutable network-transmission format!). It works with a motion reminiscent of Cosmosis, actually. The raw values of the pixels from the image surface can be seen displayed in the console in the background there.
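For the record, a stand-in generator of that sort only needs to be a few lines -- this is a sketch, not the actual one, with the drifting-disc behaviour chosen just because it is easy and vaguely Cosmosis-like:

import random
import numpy as np

W, H = 90, 90   # same size as the clipped EyesWeb frames

class FakeBlob:
    def __init__(self):
        self.x, self.y = random.uniform(0, W), random.uniform(0, H)
        self.vx, self.vy = random.uniform(-2, 2), random.uniform(-2, 2)
        self.r = random.uniform(3, 10)

    def step(self):
        self.x = (self.x + self.vx) % W
        self.y = (self.y + self.vy) % H

def fake_frame(blobs):
    """Return a 90x90 array of 0/255 with white discs where the blobs are."""
    ys, xs = np.mgrid[0:H, 0:W]
    frame = np.zeros((H, W), dtype=np.uint8)
    for b in blobs:
        frame[(xs - b.x) ** 2 + (ys - b.y) ** 2 < b.r ** 2] = 255
        b.step()
    return frame

blobs = [FakeBlob() for _ in range(5)]
# every call to fake_frame(blobs) yields the next ever-changing "video" frame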

"You! Yes, you. I shall see you producing useful information to input to the virtual ecosystem. Soon."



Posted by Andrew - January 24, 2007 - 21:18

cave!

I have not yet addressed aesthetics here, and although David's posts are informed by discussions we've had, they are not particularly representative of my views. I'm reluctant to write about aesthetics since I've never studied them formally ... however, I have very strong feelings about the power of aesthetic sensibility to elevate the soul (as it were), and can stay silent no longer.

Aesthetic response has precious little to do with the acquisition of knowledge. Reading critical analyses of artworks may be interesting, but the knowledge can not induce the experience, it can only reflect on it, and generally quite ineffectually since it does so through expository language, and the most acutely aesthetic art forms are not discursive. Giving them a narrative context, even if it was actually the artist's own, does not even in the best case add much to the aesthetic experience, and as often damages it.

It is true that the seeker of knowledge by keeping an attentive mind and fresh impressions may be more receptive to new or deeper transports, but this is an indirect (and far from necessary) tool. Let us consider three of the most immortal and universally admired artists in Western history:
  • Music: Beethoven ("Music is the one incorporeal entrance into the higher world ... which comprehends mankind but which mankind cannot comprehend.")
  • Poetry: Emily Brontë (or Dickinson as you like!) ("Stern Reason is to judgement come / Arrayed in all her forms of gloom: / Wilt thou, my advocate, be dumb?")
  • Painting: Constable ("Painting is but another word for feeling.")
None of these were highly educated or cultured, although their irresistible greatness has finally won them perpetual recognition in such circles. If anything, there is an inverse correlation between creativity and acquired knowledge. (Obviously I'm not talking about experience and technical excellence, though -- these build character and hone expressivity, and come inevitably with predilection and practice.) And indeed, most artists who achieved greatness had scant patience for intellectualisations. Some outright scorn to write about their art; others do so in a way which appears to be a mockery of our critical expectations.

I definitely maintain that modern scientific attitudes, and art-critical ones, are a threat to aesthetic sensibility and anyone who values art deeply should be wary of knowledge-mongering and polemics. Not everything is enhanced by rationalisation, great art being a prominent case in point.


Posted by David - January 24, 2007 - 11:01
A few more quick thoughts on aesthetics

Andrew and I met over tea (which, I daresay, was excellent) and discussed a great many things last night. A common theme in our conversation was a question of aesthetics, perhaps starting in part from what I wrote yesterday about the known and unknown, and how I said I changed my mind, a bit, after only a day and would like to reword some of what I said.

To start: I quite like the imagery of following the "horizon of the unknown", and that it is the possibilities or potential of the unknown, based on the known, which make something interesting. We talked about how so many systems work like this -- human vision, for example, is based on actually seeing only a very few details while the brain builds an impression of the scene that one believes is detailed but is actually largely interpolation. Just so with the computer graphics method I believe Jeff Boyd mentioned yesterday, what's it called, adaptive polygon something? - in which the world closer to the camera gains more detail while those areas that aren't seen, or aren't seen well, are rendered with little detail. This gives the impression that the whole world is built in great detail, because someone can only ever look in one place at once and their mind builds a detailed impression of the world.

And just so in literature! An author uses language to construct a world by giving just the right touches of description, then it comes alive as a rich and detailed impression of a world in the mind of the reader.

"Good" art perhaps gives the right touches of description (or 'information', 'the known') so that the viewer actively wonders what lies "beyond the horizon of the known". It is not about keeping things hidden or withholding information but rather about telling in a manner that reveals beyond what is told. To do this well, perhaps, is good art.

I can only hope that our project can be so suggestive beyond itself.



Posted by David - January 22, 2007 - 19:17

Looks can be deceiving!
... and glRotate means that I don't need trigonometry


Video Input:

Frankly, getting video from EyesWeb is a pain in the ass due to EW's weird formatting. Andrew's lovely "Loom" program allowed us to examine the raw data output and attempt to figure out how to format the mess back into an image. Here is a link to an image page he made with some of these images, which we showed at our first presentation.

At the bottom of that page one can see that we can get undistorted image data if the video from EyeWeb is black and white. That's not grayscale: black or white. But this is quite good enough for our purposes because we just want a quick and dirty method of finding "active space".

I won't get too much into my frustration with attempting to extract a renderable array of information from the Eyesweb output. It was a bad week to quit drinking soda/pop.



There's something in those pictures; more noise would appear when I waved my hand around faster in front of the webcam. I rendered the images by stuffing 8bit per pixel EW output into Numeric Python arrays which, in turn, can be rendered more or less directly using Pygame's surfarray module. As I said, we're supposed to be getting black/white images from EW, but the above images are drawn in some pretty awful colors. The data is getting mixed up and Pygame.surfarray's 8bit surface expects a certain type of compressed color information, not B/W.

Then Andrew gave (tacit) permission to work on something else for a while, thank god.

Agent Rendering Methods:



The first two sections of the diagram above show a couple approaches to multi-agent renderings to make interactions/relationships visible. They're pretty straightforward.

The third section shows what came to me yesterday after freaking out about the possibility of having to use SIN and COS calls, and in hindsight it's obvious. Instead of calculating each point of a complex shape at angle X, just draw the complex shape then rotate it by angle X. In this way, we-the-coders can precalculate polygon-groups or make (kinda) simple polygon-drawing-algorithms then fit them to the coordinates of the agents calling the render functions. It's just like how games/applications don't calculate the angle for every line when drawing a 3d model -- they transform/rotate the coordinates, draw the model, then reload the identity aka default settings. It's hardly any work at all!
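In PyOpenGL terms the trick amounts to something like this (a sketch; the triangle stands in for whatever precalculated polygon-group an agent actually uses, and it uses push/pop-matrix rather than reloading the identity, but it's the same idea):

from OpenGL.GL import (glPushMatrix, glPopMatrix, glTranslatef, glRotatef,
                       glBegin, glEnd, glVertex2f, GL_LINE_LOOP)

def draw_unit_shape():
    """Some precalculated shape, defined once around the origin."""
    glBegin(GL_LINE_LOOP)
    for x, y in ((0.0, 1.0), (-0.8, -0.6), (0.8, -0.6)):
        glVertex2f(x, y)
    glEnd()

def render_agent(agent):
    # No SIN/COS here: move the coordinate frame to the agent, spin it,
    # draw the shape as if it sat at the origin, then restore the matrix.
    glPushMatrix()
    glTranslatef(agent.x, agent.y, 0.0)
    glRotatef(agent.angle, 0.0, 0.0, 1.0)   # degrees, about the z axis
    draw_unit_shape()
    glPopMatrix()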

This calls to mind the way our visual rendering approach appears to be evolving: there will be a number of rendering functions that take one, two, or a whole list of agents as input, and (mega)agents will know what rendering function(s) to call based on their particular "species" or what-have-you. Visual complexity will arise from the unique combinations and settings of these renderings.

(I don't think it'd be reasonable for agents to actually modify the code of their openGL rendering functions -- GL can be finicky. Agents' choices will have to take place one step above the building blocks we provide.)

And a bit on Aesthetics
A quote from some book meta-quoted from Rudy Rucker's (we)blog (He's a math prof. at San Jose State who's into cellular automata):


"Here’s a quote along these lines from David Kushner, Masters of Doom, (Random House, 2003) p. 295. Kushner is describing the programmer John Carmack, who developed most of the code for the first-person-shooter computer games Doom and Quake.

“...after so many years immersed in the science of graphics, he [John Carmack] had achieved an
almost Zen-like understanding of his craft. In the shower, he would see a few bars of light on the wall and think, Hey, that’s a diffuse specular reflection from the overhead lights reflected off the faucet. Rather than detaching him from the natural world, this viewpoint only made him appreciate it more deeply. ‘These are things I find enchanting and miraculous,’ he said...”
"



I'm of a mind that believes that having more knowledge of a subject makes it more interesting, and grants a greater and more meaningful depth of experience than a subject I know less about, or about which there is less to know. If knowing about the subject "ruins" it, then the subject may well have been the aesthetic equivalent of a "one-liner" anyway.
So if things get more interesting with more context, history, background, and content, then, because the universe is effectively infinite on the scale of human knowledge, so too is the possibility for beauty (if you'll pardon the term). The unknown is so interesting, in this case, because it's the ever-receding horizon of the known. The best art never stops, maybe, and maybe I just can't see that it ends right past the horizon.

To bring this back around, the Cosmosis project is, to me, a (comparatively small and dull) reflection by us-the-creators on the natural systems that complexity arises from in the universe. And these are the most amazing things of all, I think.

The next day: On further consideration, I think I've changed my mind. I consider aesthetics differently than I did yesterday, and will probably change my mind again tomorrow -- hopefully this is a constructive and progressive process.

Next time: Blob analysis and Infrared!


Posted by Andrew - January 16, 2007 - 16:52

Inspect the Unexpected

This morning for our second presentation we had a surprise: the System behaved in a totally unaccustomed way, degenerating into a most infernal rusty-flames-and-shadows thing that we've never seen before -- and which we liked better than anything we'd achieved deliberately to date! This is actually encouraging, since the whole idea of the project is a system which does the unexpected and evolves.

Recent Cosmosis images can be seen here, where they will continue to accumulate.

To recap on some of the points raised in discussion:
  • [Paul] try to give participant the ability to grab preferred phenomena and keep them from dissipating
  • [Alan] maybe force the participant to interact in unusual ways, for instance swimming, or kneeling
  • [Sheelagh] recommends trying separate systems which blend the seams, rather than depending on Chromium/WireGL, which would probably also increase the system diversity
  • [Amy] wonders if we're planning to record interactions, and what kind/depth of recording? (there's natural video, but it's also quite easy to record the input history)
  • [David O.] consider making our circular agent deformable under collision
  • [Sheelagh] the spiky world-eaters could present their effects in a more obvious way, for instance growing or zapping ...
  • [Amy] ... or be impaled on the thorns, forming clusters which roll and transform
  • [Mary and Alan] try to ask ourselves what we would like to be noticed about our system (critical eye, ear, language); also what do we like about, and dislike about, the system?
  • [Amy] what would a 12-year-old say after a Cosmosis experience? they would probably love it -- but also, what about it might detract from their experience?
  • [Paul] you might be able to introduce other media, like text, without violating your artistic aim of a system which doesn't depend on external references
  • [Alan] have you considered stereo (3D viewing)?
Also discussed: the performance aspect of our presentation, and how it suggests possibilities for VJs (video jockeys), as well as "wizard of oz" (a.k.a. "evil genius", a.k.a. "man behind a curtain") enhancement for the public participants. Speaking of wizards, Alan has generously salvaged two impressively-heavy IR filters which should be of great use given an IR-sensitive camera. (Using IR carries the dual benefits of being non-distracting and of subverting feedback noise from video illumination: noise in the EyesWeb input causes annoying aberrations in the sensed gestures.)

Although we felt much better about this presentation than the first one at Banff, we were still at a considerable disadvantage in that we don't have input working, we don't have the tiled display working, and the sound needs to be custom (cellular automata and audio texturing) and exploit surround (OpenAL). Our goal is to have a decent solution to these problems by our third presentation.

A shout out to my man David for awesome particle-physical code and a solid grasp on the future of the project, yaar.


Posted by Andrew - January 10, 2007 - 19:44

Rude Posey

I'm trying to keep in the habit of saving occasional images of the system. It's important when developing graphics to have a convenient way to get a window capture without interrupting the work flow too much. Can't say we have that at the moment, but I have been able to capture some images. There are some charming artifacts which I cannot yet post because the screen grabber I've always used has suddenly ("bitrot") turned corrupt!! On the right is shown a portion of a corrupt capture. I think we can fix these...

The following represent only about one day's development, so the images are not particularly diverse. Rather than clog the blog with these in the future, I'll redirect to another page where I'll be trying to organise a visual history of development.



Posted by David - January 4, 2007 - 22:44

The Universal Question:


"So what's it gonna look like?"

...Or more sophisticated variations of the above seem to have been asked every time that I've had the pleasure of presenting our project. I suppose it is only natural with all the talk of agents and ecosystems and things flying around, sticking to one another, interacting, and building complexity, but ... to me, at least, what is so amazing (and what sustains me through the periods of "dry" development) is the concept. That is, the theoretical model that novel and intricate structures and systems can construct themselves from very simple forms. No matter what the aesthetic of the rendering is, just the idea of this process is fascinating to me.

Anyway, our simplest answer to the question is: "We don't know."

While we choked on that (though I exaggerate) Alan was helpful enough to point out that what we have shown is very much in-progress (and really, a fresher realization than anyone probably knew), so he asked instead for us to talk about our -process- of building an aesthetic. The comparison was made to painting -- a painting is not just a transference of an image from the mind to the canvas, it's often a series of experiments as well as, well, playing with the paint.
Just so, we find aesthetic forms by playing with the parameters and functions of the simulation and with OpenGL! Sure, I can (speak for myself and) say that I have vague images in my head, but I accept that the final product will simply have to be inspired by those and be realized through the process of making the system. Along similar lines I've accepted that I can never draw from the images in my head; what I make is always something quite different, but perhaps equally (or more?) valuable.

And that's my answer to the question.

(And I must apologize to Andrew for posting so much; I'm not trying to make him look bad. On the contrary, I'm trying to make myself feel good as I consider the amazing amount of coding he has pushed himself to do. So there!)


Posted by Andrew - January 4, 2007 - 15:36

Some Thoughts about Dimensions

I think our main reason for wanting to keep the graphics perspective fixed is to avoid the frustrations associated with navigation in a 3D space using free gestures. However, it would be misleading to say that we are committed to a 2D environment. There was never any intention to use stereo imaging, so the graphics are always 2D, even when the models and graphics engines are supposed to simulate 3D. If we use layered 2D OpenGL, but use slight perspective rotations, this can be an efficient way to achieve enhanced perception of the layering at no cost, although this effect could also be achieved laboriously by perturbing the coordinates of the agents in software.

There are single unbroken 1D curves which fill space. Fractals are so-called because their dimension is not an integer, but is fractional (for example, lying between 2D and 3D). There are definitions of dimension which give different dimension values for the same models.

The gestural space is 3D, but the perspective of the camera projects this to 2D, and the gestures themselves are more like 1D curves. The abstract ecosystem space inhabited by the agents could end up being 4D or higher. Some agents may manifest visually as fractal textures, so that they have fractional dimension.

The audio waveforms are essentially 1D, but we are using 5.1 surround so there is a 3D spatiality to the sound as well. Also, we are considering using 2D cellular automata to generate audio patterns, so in this sense the audio is not 1D. Furthermore, repeated motifs form a sort of aural tapestry which is more like a textile or a landscape than like a string of beads.


Posted by David - January 3, 2007 - 13:44

In the Grind


I know what we did in mid-December...



EyesWeb Video Interpretation
Fig. 1: EyesWeb running our video interpretation patch

Webcam video input is clipped and shrunk to a 90x90 pixel square to decrease the processing necessary, with acceptable loss of image quality because we will only deal with broad movements at this point and the video input is not meant to be viewed, just interpreted. It's then greyscaled, crudely background-subtracted, and thresholded so that movement produces white pixels.
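The patch itself lives in EyesWeb, but for the record, the equivalent preprocessing in Python/numpy would look roughly like this (a sketch; the threshold and adaptation rate are invented):

import numpy as np

background = None   # running estimate of the empty scene

def preprocess(gray_frame, threshold=30, alpha=0.05):
    """Crude background subtraction + thresholding on a 90x90 grayscale frame."""
    global background
    frame = gray_frame.astype(np.float32)
    if background is None:
        background = frame.copy()
    diff = np.abs(frame - background)
    background = (1 - alpha) * background + alpha * frame   # slowly adapt
    return (diff > threshold).astype(np.uint8) * 255        # movement -> white pixels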

Subfield Sampling
Fig. 2: nine points of motion being tracked on webcam video.

We expanded on our small Eyesweb "Conductor" project to allow multiple points of motion sampling.
The webcam video input is cut into a nine-piece grid of subfields. Each subfield is sampled for the amount of motion (# of white pixels) and for the center (in X and Y coordinates) of the area of motion.

(I should mention that this is an extremely crude way to try to interpret multiple points of motion from video because, well, it's limited to nine fixed subfields which output just one center of motion each. Not only is this a very small amount of information, it is inaccurate, especially if motion crosses over multiple fields. Eyesweb doesn't /easily/ support really nice interpretation for our purposes. Sure, there is a function meant to track a human body, but it only tracks -one- human body, and if Eyesweb doesn't have clear video it will give some very strange data. We'll ultimately have to output simplified video from Eyesweb and write our own interpreter, I think. But for now, this makes things run.)

Three numbers (X position, Y position, and magnitude) are taken from each of the nine subfields and assembled into a 1x27 matrix (because Eyesweb 3 has very limited arrays) which is sent to a proxy program running on the same machine that, in turn, sends this information over the network to the machine running the ecosystem/renderer.
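Again, all of this happens inside the EyesWeb patch, but to make the numbers concrete, an equivalent in Python would be (names invented; the "center of motion" here is simply the centroid of the white pixels in each subfield):

import numpy as np

def sample_subfields(binary_frame):
    """Return the 27 values: (x, y, magnitude) for each of the nine 30x30 subfields."""
    values = []
    for row in range(3):
        for col in range(3):
            sub = binary_frame[row * 30:(row + 1) * 30, col * 30:(col + 1) * 30]
            ys, xs = np.nonzero(sub)
            magnitude = len(xs)                  # amount of motion: # of white pixels
            if magnitude:
                cx = int(xs.mean()) + col * 30   # centre of motion, in frame coords
                cy = int(ys.mean()) + row * 30
            else:
                cx = cy = 0
            values.extend([cx, cy, magnitude])
    return values   # length 27, like the 1x27 matrix EyesWeb sends out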

OpenGL Rendering
Fig. 3: a simple rendering of the nine points sent from EyesWeb to the renderer

The ecosystem/renderer receives data from the network and, at this point, renders it pretty much as-is, without a simulated ecosystem.

But the point is that it all works; the data flows!

A Modest Proposal for Simple Simulation



Fig. 1: Particle Decay
Points will perhaps be the simplest form of agent in the multi-agent ecosystem, and for now they are the basic form of input. Points may decay over time with regard to how interesting they are, and if they are given a magnitude based on the sampled sub-field's area of motion, then clearly points created with more motion will have more magnitude and more life.

Fig 2: Field of Interaction
A point-agent should interact within a certain field. This interaction can be a bounce, or it could combine the properties of the two point-agents into one more interesting agent. From the standpoint of reducing processing time it is useful to eliminate agents while creating interest.

Fig 3: Interactions
Again, to reduce processing, we may allow only energetic "hot" particles to detect collisions and produce interactions. So "hot" can interact with "hot" and "cold", but "cold" particles won't even bother running the collision calculations.

Fig 4: Up the evolutionary ladder...
From simple particles we can build up primitive shapes and behaviours: hot particles will collide to form "larger" particles, which may combine to form line segments, which may collide and collect to form polygons, which will act in more complex ways, and so on...
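Figs. 1 and 3 in code-ish form -- purely a sketch, with made-up decay rates and temperature thresholds:

HOT_THRESHOLD = 50     # invented cut-off between "hot" and "cold"
DECAY_PER_TICK = 1     # invented decay rate

def tick(points, collide):
    """Decay the point-agents, then let only the hot ones bother with collisions."""
    for p in points:
        p.magnitude -= DECAY_PER_TICK            # Fig 1: points fade over time
    points[:] = [p for p in points if p.magnitude > 0]

    hot = [p for p in points if p.magnitude >= HOT_THRESHOLD]
    for h in hot:                                # Fig 3: hot tests against everything,
        for other in points:                     # cold never runs the test itself
            if other is not h:
                collide(h, other)
    # (duplicate hot-hot pairs and grid pruning are left out of this sketch)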


Posted by David - December 7, 2006 - 18:29

Networking with EyesWeb, Staring into the Abyss




Andrew appears to discuss something important in this entirely spontaneous picture.


No, I'm not trying not to laugh - that's just what I look like when I'm thinking deeply.


Simple sample output of, what was it?, 1000 sets of 40 integers or so .. as colored lines! drawn between points of a grid.

We've been working in the VR room (where else?) with the additional small pressure of my upcoming MADT 401 review, and then the need to have something displayable in Banff.

Small EyesWeb Project
Our small EyesWeb project (Webcam tracking to MIDI output) was a bit lacking in presentation, I think, when we showed it at the U of C. The lighting and background conditions were entirely different; the VR room is painted black, has a grey floor, and a white screen; simple. The interactions lab has computers, lights, posters all over the place (not to mention a crowd of onlookers), so our previously clean-cut motion detection was filled with noise.
Clearly our environment in the VR room is insufficiently hostile to produce the motivation for an extremely adaptable background-subtraction system. This is good because it means our job (Cosmosis) is easier in the controlled environment, but it makes me want to invest effort in doing a better job of interpreting the video to extract motion. Time to re-read that paper!

And what's it mean to Cosmosis?
Which brings us to the topic of what exactly we're going to get EyesWeb to send to the various Cosmosis components -- in short, it looks to be arrays of integers.
How do you transform video into a meaningful array of integers? And how do you reduce distortion produced by image artifacts - like someone's shirt which happens to be the same tone as the carpet, or the back wall? Perhaps there is some way to make sure that whatever people walk in with is -always- different from the environment.

Decoding EyesWeb Output
To illuminate the undocumented gloom of EyesWeb network output Andrew used his lovely "Loom" program to examine the raw data spit out by Eyesweb. The opening shot of the connection is a bunch of mysterious junk (but did Uta find documentation on it?), with sets of 4 bytes following.

An int: 8,0,0,0 / x,x,x,x / 0,0,0,0
The 8 tells how many bytes will follow the initial set of four bytes, X is the integer (reading from left to right, mind you, so if the int is "2", the set is "2,0,0,0"... but in hex or something), and that last set of bytes is just zero. Maybe it's what always ends a packet of data, maybe it's used for something else. I'm pretty sure we didn't find out, but I'd have to check with Andrew.

An array with 3 ints, [x,y,z]: 12,0,0,0 / x,x,x,x / y,y,y,y / z,z,z,z
So there are 12 bytes following the initial 4, and no trailing set of 0's. I think.
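Assuming those byte patterns are little-endian 32-bit ints (which is what the left-to-right reading suggests, though we haven't confirmed it), Python's struct module decodes them in a couple of lines:

import struct

def decode_eyesweb(payload):
    """Decode one value: a 4-byte byte-count, then that many bytes of ints."""
    (nbytes,) = struct.unpack_from("<i", payload, 0)       # e.g. 8 or 12
    return struct.unpack_from("<%di" % (nbytes // 4), payload, 4)

# A single int then comes back as (x, 0) -- the mysterious trailing zeros --
# while the three-int array comes back as (x, y, z).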

Fascinating, 'eh?

Network Instability
Network programming is a pain. Or maybe I should say that getting it to work without messing up is a pain. Is there an issue with ACAD's network security? We don't think so, and don't really know. I'm not sure how positively the network people would respond if we told them about what we were doing, because institutions love to be network "security" fascists. Do we fully understand how socket networking operates -- in Windows? Not really. But we'll learn.

What's clear is that we don't trust EyesWeb all that much, so the plan is to have EyesWeb send data to a program (the "Repeater") running on its localhost, which in turn sends the data across the network to the (presently combined and bare-bones) Ecosystem/Display program.
It works with simulated EyesWeb output!
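In its simplest shape the Repeater is only a dozen lines -- a sketch assuming UDP on both legs, with invented port numbers and host name; the real one may well end up different:

import socket

LISTEN_PORT = 7000                        # where EyesWeb sends, on localhost (invented)
ECOSYSTEM = ("ecosystem-machine", 7001)   # the display/ecosystem host (invented)

def run_repeater():
    """Receive packets from EyesWeb locally and forward them across the network."""
    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind(("127.0.0.1", LISTEN_PORT))
    outbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _addr = inbound.recvfrom(65536)
        outbound.sendto(data, ECOSYSTEM)

if __name__ == "__main__":
    run_repeater()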

Short-term Goals

  • Get it working with actual EyesWeb output
  • Make the display more interesting - Andrew did well to change the colors from Ugly Green to the more interesting set you see in the above screenshot
  • Perhaps get a basic ecosystem running, on the level of "Life"?
  • Refine reading list, and divide it into two sections for scientific papers and art/philosophy


I think that covers it.


Posted by David - November 29, 2006 - 22:09
Here is the flowchart Andrew mentioned:

If I were to do it again (and I will) I would make it clear that processing would be taking place over a number of networked machines.

It is difficult to get OpenGL working with Python - on Windows. There is always another pile of errors and some kind of module missing. And documentation is poor to non-existent.
Speaking of poor documentation - try EyesWeb! It's actually kind of fun to puzzle out exactly what does what. Well, it's frustrating too, depending on my mood.

IR may not be necessary considering the silly things we can do with adaptive background subtraction (which makes it sound a lot more clever than the actual effect), but light levels should be changing quite a lot throughout the Cosmosis process, so IR is definitely desirable.
We'll work on it tomorrow.

I should like to get a simple data pipeline working by next week for the MADT 401 reviews; just any kind of input from EyesWeb to a Python program to a display (or sound?) will do. And from that skeleton, we'd simply build up the complexity!


Posted by Andrew - November 29, 2006 - 13:51

Ah, mustn't neglect the blog.

David and I visited the VR lab at ACAD together last week, where we found to our relief that my webcam is supported without the driver disc (which I had forgotten). In fact, since our tour of the facilities with Alan Dunning, the lab has been equipped with its own (face-tracking!) webcam, and also ample USB extension cords, which we will definitely be needing if we place the camera in front of the participants. David may wish to say more about this experience, but basically it was an important and somewhat painful first step where we familiarized ourselves with the equipment and tried to get each thing working independently -- with middling success. One thing we have yet to determine is whether it will be possible to use infrared illumination with the webcam -- we tried using the IR lights in the lab, but it seems that at least those frequencies are blocked by both cameras.

So far I've gotten Python OpenGL rendering working on a linux platform, and Python networking (server/client) between remote machines. Python is a fine programming system and I've no regrets about abandoning the initial Java plans in favour of it! Also, Chromium is written in Python, which should make our interactions with it that much easier. Chromium is a system for distributed rendering on multiple workstations, capable of targeting multiple displays or projectors. This site also refers to a SIGGRAPH paper which should be in our reading list; Figure 2 illustrates the configuration appropriate for our system.

We aim to have the basic system (I hope David will post the image of our system diagram from his MADT proposal), sans any sophisticated multi-agent ecosystem, functioning before we adjourn for Christmas break.



Posted by David - October 27, 2006 - 22:19

Poetry, Programming, and Links

To consider in terms of a particle mode of generative aesthetics:
From Lederman's The God Particle, pg 342:
And the Lord looked upon Her world, and She marveled at its beauty - for so much beauty there was that She wept. It was a world of one kind of particle and one force carried by one messenger who was, with divine simplicity, also the one particle.
And the Lord looked upon the world She had created and She saw that it was also boring. So She computed and She smiled and She caused Her Universe to expand and to cool. And lo, it became cool enough to activate Her tried and true agent, the Higgs field, which before the cooling could not bear the incredible heat of creation. And in the influence of the Higgs, the particles suckled energy from the field and absorbed this energy and grew massive. Each grew in its own way, but not all the same. Some grew incredibly massive, some only a little, and some not at all. And whereas before there was only one particle, now there were twelve, and whereas before the messenger and the particle were the same, now they were different, and whereas before there was only one force carrier and one force, now there were twelve carriers and four forces...


I've been working on a personal project to learn Python and programming in general; I've worked for hours and hours over the last three days and I think I understand the rush Andrew was talking about. You know, the agony and ecstasy of coding.
Here's a screenshot for the interested. If you really break it down, it's basically a cellular simulation.

I found a few links on/through Gamedev.net; Some of this stuff takes some really serious math:

Genetic Programming in C/C++ (Hans Kuhlmann / Mike Hollick)

Application of Genetic Programming to the "Snake Game" (Tobin Ehlis)

Neural Network FAQ (Warren S. Sarle)

Neural Netware (Andre' LaMothe)

And it seems we've got to pull some kind of "phidget" system out of our... hats. I don't think they could apply to our project. Andrew did suggest something about controlling lights or something, but how could a simple moving/changing light compare to the real lightshow going on in the screens? We'll see.

Also, we checked out the ACAD VR room and it looks like we're aiming to use it for the installation of the project. As a venue it's terribly isolated and no one who doesn't already go to ACAD would probably see it, but the hardware is amazing, if perhaps a little clunky. Of course our goal is to make the program scalable to all sorts of systems, from a personal laptop with headphones to a projector/tower/speakers system to the powerful setup in the VR room.
VR room first, of course, we've got a deadline. Somewhere.


Posted by Andrew - October 16, 2006 - 22:51

More about the Science (and the Art)...

If the Goal of Science can be summarised in a sentence, it might be the collating and refining of cognitive models of reality, formalised rigorously with careful technical language (preferably mathematical), and corroborated by experiment, for the purpose of increasing our understanding about the true nature of our World. Paul Feyerabend, an influential philosopher of science, in his seminal work Against Method (1975) argued that science is fundamentally anarchistic, and rejected the notion of any universal scientific method, although certain methodologies have passed in and out of vogue through history.

Science was not originally practised for the benefit of its fruits, in contrast to our present times, but rather for the very joy of it. One is tempted to speculate that what gives pleasure correlates with truth and usefulness. This is a personal philosophy I hold: that beauty leads us to truth. I take exception to the notion of an "ugly fact" -- a fact is only ugly if the beholder of the fact has previously committed themselves emotionally to an erroneous view.

Hence, the scientific content of our proposal, aside from the relatively banal technical issues (and technique is as much an Artist's problem as a Scientist's), is an enquiry into empirical intentionality, a concept which will be illuminated over the course of the project, but which will involve investigating human aesthetic response in the context of an interactive generative system. This is cognitive and behavioural science more than it is physical science or engineering, despite the prominent dependence of our system design on the latter.

The transferable value of these sorts of potential insights include at least two important contemporary areas:
  • mapping of mood and taste
  • improving human-machine interfaces
These are not independent threads since it is easy to imagine a system responding better because it knows the moods and tastes of the interactors, and system knowledge will likewise depend on efficacy of the interface, a sort of dynamic and autonomous customisation. This is a step towards an intelligent system which adapts to the user, and improves its interaction abilities with accumulated usage.


Posted by David - October 15, 2006 - 18:55

Whorls upon Worlds (in three sentences)


or: A reductionist summary of the artistic aims of this project

The themes that inspired us and the ideas that came out of subsequent discussions are quite numerous, so to even try to fit them into three sentences I'm going to start by breaking the project into three major artistic components:


  1. human action and reaction: interaction/observation is made identical within this system: the fisherman gazes into the sardine can and the can gazes back into the fisherman, changing both -- Heisenberg's uncertainty principle? + something about kinetic intuition

  2. generative art: how is it an art which is different than evolution/physics/geology (or any other natural generative process), aside from human construction -- possible theological implications? natural aesthetics?

  3. media output: the process of using a meta-brush (the abstract rules and laws of how the "brush" is to be used rather than direct "painting"); how much of the art-piece is from our parameters versus the generative system versus the participant? - "art as a process of interaction between the viewer and art-piece", not just a static art-object

  4. (then there is the matter of what is going on in the human subject's head, but this ruins my three-point format, so we'll leave that to them and hope to cover this section by the overflow into points 1 and 3)


I actually don't feel like making those into coherent sentences tonight, so I won't. The ideas are there.

To bring up the other three sentences and a point I keep coming back to in my thoughts: a lot is being made of the (stereotypical) qualities of natural science in relation/opposition to Art, but shouldn't the subject be computer science? I admit limited knowledge of the subject, but doesn't computer science have more akin to 'pure math', engineering, and art than to natural science? It's a very different beast from the empirical natural sciences that everyone talks about when the word "science" comes up; the two should not be equated.

And to bring up Andrew's thoughts: I don't think that he and I come from such different worlds. The art students who entered this program (mostly) have an interest and ability with computer science, and the compsci students all have an interest and ability with art (and maybe they must, by definition, as I said in the above paragraph), but we are asked to oppose and distinguish our specialities. Maybe I'm speaking in terms which are too strong; positing a duality between Art and Science is a necessary dialectic step in approaching the issues brought up by this program. Maybe we've just got to all talk about it more.


Posted by Andrew - October 14, 2006 - 23:41

"Where's the Science, Bub?"

We've been asked to prepare about three sentences, each, addressing "where's the science?" and "where's the art?". That exercise seems a bit contrived to me, in our case -- I really think David and I come at things more from the same side than from radically different perspectives. I imagine maybe Helen and Dave experience a similar reaction.

Attempting to address the "science side": We can only ask, "what does it mean, specifically, to most scientists, when we say `scientific'?".
  • Is it enough just to find something mensurable (quantifiable)?
  • Does it also need to be reproducible?
  • Need we have a motive for studying a particular measurable?
The total experience of a system+participants session is nondeterministic by nature and by design. However, certain statistical behaviours can be systematically elicited, at least frequently if not dependably -- we could quantify those qualities, and measure them reproducibly.

If we measured physical human gestures, as sensed by the camera(s), we would also know quite concretely what our numbers meant, especially if our model was "anatomical". If we measured agent coalition dynamics, it might be hard to ascribe meaning to the numbers, since agents partake of the unknown -- perhaps of the unknowable, if chaos is implicated ...

It is these very ambiguities that verge into artistic zones, since the participant who is selectively guiding the evolution depends very much on sensible feedback in order to feel engaged, which is a prerequisite for (if not a synonym of) intentionality. And, as we mention in our Proposal, we are trying to undertake a study of empirical intentionality.


Posted by Andrew - October 11, 2006 - 13:23

Generative Art

In the IEEE Computer Graphics and Applications (60th anniversary special issue), Gary Singh, in talking about the cover, quotes Philip Galanter describing generative art as
[referring] to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art.
I think this designation (among others) applies to our project.

Also, Mary Scott has reminded us that a local artist, Arlene Stamp, has been developing a generative art system in collaboration with Tobias Isenberg (from our own Interactions Lab), which you can play with (requires Java).


Posted by David - October 3, 2006 - 15:35

First post! And some G.E.B.



To start this off, let's have a quote from "Gödel, Escher, Bach" (henceforth, for my sanity, GEB):

[This book] approaches these questions by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves (or "I"'s or "souls", if you prefer - whatever it is that distinguishes animate from inanimate matter) to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols. It is these strange twisty patterns that the book spends so much time on, because they are little known, little appreciated, counterintuitive, and quite filled with mystery.


And to quote the header of a section on the facing page: "Meaningless Symbols Acquire Meaning Despite Themselves"

This idea is not quite the essence of our project, but I think it is essential to it.

I hope this web-log can be a record of not only what we're doing as we're doing it, but also our thoughts and ruminations, if you will, that come up as we make this thing happen. (And it'll certainly generate lots of great filler for our final paper!)

And here's that concept art of our project in action.