Cosmosis



Posted by Andrew - April 14, 2007 - 19:08

Morphing Colourmaps from Natural Source

Notice that the cat doesn't appear to be at all mindful of the encroachment of the spline. Isn't it fun to create documents? Almost as fun as implementing algorithms, which in turn has to defer to dreaming and scheming. Where do you suppose all this text will end up? Does it serve as mortar, or does it look like it was on the receiving end of one?


Posted by David - April 1, 2007 - 23:08

Lighten up, it's just IR!

The IR arrays

I read the 'schematics' of the IR arrays more closely, and it seems that a certain less-than sign is a typo and ought to read greater-than to match the textual description of the device. I took this as permission to greatly exceed what I feared were the limits of the IR arrays' capacity, and so I present some operational - perhaps not optimal - arrays hooked up to some extra domestic transformers I had lying around.

Leftmost image: The makeshift IR arrays hooked up to power supplies of alarmingly varied volt- and amp-age.

Center image: Here the IR arrays are seen through the IR-enabled webcam. The spotlight effect is quite pronounced when they irradiate a surface this close; it works much better to cast the IR on a faraway surface so the light is much more diffuse. Part of the problem is, of course, the webcam's stupid software adjusting itself to the relative brightness of the image rather than sticking to objective levels. (Many devices marketed to consumers do this, like when birthday candles flood the rest of an image with darkness on a home video. How annoying!)

Rightmost image: And just to show that I'm telling the truth, here's the same scene from a slightly different angle through a non-IR-enabled webcam. What's interesting is that you can just see where the IR arrays light up the carpet in front of them, but looking with my eyes I see nothing. Clearly the IR filter on that webcam is not completely effective!

I stress-tested them for a bit and none melted; their heat stabilized in a range from "somewhat warm" to "quite warm". Next step is sticking them in the VR lab to see how they work; perhaps it will reduce input noise (but realistically, only some reprogramming can do that, I think).

Fortune favours the who?

So at the Art + Science symposium thing at ACAD, along with Alan I talked rather too briefly about Cosmosis, my collaboration with Andrew generally, and ASTecs very generally. We pretty much needed about twice the time we had to talk to really get the point across, but the deed is done.

I blew up the important part of the event's poster here:

See where my name is above the David Hoffos? That's just great! Anyway, at least we did a good deed and plugged the ASTecs program.

Preparing for this event made quite clear in my mind our need to better document our work. Regardless of how good it actually turns out to be, people love hearing about hip new techno-art stuff like this, so we need overview shots of people interacting with it (we had only a couple!), a video thereof, shots of the code perhaps, and above all more flowcharts! So I'm going to be bringing my camera around from now on, let me tell you.

(Also I liked my line in the presentation above regarding Cosmosis as a "meta art object", a system we built in which a piece of art is created when the participant undergoes the act of viewing. Again, perhaps it makes it sound better than it is, but I shouldn't let humility cause me to forgo a good line.)



Posted by David - March 21, 2007 - 21:08

The Aesthetic of the "Default"

Upon reflection it appears that we have unquestioningly worked within a number of aesthetic assumptions. Even if we do not, ah, rise to the occasion and break new ground in generated aesthetics, I may as well comment on what we have(n't) done.

  • The black background:
    The default "canvas" of computer graphics is black (though sometimes a 50% gray). This is natural enough because blackness is the absence of light. If we have not commanded a pixel to activate, it does not. But what if it were otherwise? Why should we constrain our graphics generation to default assumptions?

    To compare, a painter's canvas is white, though historically the gesso on a canvas was not necessarily as blindingly white as it can be. Some artists will mix pigments with their gesso to give a different color to the 'field', and perhaps this allows them to approach their painting with an entirely different mood than that of the "blank white paper". The walls of art galleries are also painted white. Sure, it's a neutral color, but so are gray and black. White has a high albedo, and therefore makes for a lot of ambient light from relatively little illumination (though I would argue that most rooms are far over-illuminated). Compare this with the lack of ambient light in the black-painted VR lab, designed to maximize the relative brightness of the projected image.

    If we drew our agents on white instead of black, the default image in our space would essentially be a large, bright rectangle. I don't think this would lend itself to a "tactful" use of the space, but perhaps we could use a very dark color for the background:

    The light color is way too washed out. I think the dark purple is nice; it gives a night-time feeling. I shall have to try it on the big screen (a rough sketch of the color switch in code follows this list). And you can also see in these shots a new agent rendering function, which brings us to...

  • The "OpenGL look"
  • There is a particular "look" of OpenGL primitives in graphics demos that I really hope we're avoiding. Sheelagh did say something positive to that effect, but ... still, anyone really familiar with OpenGL can probably name the calls we used for each visualization, though Andrew's "Starhoods" may pass the test best as he -was- asked how they were done. Still, there is very much a fireworks show feeling to the whole thing, you know, bright points of light glittering and moving around. I'd like to get away from this "thousand points of light" look as much as we can. But it's so easy, and so .. pretty.
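For the record, here is roughly what the background switch from the first bullet amounts to in code - a minimal sketch assuming a PyOpenGL setup like ours, not project code, and the RGB values are placeholders rather than a final choice:

    # Sketch only: clear to a very dark purple instead of pure black.
    # The RGB values are placeholders, not a considered palette choice.
    from OpenGL.GL import glClearColor, glClear, GL_COLOR_BUFFER_BIT

    def clear_background(color=(0.06, 0.02, 0.10)):
        r, g, b = color
        glClearColor(r, g, b, 1.0)    # the 'field' color of our canvas
        glClear(GL_COLOR_BUFFER_BIT)  # wipe to it at the start of each frame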

And with the project deadline fast approaching I fear that we don't have nearly as much time as we'd like to split between implementing new features and polishing old ones. Suffice to say, it feels like the technical demands (read: sci/tech) are taking precedence over conceptual development (art). But then the first semester was easily dominated by conceptual matters, with a good deal of technical effort siphoned off onto the mini-projects.

Modulating Input

The signal-to-noise ratio with the webcam input is such that I don't think the expression of the interactor's movement matters much at all - yes, technical demands over conceptual. Still, we've had many suggestions about how participants can be given a context for their interaction with the system.

  • Lines could be put on the floor with tape designating where a person should and shouldn't move. Such authoritarianism may put people into a restrictive mindset, though, or maybe they could use the 'bounds' as an area where permission is given to allow expression. I smell a user-study.
  • It's been said that a dancer would think of movement in entirely different ways than geeks like us, and to see how a dancer would interface with the system could tell us something about how it should see -- unfortunately, with regard to the previously stated problem, I don't think the system would make a distinction between a dancer and one of us jumping around like a fool.
  • We could use props, like IR-reflective/emitting balls or sticks. This evokes the "magic phallus" idea I recall bringing up in our original proposal presentation at Banff. And it'd probably be a bad idea to throw balls around in the VR lab; still, interesting.

And there's no damn power supplies for the IR lights

Looks like it's a trip to Radioshack and/or Home Despot. Enough said.

Performance?

The mob still wants a performance out of us because watching us talk and get into our project is apparently irresistibly amusing. I'm not sure what a good time would be to do this, though. And in a way, every time we present the project it is like a performance, and perhaps we could play ourselves up in those roles.



Posted by Andrew - March 19, 2007 - 0:00


Posted by David - March 6, 2007 - 23:16

Groovy Blob Detection

or: "Come rain or snow...?"

IR Lighting:

The IR LED arrays arrived in the mail. They're really cool-looking; it's just too bad the power supplies aren't here, so we can't see 'em in action.

Blob Detection

Y'know, image analysis can be a lot of fun. I won't go into great detail, but once I figured out how to work a particular recursive function to propagate a Blob's identity through a group of attached pixels, it all came together.

See - for this visualization, separate groups of pixels from my scowling mug are assigned random colors:

There's a threshold that disregards cells without a certain number of neighbours, to keep the number of blobs sane. The image below shows how the raw input (1) is interpreted as distinct blobs (2) and some simple information about these blobs is displayed (3), where the radius of the circle refers to the total number of pixels, the vertical line to the min and max Y range, and the horizontal line to the min and max X range.

The interaction doesn't really "feel" like anything right now, unfortunately. More information must be extracted from the blobs because, as-is, these are ambiguous blobs, not meaningful shapes. For example, I'd like to check the ratio of the X and Y bounds as well as the percentage of area which is 'active' and use those to decide whether to sub-divide the blob for more accuracy and create a, ah, "quad-tree".
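For the curious, here's roughly the shape of the recursive labelling idea, reconstructed from memory rather than pasted from our actual code; the names and the minimum-pixel cutoff are invented for illustration:

    # A rough sketch of the recursive blob-labelling idea, not our actual code.
    # 'grid' is a 2D list of 0/1 pixels; each connected group of 1s becomes a blob.
    import sys
    sys.setrecursionlimit(100000)    # naive recursion gets deep on large blobs

    def flood(grid, labels, x, y, blob_id, pixels):
        if y < 0 or x < 0 or y >= len(grid) or x >= len(grid[0]):
            return
        if grid[y][x] == 0 or labels[y][x] != 0:
            return                                  # inactive, or already claimed
        labels[y][x] = blob_id                      # propagate this blob's identity
        pixels.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            flood(grid, labels, x + dx, y + dy, blob_id, pixels)

    def find_blobs(grid, min_pixels=8):             # min_pixels: arbitrary noise cutoff
        labels = [[0] * len(grid[0]) for _ in grid]
        blobs, blob_id = [], 1
        for y in range(len(grid)):
            for x in range(len(grid[0])):
                if grid[y][x] and labels[y][x] == 0:
                    pixels = []
                    flood(grid, labels, x, y, blob_id, pixels)
                    if len(pixels) >= min_pixels:   # drop tiny noise blobs
                        xs, ys = zip(*pixels)
                        blobs.append({'size': len(pixels),
                                      'x_range': (min(xs), max(xs)),
                                      'y_range': (min(ys), max(ys))})
                    blob_id += 1
        return blobs

A real version would want to avoid deep recursion (or use an explicit stack), but that's the gist: the pixel counts and X/Y ranges here are the same values drawn as the circles and lines above.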

I think some of the instructors urged us to scan some of our "analogue" notes, and I now recall Andrew saying I should do the same, so I finally did. Click on the helpful button below to see the full page of unreadable handwriting and unclear diagrams I scrawled today.

I have some very early notes around here somewhere as well. It may be fun to scan those to see how much our concept has changed. (And oh, has it ever - as observed last semester by instructors, the full scope of our project is incredibly ambitious. To get everything polished would take a long, long time, so necessity dictates that the scope of our project must be rather more limited than we may have hoped & dreamed.)
Where n = number of features and t = time: t = n^2.

A quick list of things to implement:

  • Sub/Mega Agents (tree structures of agents-within-agents)
  • Proper collision grid (per cell, not per agent!) + rendering of agent interactions, ie: bolt2a()
  • Render-lists to reduce unnecessary GL calls + gain framerate (a rough sketch of the display-list idea follows this list)
  • More variety w/ particles + fun particle calls for visualizing input & agent events
  • Get more information from Input!: velocities, trees of sub/mega-agents
  • which requires: Better Input Parsing (ie: 8 ints, first? of which is for setting a tree structure?)
  • make a Video-In "lite" w/o PyGame visualization & update it to use new Numpy rather than outdated Numeric (so it works on linux)
  • Perhaps eliminate need for EyesWeb entirely-- do image processing w/ video direct from WebCam, so, again, it works on linux
  • Make GravWell(s) more interesting - link to agents? as well as input?
  • Sound render-lists?

And so on.
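On the render-list item above: what I have in mind is OpenGL display lists, roughly like this sketch (PyOpenGL, with an invented circle shape; not project code):

    # Display-list sketch for the 'render-lists' item: compile a shape once,
    # then replay it cheaply every frame with a single call.
    import math
    from OpenGL.GL import (glGenLists, glNewList, glEndList, glCallList,
                           glBegin, glEnd, glVertex2f, GL_COMPILE, GL_LINE_LOOP)

    def compile_circle(segments=32, radius=1.0):
        handle = glGenLists(1)               # reserve one display-list id
        glNewList(handle, GL_COMPILE)        # record GL calls instead of drawing them
        glBegin(GL_LINE_LOOP)
        for i in range(segments):
            a = 2.0 * math.pi * i / segments
            glVertex2f(radius * math.cos(a), radius * math.sin(a))
        glEnd()
        glEndList()
        return handle

    # Per frame, per agent: glCallList(handle) instead of re-issuing every vertex.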



Posted by David - February 15, 2007 - 17:05

To err; to input.

Input from Eyesweb:

Andrew and I met and got the EyesWeb-to-Python input working.

And it ALL works! We could get full color video input if we really wanted - but 8 bit will do just fine. We were just interpreting the data poorly by basing our video input (fewer, large packets) on our previous data input protocol (lots and lots of tiny packets). So we'd stick -all- the data into a big list and interpret it as we could, which worked for that, but, as written about a couple entries ago, the video images we were getting were enormously screwy.

What we're doing now is checking whether the packet is the correct size (6340 bytes); if it isn't, we ignore it - the first six packets EyesWeb sends are always wrong, and there are intermittent glitches as well. If the packet is 6340 bytes, read it in! As mentioned before, Pygame has some display functions that take Python Numeric arrays, so I'm using Pygame rendering to 'debug' input.
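Just so I remember the logic, it boils down to something like this; I'm writing it as a UDP socket purely for illustration, and the port number is a placeholder:

    # Stripped-down sketch of the receive loop (socket type and port are placeholders).
    # Only packets of exactly FRAME_BYTES are treated as video frames; everything
    # else (the first few garbage packets, the intermittent glitches) is ignored.
    import socket

    FRAME_BYTES = 6340        # size of one frame packet from our EyesWeb patch

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', 7000))     # placeholder port

    def next_frame():
        while True:
            data, _addr = sock.recvfrom(65536)
            if len(data) == FRAME_BYTES:      # wrong-sized packets get dropped
                return data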

(A sidenote: Pygame only renders 8 bit images with indexed color, so we made a quick-and-dirty grayscale palette by iterating through 256 positions with 256 levels. Easy! We're receiving 8 bit images from EyesWeb that, strictly speaking, only contain bit information, white-or-black, so for the purpose of blob-analysis it may be fun to change the image array to boolean, True/False. Or whatever.) - Next step: From arrays to agents!
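To make the palette sidenote above concrete, it looks roughly like this - a sketch with a placeholder surface size, written with numpy here even though we're still on Numeric at the moment:

    # Quick-and-dirty grayscale palette for an 8-bit Pygame surface, plus the
    # boolean conversion idea. The surface size is a placeholder.
    import pygame
    import numpy

    pygame.init()
    surface = pygame.Surface((80, 60), depth=8)
    surface.set_palette([(i, i, i) for i in range(256)])  # index i -> gray level i

    pixels = pygame.surfarray.array2d(surface)  # 2D array of palette indices
    active = pixels > 0                         # True wherever a pixel is 'on'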

Go to the light!

I started poking around with one of the webcams, one thing led to another, and I ended up removing the IR filter and picking up IR light!!!

  1. The silly ball-shaped camera is a-cloven in twain!
  2. Removed the lens casing thingy - turns out the good stuff is actually in the lens (meta) casing mounting bit, circled here
  3. Lens sub-casing removed from meta-casing. I wasted a good deal of time poking around with this.
  4. I delicately smashed the IR filter in the lens-meta-casing mounting and got little pieces of IR-filter glass all over my desk.

The above images are vaguely cannibalistic (or is it incestuous?) because I took them using the other webcam. Best not to think too much about it.

Success!

For lack of another IR source, this is my TV remote flashing into the now-modified webcam. The image is strange because the TV remote is flashing faster than the camera's framerate. Once I get some film to make a visible light filter, I'll get some really cool pictures.

For the MADT Festival:
  1. working motion input (I hope our IR LED arrays arrive soon!)
  2. sound, any sound
  3. more interesting agents


Posted by David - February 8, 2007 - 0:23

Shopping Spree!

Grant me the grace of infrared video

I applied for a grant for our project. Whether or not we get that fat wad of cash, we're going to need some hardware for this project and we're going to need it soon. That which separates Cosmosis from a glorified screensaver is interactivity (through vision hardware of one sort or another), so our goal is to get input working.

I'll quote my own grant application:

"...ACAD's VR Lab has a good setup of computers, projectors, and a sound system. We have mounted a webcam over the screen at the longest string of USB extension cables that the signal can support. When we attempt to run the system with the ceiling lights off (so that the projected images can be seen clearly) the webcam cannot pick up enough light to make any kind of image. Minimal lighting directed away from the screen but kind-of [did I really write "kind-of" in a grant application!?] on the viewers works poorly because only highlighted edges of people can be seen, and the light from a directed lamp is blindingly bright in a dark space. ...Also, software image enhancement[/interpretation] of the very dark images creates far too much video noise to be of use.

We need to illuminate the viewers so that their motion can be sampled, but we need to do it without visible light."

The solution is light that cannot be seen, infra-red!

The sensors of digital cameras of all types, including webcams, actually can pick up IR, but IR is unwanted for most applications, which aim to reproduce images in visible light. So webcams and digital cameras are built with IR filters to block IR light. For our purposes we need to remove the IR filter and replace it with a filter that passes IR and blocks visible light.

Webcam modification

A good site on modifying webcams for infrared use is the aptly-titled 'How to make a webcam work in infra red'. Choosing from the list of working webcams on the above page, I ordered two "Logitech quickcam messenger" cameras from Memory Express. They were especially cheap and I wouldn't want to saw apart an expensive webcam; too much pressure.

Reading IR light is only half the story - the space needs to be illuminated with IR light for anything to be "seen".

IR Illumination: dark light?

The market for IR illumination is geared toward security, outdoors-adventurism, and a few scientific applications. People who have something important enough to protect with infrared security cameras seem to be rich enough to pay outrageous prices for pre-built LED arrays with nice mountings. We are students; artists; scientists! And can afford no such luxury.

IR LEDs can be bought per-piece quite cheaply, but they must be soldered together with resistors and other little bits of electronic hardware that are lost on me. I'm no electrical engineer, and we've got quite enough tangential subject-areas on this project to last years, so I found some small pre-built yet relatively inexpensive IR LED arrays here; I hope five of them will produce sufficient illumination for our purposes. If you look closely at the picture you may note that the array's power cable ends in bare wires. After some freaking out and then a brief online chat with a friend who has built electronics, I found some appropriate AC/DC power supplies and jacks from his favorite parts-supplier that will work quite nicely with the IR LED arrays.

Everything should come in over the next couple of weeks, and any other sundry cables, hubs, connectors, and mountings we can surely purchase locally.

Render me this

There are two new forms above which I've called "circles" (poetic, no?) and "urchin". Maybe the first should be "crystal spheres"; that'd be nice. The complex lines produce a very different aesthetic look than the cloudy-blobs do, and we shall want to combine all of these rendering functions with others, dynamically, to produce yet more novel effects. Especially pertinent is re-structuring the program to allow for Andrew's "megagents". I do worry about looking like an OpenGL graphics demo, though. And there's only so much we can do, I guess, and the appeal of this project shall lie in the behaviour and interaction, not the graphics themselves. Enough said.

Grid Collision Detection

While discussing the problem of collision detection in Banff we threw a lot of seemingly crazy ideas back and forth, and for a time at least dismissed most and settled for filtering the set of agents which would attempt collisions at all. Then, over our discussion-over-tea two weeks ago, Andrew pointed out that Kaye Mason had used a collision method that we had dismissed as too ridiculous to use.

From Kaye Mason's paper, Negotiating Gestalt: Artistic Expression by Coalition Formation Between Agents:

"When an agent is placed or moves, it registers with the grid. Each grid cell keeps a list of registered agents in its territory. An agent can query its own cell or neighbouring cells for a list of registered agents."

As Sheelagh pointed out during our presentation, this grid collision method keeps the size of the collision detection process more-or-less uniform over a space/set filled with any number of agents. A conceptual illustration of this process appears below: the viewport is divided into a 20x20 matrix of cells with which agents register. We then consider agents which share a cell to have collided - the "collision radius" is the (adjustable) cell size. This actually works, contrary to our expectations, because it gives the impression of physical simulation but doesn't require the processing expense of performing a complex and more "realistic" collision detection.

It still needs working-on and optimization, of course; would you believe that we're iterating through the agent list rather than the grid? Madness! And surely it can act as more than just a grid (to address Paul's concern). David O. suggested we use a quad tree, which is roughly equivalent to an adjustable grid, but is much more clever, and kindly sent us a lengthy explanation. We shall see how far we need to take this for our application. (I'll say it again: this project has so many applicable areas that we could work for a long time on just collision detection, just agent rendering, or just video input. As is, we've got to jump from place to place to try to put together a coherent and operable whole that works well enough.)
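For future reference, the bare bones of the cell-registration idea; the names and cell size are invented, and this version iterates over occupied cells rather than the whole agent list (i.e., the fix I'm grumbling about above):

    # Bare-bones sketch of grid registration (invented names, not project code).
    # Agents sharing a cell are treated as colliding; the cell size is the
    # adjustable 'collision radius'.
    from collections import defaultdict

    CELL = 1.0 / 20.0    # placeholder: a 20x20 grid over a unit-sized viewport

    def build_grid(agents):
        grid = defaultdict(list)
        for agent in agents:
            key = (int(agent.x // CELL), int(agent.y // CELL))
            grid[key].append(agent)           # register the agent with its cell
        return grid

    def collisions(grid):
        # Iterate over occupied cells only, pairing up whatever shares a cell.
        for cell_agents in grid.values():
            for i in range(len(cell_agents)):
                for j in range(i + 1, len(cell_agents)):
                    yield cell_agents[i], cell_agents[j]

(agent.x and agent.y stand in for whatever position attributes the agents actually carry; querying neighbouring cells, as in Kaye Mason's description, would just mean checking the eight keys around 'key' as well.)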

Image Analysis

It isn't working yet. It's hard work, especially dealing with the large image-data arrays efficiently (and getting the stupid Numeric module to work properly with everything). But we're going to keep working on it.

I show this next image because I labored far too long to code a random "video" generator that would give ever-changing shapes for interpretation by our image-analyzer in lieu of input from EyesWeb (curse its inscrutable network-transmission format!). It works with a motion reminiscent of Cosmosis, actually. The raw values of the pixels from the image surface can be seen displayed in the console in the background there.
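The generator itself is nothing fancy - something along these lines, with made-up sizes and a few drifting circles standing in for real motion:

    # Sketch of a stand-in 'video' source: a few blobs wander around a small
    # black/white frame so the analyzer has something to chew on. Sizes made up.
    import random

    W, H = 80, 60
    blobs = [[random.uniform(0, W), random.uniform(0, H)] for _ in range(4)]

    def next_test_frame(radius=5):
        frame = [[0] * W for _ in range(H)]
        for b in blobs:
            b[0] = (b[0] + random.uniform(-2, 2)) % W    # drift each blob a little
            b[1] = (b[1] + random.uniform(-2, 2)) % H
            for y in range(H):
                for x in range(W):
                    if (x - b[0]) ** 2 + (y - b[1]) ** 2 < radius ** 2:
                        frame[y][x] = 1                   # 'active' pixel
        return frame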

"You! Yes, you. I shall see you producing useful information to input to the virtual ecosystem. Soon."



Posted by Andrew - January 24, 2007 - 21:18

cave!

I have not yet addressed aesthetics here, and although David's posts are informed by discussions we've had, they are not particularly representative of my views. I'm reluctant to write about aesthetics since I've never studied them formally ... however, I have very strong feelings about the power of aesthetic sensibility to elevate the soul (as it were), and can stay silent no longer.

Aesthetic response has precious little to do with the acquisition of knowledge. Reading critical analyses of artworks may be interesting, but the knowledge can not induce the experience, it can only reflect on it, and generally quite ineffectually since it does so through expository language, and the most acutely aesthetic art forms are not discursive. Giving them a narrative context, even if it was actually the artist's own, does not even in the best case add much to the aesthetic experience, and as often damages it.

It is true that the seeker of knowledge by keeping an attentive mind and fresh impressions may be more receptive to new or deeper transports, but this is an indirect (and far from necessary) tool. Let us consider three of the most immortal and universally admired artists in Western history:
  • Music: Beethoven ("Music is the one incorporeal entrance into the higher world ... which comprehends mankind but which mankind cannot comprehend.")
  • Poetry: Emily Brontë (or Dickinson as you like!) ("Stern Reason is to judgement come / Arrayed in all her forms of gloom: / Wilt thou, my advocate, be dumb?")
  • Painting: Constable ("Painting is but another word for feeling.")
None of these were highly educated or cultured, although their irresistible greatness has finally won them perpetual recognition in such circles. If anything, there is an inverse correlation between creativity and acquired knowledge. (Obviously I'm not talking about experience and technical excellence, though -- these build character and hone expressivity, and are inevitable with predilection and practise.) And indeed, most artists who achieved greatness had scant patience for intellectualisations. Some outright scorn to write about their art; others do so in a way which appears to be a mockery of our critical expectations.

I definitely maintain that modern scientific attitudes, and art-critical ones, are a threat to aesthetic sensibility and anyone who values art deeply should be wary of knowledge-mongering and polemics. Not everything is enhanced by rationalisation, great art being a prominent case in point.


Posted by David - January 24, 2007 - 11:01
A few more quick thoughts on aesthetics

Andrew and I met over tea (which, I daresay, was excellent) and discussed a great many things last night. A common theme in our conversation was a question of aesthetics, perhaps starting in part from what I wrote yesterday about the known and unknown, and how I said I changed my mind, a bit, after only a day and would like to reword some of what I said.

To start: I quite like the imagery of following the "horizon of the unknown", and that it is the possibilities or potential of the unknown, based on the known, which makes something interesting. We talked about how so many systems work like this -- human vision, for example, is based on actually seeing only a very few details while the brain builds an impression of the scene that one believes is detailed but is actually largely interpolation. Just so with the computer graphics method I believe Jeff Boyd mentioned yesterday - what's it called, adaptive polygon something? - in which the world closer to the camera gains more detail while areas that aren't seen, or aren't seen well, are rendered with little detail. This gives the impression that the whole world is built in great detail, because someone can only ever look in one place at once and their mind builds a detailed impression of the world.

And just so in literature! An author uses language to construct a world by giving just the right touches of description, then it comes alive as a rich and detailed impression of a world in the mind of the reader.

"Good" art perhaps gives the right touches of description (or 'information', 'the known') so that the viewer actively wonders what lies "beyond the horizon of the known". It is not about keeping things hidden or withholding information but rather about telling in a manner that reveals beyond what is told. To do this well, perhaps, is good art.

I can only hope that our project can be so suggestive beyond itself.



Posted by David - January 22, 2007 - 19:17

Looks can be deceiving!
... and glRotate means that I don't need trigonometry


Video Input:

Frankly, getting video from EyesWeb is a pain in the ass due to EW's weird formatting. Andrew's lovely "Loom" program allowed us to examine the raw data output and attempt to figure out how to format the mess back into an image. Here is a link to an image page he made with some of these images, which we showed at our first presentation.

At the bottom of that page one can see that we can get undistorted image data if the video from EyesWeb is black and white. That's not grayscale: black or white. But this is quite good enough for our purposes because we just want a quick and dirty method of finding "active space".

I won't get too much into my frustration with attempting to extract a renderable array of information from the Eyesweb output. It was a bad week to quit drinking soda/pop.



There's something in those pictures; more noise would appear when I waved my hand around faster in front of the webcam. I rendered the images by stuffing 8bit per pixel EW output into Numeric Python arrays which, in turn, can be rendered more or less directly using Pygame's surfarray module. As I said, we're supposed to be getting black/white images from EW, but the above images are drawn in some pretty awful colors. The data is getting mixed up and Pygame.surfarray's 8bit surface expects a certain type of compressed color information, not B/W.
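For reference, the rendering path at this point was basically the following - a reconstruction, not the actual code. With no sensible palette set on the 8-bit display surface, the indexed colors come out scrambled, which is exactly the awfulness described above.

    # Reconstruction of the debug-rendering path: raw 8-bit EyesWeb bytes into an
    # array, blitted with pygame.surfarray. Frame size is a placeholder; we were
    # using the old Numeric module at the time, numpy here for readability.
    import pygame
    import numpy

    W, H = 80, 60                                    # placeholder frame size
    screen = pygame.display.set_mode((W, H), 0, 8)   # 8-bit, indexed-color surface

    def show(frame_bytes):
        arr = numpy.frombuffer(frame_bytes, dtype=numpy.uint8)[:W * H]
        arr = arr.reshape((H, W)).transpose()        # surfarray wants (width, height)
        pygame.surfarray.blit_array(screen, arr)
        pygame.display.flip()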

Then Andrew gave (tacit) permission to work on something else for a while, thank god.

Agent Rendering Methods:



The first two sections of the diagram above show a couple approaches to multi-agent renderings to make interactions/relationships visible. They're pretty straightforward.

The third section shows what came to me yesterday after freaking out about the possibility of having to use SIN and COS calls, and in hindsight it's obvious. Instead of calculating each point of a complex shape at angle X, just draw the complex shape then rotate it by angle X. In this way, we-the-coders can precalculate polygon-groups or make (kinda) simple polygon-drawing-algorithms then fit them to the coordinates of the agents calling the render functions. It's just like how games/applications don't calculate the angle for every line when drawing a 3d model -- they transform/rotate the coordinates, draw the model, then reload the identity aka default settings. It's hardly any work at all!
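To pin the idea down, here's a sketch of the "rotate the shape, not the math" approach in PyOpenGL; the shape and the agent.x/agent.y/agent.angle attribute names are placeholders of my own, not our actual code:

    # Sketch: draw a pre-built shape at the origin and let GL move/rotate it to
    # the agent's position and angle, then restore the matrix. No sin/cos calls.
    from OpenGL.GL import (glPushMatrix, glPopMatrix, glTranslatef, glRotatef,
                           glBegin, glEnd, glVertex2f, GL_LINE_LOOP)

    def draw_spike():                       # some fixed 'complex shape' at the origin
        glBegin(GL_LINE_LOOP)
        glVertex2f(0.0, 0.0)
        glVertex2f(0.02, 0.1)
        glVertex2f(-0.02, 0.1)
        glEnd()

    def render_agent(agent):
        glPushMatrix()                      # save the current transform
        glTranslatef(agent.x, agent.y, 0.0)
        glRotatef(agent.angle, 0.0, 0.0, 1.0)   # degrees, about the z axis
        draw_spike()
        glPopMatrix()                       # back to the identity/default state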

This calls to mind the way our visual rendering approach appears to be evolving: there will be a number of rendering functions that take one, two, or a whole list of agents as input, and mega/agents will know which rendering function(s) to call based on their particular "species" or what-have-you. Visual complexity will arise from the unique combinations and settings of these renderings.
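Concretely, I imagine the dispatch looking something like this; every name here is an invented placeholder, not an actual species or function of ours:

    # Invented names; just to pin down the 'species picks its render functions' idea.
    def render_cloud(agents): pass       # placeholder rendering functions
    def render_circles(agents): pass
    def render_urchin(agents): pass

    RENDERERS = {
        'cloud':  [render_cloud],
        'urchin': [render_urchin, render_circles],  # combinations give new looks
    }

    def render_all(agents_by_species):
        for species, agents in agents_by_species.items():
            for render_fn in RENDERERS.get(species, []):
                render_fn(agents)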

(I don't think it'd be reasonable for agents to actually modify the code of their openGL rendering functions -- GL can be finicky. Agents' choices will have to take place one step above the building blocks we provide.)

And a bit on Aesthetics
A quote from some book meta-quoted from Rudy Rucker's (we)blog (He's a math prof. at San Jose State who's into cellular automata):


"Here’s a quote along these lines from David Kushner, Masters of Doom, (Random House, 2003) p. 295. Kushner is describing the programmer John Carmack, who developed most of the code for the first-person-shooter computer games Doom and Quake.

“...after so many years immersed in the science of graphics, he [John Carmack] had achieved an
almost Zen-like understanding of his craft. In the shower, he would see a few bars of light on the wall and think, Hey, that’s a diffuse specular reflection from the overhead lights reflected off the faucet. Rather than detaching him from the natural world, this viewpoint only made him appreciate it more deeply. ‘These are things I find enchanting and miraculous,’ he said...”
"



I'm of a mind that believes that having more knowledge of a subject makes it more interesting and grants a greater and more meaningful depth of experience than a subject I know less about, or that there is less to know about. If knowing about the subject "ruins" it, then the subject may well have been the aesthetic equivalent of a "one-liner" anyway. So if things get more interesting with more context, history, background and content, then because the universe is effectively infinite relative to the scale of human knowledge, so too is the possibility for beauty (if you'll pardon the term). The unknown is so interesting, in this case, because it's the ever-receding horizon of the known. The best art never stops, maybe, and maybe I just can't see that it ends right past the horizon.

To bring this back around, the Cosmosis project is, to me, a (comparatively small and dull) reflection by us-the-creators on the natural systems that complexity arises from in the universe. And these are the most amazing things of all, I think.

The next day: On further consideration, I think I've changed my mind. I consider aesthetics differently than I did yesterday, and will probably change my mind again tomorrow -- hopefully this is a constructive and progressive process.

Next time: Blob analysis and Infrared!

