Looks can be deceiving!
... and glRotate means that I don't need trigonometry
Frankly, getting video from EyesWeb is a pain in the ass due to EW's weird formatting. Andrew's lovely "Loom" program allowed us to examine the raw data output and attempt to figure out how to format the mess back into an image. Here is a link to an image page he made with some of these images, which we showed at our first presentation.
At the bottom of that page you can see that we get undistorted image data if the video from EyesWeb is black and white. That's not grayscale: black or white. But this is quite good enough for our purposes, because we just want a quick and dirty method of finding "active space".
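For what it's worth, a strictly black-or-white frame makes "active space" almost trivial to find. Here's a minimal sketch, assuming the frame arrives as a flat buffer of 0/255 bytes; the function name and layout are my own guesses, not anything EyesWeb specifies:

```python
import numpy as np

def active_bounds(frame, width, height):
    """Find the bounding box of 'active' (white) pixels in a
    black/white frame delivered as a flat buffer of 0/255 bytes.
    Returns None if nothing in the frame is active."""
    img = np.frombuffer(bytes(frame), dtype=np.uint8).reshape(height, width)
    ys, xs = np.nonzero(img)  # coordinates of the white pixels
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# a tiny 4x4 test frame with a single white pixel at (x=2, y=1)
frame = bytearray(16)
frame[1 * 4 + 2] = 255
print(active_bounds(frame, 4, 4))  # -> (2, 1, 2, 1)
```

A bounding box is the crudest possible notion of "active space", but crude is exactly what we're after here.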
I won't get too much into my frustration with attempting to extract a renderable array of information from the Eyesweb output. It was a bad week to quit drinking soda/pop.
There's something in those pictures; more noise would appear when I waved my hand around faster in front of the webcam. I rendered the images by stuffing 8-bit-per-pixel EW output into Numeric Python arrays, which in turn can be rendered more or less directly using Pygame's surfarray module. As I said, we're supposed to be getting black/white images from EW, but the above images are drawn in some pretty awful colors. The data is getting mixed up: an 8-bit Pygame surface is palettized, so surfarray treats each byte as an index into a color palette rather than as a plain B/W intensity.
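One way around the palette problem is to sidestep 8-bit surfaces entirely: expand each byte into three identical RGB channels and let surfarray build a 24-bit surface, where 0 really is black and 255 really is white. A rough sketch (numpy standing in here for Numeric, which has the same array semantics; the function name is mine):

```python
import numpy as np
import pygame

def render_bw(frame_bytes, width, height):
    """Turn a flat buffer of 8-bit pixels into a Pygame surface.
    Expanding to three RGB channels avoids the 8-bit palette issue:
    make_surface on a (width, height, 3) array yields a 24-bit
    surface, so 0 stays black and 255 stays white."""
    img = np.frombuffer(bytes(frame_bytes), dtype=np.uint8).reshape(height, width)
    rgb = np.repeat(img.T[:, :, np.newaxis], 3, axis=2)  # (width, height, 3)
    return pygame.surfarray.make_surface(rgb)
```

The transpose is there because surfarray indexes arrays as (x, y) while the raw frame is laid out row-by-row.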
Then Andrew gave (tacit) permission to work on something else for a while, thank god.
Agent Rendering Methods:
The first two sections of the diagram above show a couple of approaches to multi-agent rendering that make interactions/relationships visible. They're pretty straightforward.
The third section shows what came to me yesterday after freaking out about the possibility of having to use SIN and COS calls, and in hindsight it's obvious. Instead of calculating each point of a complex shape at angle X, just draw the complex shape and then rotate it by angle X. This way, we-the-coders can precalculate polygon-groups or write (kinda) simple polygon-drawing algorithms, then fit them to the coordinates of the agents calling the render functions. It's just like how games/applications don't calculate the angle for every line when drawing a 3D model -- they transform/rotate the coordinates, draw the model, then reload the identity (aka the default settings). It's hardly any work at all!
This calls to mind the way our visual rendering approach appears to be evolving: There will be a number of rendering functions that take one, two, or a whole list of agents as input and Meg/agents will know what rendering function(s) to call based on their particular "species" or what-have-you. Visual complexity will arise from the unique combinations and settings of these renderings.
(I don't think it'd be reasonable for agents to actually modify the code of their openGL rendering functions -- GL can be finicky. Agents' choices will have to take place one step above the building blocks we provide.)
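As a toy sketch of that dispatch idea (every name here is hypothetical, not from the real Cosmosis code): each species maps to a list of rendering functions, and "visual complexity from unique combinations" is just stacking entries in that list.

```python
# Hypothetical sketch: renderers return strings here instead of
# drawing, so the dispatch logic is visible on its own.
def render_dot(agents):
    return "dot(%d)" % len(agents)

def render_web(agents):
    return "web(%d)" % len(agents)

RENDERERS = {
    "drifter": [render_dot],
    "swarmer": [render_dot, render_web],  # species can stack renderings
}

def render_agents(species, agents):
    """Look up the species' renderers and call each in turn."""
    return [fn(agents) for fn in RENDERERS[species]]

print(render_agents("swarmer", ["a", "b", "c"]))  # -> ['dot(3)', 'web(3)']
```

Agents choose *which* building blocks to combine, one step above the GL code itself.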
And a bit on Aesthetics
A quote from some book meta-quoted from Rudy Rucker's (we)blog (He's a math prof. at San Jose State who's into cellular automata):
"Here’s a quote along these lines from David Kushner, Masters of Doom, (Random House, 2003) p. 295. Kushner is describing the programmer John Carmack, who developed most of the code for the first-person-shooter computer games Doom and Quake.
“...after so many years immersed in the science of graphics, he [John Carmack] had achieved an
almost Zen-like understanding of his craft. In the shower, he would see a few bars of light on the wall and think, Hey, that’s a diffuse specular reflection from the overhead lights reflected off the faucet. Rather than detaching him from the natural world, this viewpoint only made him appreciate it more deeply. ‘These are things I find enchanting and miraculous,’ he said...” "
I'm of the mind that knowing more about a subject makes it more interesting, and grants a greater and more meaningful depth of experience than a subject I know less about, or one there is less to know about. If knowing about a subject "ruins" it, then the subject may well have been the aesthetic equivalent of a "one-liner" anyway.
So if things get more interesting with more context, history, background, and content, then, because the universe is effectively infinite at the scale of human knowledge, so too is the possibility for beauty (if you'll pardon the term). The unknown is so interesting, in this case, because it's the ever-receding horizon of the known. The best art never stops, maybe, and maybe I just can't see that it ends right past the horizon.
To bring this back around, the Cosmosis project is, to me, a (comparatively small and dull) reflection by us-the-creators on the natural systems that complexity arises from in the universe. And these are the most amazing things of all, I think.
The next day: On further consideration, I think I've changed my mind. I consider aesthetics differently than I did yesterday, and will probably change my mind again tomorrow -- hopefully this is a constructive and progressive process.
Next time: Blob analysis and Infrared!