Cosmosis



Posted by David - January 3, 2007 - 13:44

In the Grind


I know what we did in mid-December...



EyesWeb Video Interpretation
Fig. 1: EyesWeb running our video interpretation patch

Webcam video input is clipped and shrunk to a 90x90-pixel square to reduce the processing required; the loss of image quality is acceptable because we only deal with broad movements at this point, and the video input is not meant to be viewed, just interpreted. It's then greyscaled, crudely background-subtracted, and thresholded so that movement produces white pixels.
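For reference, here is roughly what that chain boils down to, sketched in Python/NumPy rather than as an EyesWeb patch. The frame size, threshold value, and function names are only illustration, not what the patch actually contains.

    import numpy as np

    def preprocess(frame_rgb, background_rgb, threshold=40):
        """Sketch of the preprocessing chain: greyscale, crude background
        subtraction, then threshold to a binary motion mask. Both inputs are
        assumed to be 90x90x3 arrays, already clipped and shrunk."""
        grey = frame_rgb.astype(float).mean(axis=2)
        bg = background_rgb.astype(float).mean(axis=2)
        diff = abs(grey - bg)                             # crude background subtraction
        return (diff > threshold).astype('uint8') * 255   # movement -> white pixels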

Subfield Sampling
Fig. 2: nine points of motion being tracked on webcam video.

We expanded on our small EyesWeb "Conductor" project to sample multiple points of motion.
The webcam video input is cut into a nine-piece grid of subfields. Each subfield is sampled for the amount of motion (the number of white pixels) and for the center (in X and Y coordinates) of the area of motion.
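A sketch of that sampling step, assuming the binary mask produced above; only the 3x3 grid idea matters here, the rest is illustrative:

    import numpy as np

    def sample_subfields(mask, grid=3):
        """For each of the grid x grid subfields, return (center_x, center_y,
        magnitude), where magnitude is the count of white (moving) pixels."""
        h, w = mask.shape
        samples = []
        for row in range(grid):
            for col in range(grid):
                sub = mask[row*h//grid:(row+1)*h//grid,
                           col*w//grid:(col+1)*w//grid]
                ys, xs = np.nonzero(sub)               # white (moving) pixels
                magnitude = len(xs)
                if magnitude:
                    cx = xs.mean() + col*w//grid       # centre of motion, image coords
                    cy = ys.mean() + row*h//grid
                else:
                    cx = cy = 0.0
                samples.append((cx, cy, magnitude))
        return samples                                 # nine (x, y, magnitude) triples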

(I should mention that this is an extremely crude way to try to interpret multiple points of motion from video because, well, it's limited to nine fixed subfields which each output just one center of motion. Not only is this a very small amount of information, it is inaccurate, especially if motion crosses over multiple fields. EyesWeb doesn't /easily/ support really nice interpretation for our purposes. Sure, there is a function meant to track a human body, but it only tracks -one- human body, and if EyesWeb doesn't have clear video it will give some very strange data. We'll ultimately have to output simplified video from EyesWeb and write our own interpreter, I think. But for now, this makes things run.)

Three numbers (X position, Y position, and magnitude) are taken from each of the nine subfields and assembled into a 1x27 matrix (because EyesWeb 3 has very limited array support). That matrix is sent to a proxy program running on the same machine, which in turn sends the information over the network to the machine running the ecosystem/renderer.
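The idea is simple enough to sketch: the proxy just has to get 27 numbers from one machine to another. Assume UDP and plain float packing for illustration; the real proxy's wire format may differ.

    import socket, struct

    def send_samples(samples, host='127.0.0.1', port=9000):
        """Flatten nine (x, y, magnitude) triples into a 1x27 packet, like the
        EyesWeb matrix, and push it over the network. Transport details are
        illustrative only."""
        flat = [value for triple in samples for value in triple]
        packet = struct.pack('<27f', *flat)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(packet, (host, port))
        sock.close()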

OpenGL Rendering
Fig. 3: a simple rendering of the nine points sent from EyesWeb to the renderer

The ecosystem/renderer receives data from the network and, at this point, renders it pretty much as-is, without a simulated ecosystem.
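On the receiving end, the renderer just has to unpack those 27 values back into nine points before drawing them. A sketch, mirroring the assumed packing above:

    import socket, struct

    def receive_points(port=9000):
        """Read one packet and recover nine (x, y, magnitude) points, ready to
        be drawn as-is. Port and packing mirror the illustrative sender above."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(('', port))
        packet, _ = sock.recvfrom(1024)
        sock.close()
        flat = struct.unpack('<27f', packet)
        return [flat[i:i+3] for i in range(0, 27, 3)]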

But the point is that it all works; the data flows!

A Modest Proposal for Simple Simulation



Fig. 1: Particle Decay
Points are perhaps the simplest form of agent in the multi-agent ecosystem, and for now they are the basic form of input. Points may decay over time according to how interesting they are; if each is given a magnitude based on its sampled subfield's area of motion, then points created with more motion will have more magnitude and a longer life.
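A minimal sketch of that idea; the decay rate and units are arbitrary placeholders:

    class PointAgent:
        """A point-agent whose life starts at the sampled magnitude and fades
        each frame. Values here are not tuned."""
        def __init__(self, x, y, magnitude):
            self.x, self.y = x, y
            self.life = magnitude          # more motion -> more magnitude -> more life

        def tick(self, decay=1.0):
            self.life -= decay
            return self.life > 0           # False once the point has decayed away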

Fig. 2: Field of Interaction
A point-agent should interact within a certain field. This interaction can be a bounce, or it could combine the properties of two colliding point-agents into one more interesting agent. From the standpoint of reducing processing time, it is useful to eliminate agents while creating interest.
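Something like the following, assuming the PointAgent sketch above and a made-up interaction radius:

    def try_combine(a, b, radius=5.0):
        """If two point-agents fall inside each other's field, merge them into
        one agent, pooling their remaining life. Radius and merge rule are guesses."""
        if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 > radius ** 2:
            return None                    # outside the field: no interaction
        return PointAgent((a.x + b.x) / 2, (a.y + b.y) / 2, a.life + b.life)

The caller would drop both originals and keep only the merged agent, which is where the processing saving comes from.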

Fig. 3: Interactions
Again, to reduce processing, we may allow only energetic "hot" particles to detect collisions and produce interactions. So "hot" can interact with "hot" and "cold", but "cold" particles won't even bother running the collision calculations.
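In code, that filtering is just a guard before the pairwise tests; the hot threshold is another arbitrary placeholder:

    def collide_all(agents, hot_threshold=50.0, radius=5.0):
        """One step of collision handling: only 'hot' agents initiate checks,
        so 'cold' agents never pay for them."""
        survivors = list(agents)
        merged = []
        for a in agents:
            if a not in survivors or a.life < hot_threshold:
                continue                   # cold, or already merged away: skip
            for b in list(survivors):
                if b is a:
                    continue
                new = try_combine(a, b, radius)
                if new is not None:
                    survivors.remove(a)
                    survivors.remove(b)
                    merged.append(new)
                    break
        return survivors + merged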

Fig. 4: Up the evolutionary ladder...
From simple particles we can build up primitive shapes and behaviours: hot particles will collide to form "larger" particles, which may combine to form line segments, which may collide and collect to form polygons, which will act in more complex ways, and so on...
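Purely speculative at this point, but the ladder might type out as something like this (again building on the PointAgent sketch above):

    class Segment:
        """Two 'large' particles joined into a line segment."""
        def __init__(self, a, b):
            self.ends = (a, b)
            self.life = a.life + b.life

    class Polygon:
        """Segments collected into a closed shape, with room for more complex
        behaviour later."""
        def __init__(self, segments):
            self.segments = list(segments)
            self.life = sum(s.life for s in segments)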

