
Posted by David - December 7, 2006 - 18:29

Networking with EyesWeb, Staring into the Abyss

Andrew appears to discuss something important in this entirely spontaneous picture.

No, I'm not trying not to laugh - that's just what I look like when I'm thinking deeply.

Simple sample output of (what was it?) 1,000 sets of 40 integers or so, drawn as colored lines between points of a grid.

We've been working in the VR room (where else?) with the additional small pressure of my upcoming MADT 401 revue, and the need to have something displayable in Banff.

Small EyesWeb Project
Our small EyesWeb project (Webcam tracking to MIDI output) was a bit lacking in presentation, I think, when we showed it at the U of C. The lighting and background conditions were entirely different; the VR room is painted black, has a grey floor, and a white screen; simple. The interactions lab has computers, lights, posters all over the place (not to mention a crowd of onlookers), so our previously clean-cut motion detection was filled with noise.
Clearly our environment in the VR room is insufficiently hostile to produce the motivation for an extremely adaptable background-subtraction system. This is good because it means our job (Cosmosis) is easier in the controlled environment, but it makes me want to invest effort in doing a better job at interpreting the video to extract motion. Time to re-read that paper!
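For reference, background subtraction at its simplest is just per-pixel differencing against a stored reference frame. A minimal sketch in Python (grayscale frames as 2-D lists; EyesWeb's actual implementation is surely fancier, and an adaptable version would also update the background slowly over time):

```python
def subtract_background(frame, background, threshold=30):
    """Return a binary motion mask: 1 where a pixel differs from the
    reference background by more than `threshold` gray levels.
    Frames are 2-D lists of 0-255 grayscale values."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10], [10, 10]]
frame = [[10, 200], [12, 10]]   # one big change, one tiny one
mask = subtract_background(frame, background)
# mask == [[0, 1], [0, 0]] -- only the big change survives the threshold
```

The threshold is exactly where the interactions lab bit us: with cluttered lighting, lots of pixels hover near it, so the mask fills with noise. An adaptive scheme (e.g. blending a little of each new frame into the background) is the usual fix.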

And what's it mean to Cosmosis?
Which brings us to the topic of what exactly we're going to get EyesWeb to send to the various Cosmosis components -- in short, it looks to be arrays of integers.
How do you transform video into a meaningful array of integers? And how do you reduce distortion produced by image artifacts - like someone's shirt which happens to be the same tone as the carpet, or the back wall. Perhaps there is some way to make sure that whatever people walk in with is -always- different from the environment.
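One possible answer to the "meaningful array of integers" question: collapse a binary motion mask down to a few summary numbers, say how much motion there is and where it's centered. This is purely illustrative, not something we've settled on:

```python
def motion_to_ints(mask):
    """Reduce a binary motion mask (2-D list of 0/1) to three ints:
    [moving-pixel count, centroid x, centroid y]."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    if not pts:
        return [0, 0, 0]
    cx = sum(x for x, _ in pts) // len(pts)
    cy = sum(y for _, y in pts) // len(pts)
    return [len(pts), cx, cy]

mask = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 1]]
print(motion_to_ints(mask))   # [4, 1, 1] -- four pixels centered at (1, 1)
```

The shirt-matches-the-carpet problem shows up here as missing mask pixels, which shifts the count and the centroid; summarizing rather than sending raw masks at least keeps such artifacts from dominating.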

Decoding EyesWeb Output
To illuminate the undocumented gloom of EyesWeb network output, Andrew used his lovely "Loom" program to examine the raw data spit out by EyesWeb. The opening shot of the connection is a bunch of mysterious junk (but did Uta find documentation on it?), with sets of 4 bytes following.

An int: 8,0,0,0 / x,x,x,x / 0,0,0,0
The 8 tells how many bytes will follow the initial set of four bytes, x,x,x,x is the integer in little-endian order (least significant byte first, so if the int is 2, the set is "2,0,0,0"), and that last set of bytes is just zero. Maybe it's what always ends a packet of data, maybe it's used for something else. I'm pretty sure we didn't find out, but I'd have to check with Andrew.

An array with 3 ints, [x,y,z]: 12,0,0,0 / x,x,x,x / y,y,y,y / z,z,z,z
So there are 12 bytes following the initial 4, and no trailing set of 0's. I think.
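Under that framing (a 4-byte little-endian length prefix, then that many bytes of 32-bit little-endian ints), a parser is short. A sketch, assuming our reverse-engineering above is right:

```python
import struct

def parse_packet(data, offset=0):
    """Parse one length-prefixed packet: a 4-byte little-endian byte
    count, then count/4 little-endian 32-bit ints. Returns the ints
    and the offset where the next packet would start."""
    (length,) = struct.unpack_from('<I', data, offset)
    n = length // 4
    values = list(struct.unpack_from('<%dI' % n, data, offset + 4))
    return values, offset + 4 + length

# Single int 2 -- prefix 8, the int, then the mystery trailing zero word:
single = struct.pack('<3I', 8, 2, 0)
print(parse_packet(single))        # ([2, 0], 12)

# Array [10, 20, 30] -- prefix 12, no trailing zeros:
array3 = struct.pack('<4I', 12, 10, 20, 30)
print(parse_packet(array3))        # ([10, 20, 30], 16)
```

Note the single-int case comes back as [2, 0]; whether that trailing zero word is padding or a terminator, the length prefix at least tells us how far to skip.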

Fascinating, eh?

Network Instability
Network programming is a pain. Or maybe I should say that getting it to work without messing up is a pain. Is there an issue with ACAD's network security? We don't think so, and don't really know. I'm not sure how positively the network people would respond if we told them about what we were doing, because institutions love to be network "security" fascists. Do we fully understand how socket networking operates -- in Windows? Not really. But we'll learn.

What's clear is that we don't trust EyesWeb all that much, so the plan is to have EyesWeb send data to a program (the "Repeater") running on its localhost, which then sends the data across the network to the (presently combined and bare-bones) Ecosystem/Display program.
It works with simulated EyesWeb output!
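The Repeater itself is essentially a byte-for-byte TCP relay. A minimal sketch in Python (the ports and destination address below are made up, and a real version would want reconnection and error handling):

```python
import socket

def run_repeater(listen_port, forward_host, forward_port):
    """Accept one local connection (from EyesWeb) and relay every
    chunk of bytes, unmodified, to the remote Ecosystem/Display
    program. Exits when the source closes the connection."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', listen_port))
    server.listen(1)
    src, _ = server.accept()
    dst = socket.create_connection((forward_host, forward_port))
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
    dst.close()
    src.close()
    server.close()

# run_repeater(7000, '10.0.0.5', 7001)  # hypothetical addresses
```

Keeping the Repeater dumb (it never interprets the bytes) means a flaky EyesWeb connection can die and be restarted without touching the Ecosystem/Display side.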

Short-term Goals

  • Get it working with actual EyesWeb output
  • Make the display more interesting - Andrew did well to change the colors from Ugly Green to the more interesting set you see in the above screenshot
  • Perhaps get a basic ecosystem running, on the level of "Life"?
  • Refine reading list, and divide it into two sections for scientific papers and art/philosophy

I think that covers it.
