Cosmosis

Posted by David - March 6, 2007 - 23:16

Groovy Blob Detection

or: "Come rain or snow...?"

IR Lighting

The IR LED arrays arrived in the mail. They're really cool-looking; it's just too bad the power supplies aren't here yet, so we can't see 'em in action.

Blob Detection

Y'know, image analysis can be a lot of fun. I won't go into great detail, but once I figured out how to get a particular recursive function to propagate a Blob's identity through a group of attached pixels, it all came together.
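
Here's a minimal sketch of that idea in Python; it's not the actual project code, and names like grid and label_blobs are illustrative. It spreads a blob id through 4-connected active pixels:

    def label_blobs(grid):
        """grid: 2D list of booleans, True = active pixel."""
        h, w = len(grid), len(grid[0])
        labels = [[0] * w for _ in range(h)]  # 0 = not yet part of any blob
        next_id = 1

        def propagate(y, x, blob_id):
            # Stop at the image edge, inactive pixels, or claimed pixels.
            if not (0 <= y < h and 0 <= x < w):
                return
            if not grid[y][x] or labels[y][x]:
                return
            labels[y][x] = blob_id
            # Spread the blob's identity to the four attached neighbours.
            propagate(y - 1, x, blob_id)
            propagate(y + 1, x, blob_id)
            propagate(y, x - 1, blob_id)
            propagate(y, x + 1, blob_id)

        for y in range(h):
            for x in range(w):
                if grid[y][x] and not labels[y][x]:
                    propagate(y, x, next_id)
                    next_id += 1
        return labels, next_id - 1

(One caveat: a big blob will blow past Python's default recursion limit, so real code would want an explicit stack or a raised limit.)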

See - for this visualization, separate groups of pixels from my scowling mug are assigned random colors:

There's a threshold that disregards cells without a certain number of active neighbours, which keeps the number of blobs sane. The image below shows how the raw input (1) is interpreted as distinct blobs (2), and some simple information about these blobs is displayed (3), where the radius of each circle corresponds to the blob's total number of pixels, the vertical line to its min and max Y range, and the horizontal line to its min and max X range.
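
Assuming the label grid from the sketch above, those per-blob numbers could be gathered something like this (again a hypothetical helper, not project code); the circle radius comes from 'count', and the two lines from the min/max bounds:

    def blob_stats(labels):
        stats = {}  # blob id -> pixel count and bounding box
        for y, row in enumerate(labels):
            for x, blob_id in enumerate(row):
                if not blob_id:
                    continue
                s = stats.setdefault(blob_id, {'count': 0,
                                               'min_x': x, 'max_x': x,
                                               'min_y': y, 'max_y': y})
                s['count'] += 1
                s['min_x'] = min(s['min_x'], x)
                s['max_x'] = max(s['max_x'], x)
                s['min_y'] = min(s['min_y'], y)
                s['max_y'] = max(s['max_y'], y)
        return stats

The neighbour-count filter mentioned above would run before labelling, zeroing any cell with too few active neighbours.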

The interaction doesn't really "feel" like anything right now, unfortunately. More information needs to be extracted from the blobs because, as-is, they're ambiguous blobs, not meaningful shapes. For example, I'd like to check the ratio of the X and Y bounds, as well as the percentage of the bounding area that's 'active', and use those to decide whether to sub-divide the blob for more accuracy, creating a, ah, "quad-tree".
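
As a rough sketch of that test, with made-up placeholder thresholds (max_aspect and min_fill are not values from the project):

    def should_subdivide(s, max_aspect=3.0, min_fill=0.5):
        """s: one blob's entry from blob_stats() above."""
        width = s['max_x'] - s['min_x'] + 1
        height = s['max_y'] - s['min_y'] + 1
        aspect = max(width, height) / min(width, height)
        fill = s['count'] / float(width * height)  # fraction of the bbox that's active
        # A long, thin, or sparse blob is probably several shapes seen as
        # one, so split its bounding box into quadrants and recurse: the
        # quad-tree.
        return aspect > max_aspect or fill < min_fill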

Some of the instructors urged us to scan our "analogue" notes (I recall Andrew saying as much), so I finally did. Click on the helpful button below to see the full page of unreadable handwriting and unclear diagrams I scrawled today.

I have some very early notes around here somewhere as well. It may be fun to scan those to see how much our concept has changed. (And oh, has it ever: as the instructors observed last semester, the full scope of our project is incredibly ambitious. Getting everything polished would take a long, long time, so necessity dictates that the scope of our project be rather more limited than we may have hoped & dreamed. Where n = the number of features and t = time, t = n^2.)

A quick list of things to implement:

  • Sub/Mega Agents (tree structures of agents-within-agents)
  • Proper collision grid (per cell, not per agent! see the sketch at the end of this post) + rendering of agent interactions, i.e. bolt2a()
  • Render-lists to reduce unnecessary GL calls + gain framerate
  • More variety w/ particles + fun particle calls for visualizing input & agent events
  • Get more information from Input!: velocities, trees of sub/mega-agents
  • which requires: Better Input Parsing (i.e. 8 ints, the first? of which is for setting a tree structure?)
  • Make a Video-In "lite" w/o PyGame visualization & update it to use the new NumPy rather than the outdated Numeric (so it works on Linux)
  • Perhaps eliminate the need for EyesWeb entirely: do image processing w/ video direct from the webcam so, again, it works on Linux
  • Make GravWell(s) more interesting: link to agents as well as input?
  • Sound render-lists?

And so on.
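
For the collision-grid item above, here's one plausible shape for a per-cell grid, sketched as a spatial hash (Agent with x/y fields and CELL_SIZE are assumed stand-ins, not project code): agents get bucketed by coarse cell, so each one only tests the 3x3 block of cells around it instead of every other agent.

    from collections import defaultdict

    CELL_SIZE = 32  # assumed world units per grid cell

    def build_grid(agents):
        grid = defaultdict(list)
        for a in agents:
            grid[(int(a.x // CELL_SIZE), int(a.y // CELL_SIZE))].append(a)
        return grid

    def nearby(grid, agent):
        cx, cy = int(agent.x // CELL_SIZE), int(agent.y // CELL_SIZE)
        # Only the 3x3 block of cells around the agent needs checking.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for other in grid.get((cx + dx, cy + dy), ()):
                    if other is not agent:
                        yield other

Rebuilding the grid each frame is usually cheaper than it sounds, since it's a single pass over the agents.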


