My last post has me thinking about other automated processes that may be upcoming, or perhaps already released.
For example, automated z-depth analysis: the computer analyses motion parallax and so forth to ascertain which objects are in front of which (and how far away they are), then gives each object a greyscale value relative to that depth. If I am waving my hand in front of the camera, my hand would be white (close), the house behind me would be grey (farther), and the verdant hills in the background would be black (farther still). These guys claim they will do something called automated z-depth extraction with their PFTrack 3, detailed in this press release, but they don't really say what "automated" means.
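Just to make the parallax-to-greyscale idea concrete, here's a toy numpy sketch (my own illustration, not anything PFTrack actually does): assume we already have a per-pixel parallax magnitude from tracking, and remember that under camera motion, nearer objects slide across the frame faster. Mapping the biggest parallax to white and the smallest to black gives exactly the hand/house/hills picture described above.

```python
import numpy as np

def parallax_to_depth_map(parallax, eps=1e-6):
    """Map per-pixel parallax magnitudes (pixels/frame) to an 8-bit
    greyscale depth image: large parallax = near = white,
    small parallax = far = black."""
    p = np.asarray(parallax, dtype=np.float64)
    lo, hi = p.min(), p.max()
    if hi - lo < eps:                     # no motion spread: no depth cue
        return np.zeros(p.shape, dtype=np.uint8)
    norm = (p - lo) / (hi - lo)           # 0 = least motion, 1 = most
    return (norm * 255).round().astype(np.uint8)

# toy scene: hand moves fast, house slowly, hills barely at all
scene = np.array([[12.0, 12.0, 3.0, 0.2],
                  [12.0, 12.0, 3.0, 0.2]])
depth = parallax_to_depth_map(scene)   # hand -> 255, hills -> 0
```

Of course the hard part, which this skips entirely, is recovering those parallax values reliably in the first place.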
Which brings me to my main question: we have software that can perform pretty fancy edge-detection analysis (Twixtor is pretty good at this, for example), so why can't programs identify basic moving shapes as basic moving shapes and perform a sort of foreground/background analysis? It seems like it would be so easy: isolate the moving orange thing from the drab grey things. But (with the possible exception of some of the techniques from my last post) I can't think of anybody who's doing this.
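The crudest version of "isolate the moving thing" is plain frame differencing, which is about four lines of numpy. I'm sketching this as an assumption about how a minimal implementation might look, not as what Twixtor or anyone else ships; real tools need to cope with camera motion, noise, and lighting changes, which this ignores.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Crude foreground/background split by frame differencing:
    pixels whose brightness changed by more than `threshold`
    between frames are flagged as 'moving' (True)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# toy greyscale frames: a bright object shifts one pixel to the right
prev = np.zeros((4, 6), dtype=np.uint8)
curr = np.zeros((4, 6), dtype=np.uint8)
prev[1:3, 1:3] = 200        # object at its old position
curr[1:3, 2:4] = 200        # object at its new position
mask = motion_mask(prev, curr)   # True only where the object's edges moved
```

Even this toy shows why the problem is harder than it looks: the mask lights up at the object's leading and trailing edges rather than giving you a clean silhouette, so you'd still need grouping and tracking on top.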
So, everybody, have any of you seen research on this kind of stuff? Even ANY kind of automated motion or subject analysis? There are lots of specialized medical research gizmos that isolate and track eye movement and I recall a government program designed to analyse people's gait (and until you have to walk in front of a greenscreen to get on a plane, I assume they are able to ascertain foreground/background), but I would love to see a sifter that could analyse a media library and, say, pull up all instances of a white Buick.
This is vaguely project related, so any help/thoughts are appreciated.