
A Hierarchical Scheme for Surface Reconstruction and Discontinuity Detection

   

Vision (both biological and computer-based) is a complex process that can be characterized by multiple stages in which the original iconic information is progressively distilled and refined. The first researchers to approach the problem underestimated the difficulty of the task; after all, it does not require much effort for a human to open the eyes, form a model of the environment, recognize objects, move, and so on. In recent years, however, a scientific basis has been established for the first stages of the process (low- and intermediate-level vision), and a large set of special-purpose algorithms is available for high-level vision.

It is already possible to execute low-level operations (such as filtering, edge detection, and intensity normalization) in real time (30 frames/sec) using special-purpose digital hardware, such as digital signal processors. In contrast, higher-level visual tasks tend to be specialized to particular applications and require general-purpose hardware and software facilities.

Parallelism and multiresolution processing are two effective strategies for reducing the computational requirements of higher-level visual tasks (see, for example, [Battiti:91a;91b], [Furmanski:88c], [Marr:76a]). We describe a general software environment for implementing medium-level computer vision on large-grain-size MIMD computers. Our purpose has been to implement a multiresolution strategy based on iconic data structures (two-dimensional arrays that can be indexed by pixel coordinates), distributed to the computing nodes using domain decomposition, as sketched below.
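
As a concrete illustration, the following minimal C sketch (with assumed image size and node-grid dimensions; it is not the actual environment described here) shows how an N x N iconic array might be split into rectangular subdomains, one per node of a P x Q processor grid. In practice, each node would also keep a small ghost border of pixels so that values on subdomain boundaries can be exchanged with neighboring nodes.

/* Minimal sketch (not the actual environment of this chapter): block
 * decomposition of an N x N iconic array over a P x Q grid of MIMD nodes.
 * N, P, and Q are assumed values for illustration only. */
#include <stdio.h>

#define N 512   /* image size (assumed)        */
#define P 4     /* node grid rows (assumed)    */
#define Q 4     /* node grid columns (assumed) */

typedef struct {
    int row0, row1;   /* global row range owned by this node    */
    int col0, col1;   /* global column range owned by this node */
} Subdomain;

/* Compute the subdomain owned by node (pr, pc); remainder rows and
 * columns go to the first few nodes so that no node owns more than
 * one extra row or column. */
Subdomain decompose(int pr, int pc)
{
    Subdomain s;
    int rows = N / P, extra_r = N % P;
    int cols = N / Q, extra_c = N % Q;

    s.row0 = pr * rows + (pr < extra_r ? pr : extra_r);
    s.row1 = s.row0 + rows + (pr < extra_r ? 1 : 0) - 1;
    s.col0 = pc * cols + (pc < extra_c ? pc : extra_c);
    s.col1 = s.col0 + cols + (pc < extra_c ? 1 : 0) - 1;
    return s;
}

int main(void)
{
    for (int pr = 0; pr < P; pr++)
        for (int pc = 0; pc < Q; pc++) {
            Subdomain s = decompose(pr, pc);
            printf("node (%d,%d): rows %d-%d, cols %d-%d\n",
                   pr, pc, s.row0, s.row1, s.col0, s.col1);
        }
    return 0;
}

Spreading the remainder in this way keeps the per-node pixel count balanced to within one row or column, which is what makes a simple block decomposition attractive for iconic arrays.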

In particular, the environment has been applied successfully to the visible surface reconstruction and discontinuity detection problems. Initial constraints are transformed into a robust and explicit representation of the space around the viewer. In the shape from shading problem, the constraints are on the orientation of surface patches, while in the shape from motion problem, for example, the constraints are on the depth values.
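
A common way to pose such a reconstruction problem (this is a standard regularization formulation, not necessarily the exact functional used in this chapter) is as the minimization of an energy that trades fidelity to the sparse data constraints $d_i$ against smoothness of the reconstructed surface $f$:

\[
E(f) \;=\; \sum_{i}\bigl(f(x_i,y_i)-d_i\bigr)^2
\;+\;\lambda \int\!\!\int \bigl(f_x^2+f_y^2\bigr)\,dx\,dy ,
\]

where $\lambda$ weights the smoothness term; orientation constraints, as in shape from shading, enter through analogous terms involving the surface gradient.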

In Section 6.7, we will describe a way to compute motion (optical flow) from the intensity arrays of images taken at different times.

Discontinuities are necessary both to avoid mixing constraints pertaining to different physical objects during the reconstruction, and to provide a primitive perceptual organization of the visual input into different elements related to the human notion of objects.
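
A standard device for making discontinuities explicit in such a formulation (again a common choice rather than necessarily the scheme used here) is a binary line process $l_{pq}$ that disables the smoothness coupling between neighboring pixels $p$ and $q$ where a discontinuity is hypothesized, at the cost of a fixed penalty $\alpha$ per discontinuity:

\[
E(f,l) \;=\; \sum_{i}\bigl(f(x_i,y_i)-d_i\bigr)^2
\;+\;\lambda \sum_{\langle p,q\rangle} (1-l_{pq})\,(f_p-f_q)^2
\;+\;\alpha \sum_{\langle p,q\rangle} l_{pq},
\]

where the sums over $\langle p,q\rangle$ run over pairs of neighboring grid points.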




