In the functionality-oriented approach to object recognition, the system is given knowledge about object categories in terms of the function that such an object serves. For example, the system could be told that a ``straight back chair'' is something which ``provides sittability,'' ``provides back support'' and ``provides stability.'' Of course, the definition of these functional properties must be given in more detail than just a symbolic label. In our implementation [43, 44, 36], we use five `chunks' of parameterized procedural knowledge: stability, free space, dimensions, relative orientation and proximity. Functional properties such as those mentioned above are defined in terms of some sequence of invocations of these low-level primitives.

We are continuing to investigate extensions and generalizations of the functionality-oriented approach to object recognition. One interesting area for extension is articulated objects. To date, our work on function-based recognition has assumed that objects are rigid. We would also like to be able to handle non-rigid objects. Initially, we are considering just articulated assemblies of rigid parts (such as scissors) rather than general deformable objects (such as cloth).

A first step is to create a visualization tool that would give an animated display of a defined object assembly as it moves through the allowed range(s) of articulation. Given the existing software tools available in our lab, this level of project completion is reasonable for any of our REU participants. This level of the project also has a very well-defined structure and should be exciting to students because of the large computer graphics component involved. Another level of completion would be to define a set of representative object models for some interesting class of example objects. (Perhaps the super-ordinate category of hand tools, which would have basic-level categories such as hammer, screwdriver, wrench and shears.)
This would give an empirical demonstration of the generality of our scheme for defining articulated assemblies. A further level of completion would involve computational methods of recovering an articulated shape model from a time sequence of individual 3-D shape models. From this level of completion, it should then be possible to hypothesize a well-founded function-based definition of the various object categories. This could then be tested experimentally by implementing appropriate extensions to our function-based recognition system. This last level of completion is certainly beyond the level of any one student in one year. It may well be possible for several students to cooperate on different sub-projects in this area.
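To make the earlier scheme concrete, a functional property defined as a sequence of invocations of low-level primitives can be sketched as below. All function names, shape representations, and parameter ranges here are illustrative assumptions for exposition only, not the actual primitives or thresholds of the implementation in [43, 44, 36].

```python
# Sketch: a functional property is a sequence of parameterized
# primitive invocations, all of which must succeed on a shape.
# The shape representation (a dict) and all thresholds are assumed.

def check_dimensions(shape, key, lo, hi):
    """Primitive (assumed form): a measured dimension lies in [lo, hi]."""
    return lo <= shape[key] <= hi

def check_free_space(shape, region):
    """Primitive (assumed form): the named region is unobstructed."""
    return region in shape.get("free_regions", [])

# ``provides sittability'' expressed as a sequence of primitive
# invocations: (primitive, arguments) pairs. Values are illustrative.
PROVIDES_SITTABILITY = [
    (check_dimensions, ("seat_height", 0.3, 0.6)),  # metres, assumed range
    (check_dimensions, ("seat_area", 0.1, 0.5)),    # square metres, assumed
    (check_free_space, ("above_seat",)),            # room for a person
]

def has_property(shape, prop):
    """A shape provides the property if every invocation succeeds."""
    return all(prim(shape, *args) for prim, args in prop)

# A candidate shape description for a straight back chair's seat.
chair = {
    "seat_height": 0.45,
    "seat_area": 0.2,
    "free_regions": ["above_seat"],
}
print(has_property(chair, PROVIDES_SITTABILITY))  # prints: True
```

Other properties such as ``provides back support'' would be further invocation sequences over primitives like relative orientation and proximity, and a category definition would conjoin several such properties.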
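As a rough illustration of the first project level, an articulated assembly of rigid parts might be represented as parts linked by joints with allowed motion ranges; sampling each joint's range gives the key frames an animated display would render. This is a hypothetical sketch, and the class and method names are assumptions, not our lab's software.

```python
# Sketch: one revolute joint with an allowed angular range, sampled
# evenly to produce key-frame poses for an animation loop. Names and
# structure are illustrative assumptions.
import math

class RevoluteJoint:
    """One rotational degree of freedom with an allowed angular range."""

    def __init__(self, name, min_angle, max_angle):
        self.name = name
        self.min_angle = min_angle
        self.max_angle = max_angle

    def sample(self, steps):
        """Yield `steps` evenly spaced angles across the allowed range."""
        for i in range(steps):
            t = i / (steps - 1)
            yield self.min_angle + t * (self.max_angle - self.min_angle)

# Scissors: two rigid blades sharing a single revolute joint,
# articulating from fully closed (0 deg) to fully open (assumed 60 deg).
pivot = RevoluteJoint("pivot", 0.0, math.radians(60))

# An animated display would pose the rigid parts at each sampled angle
# and render a frame; here we only enumerate the key-frame joint angles.
frames = list(pivot.sample(5))
```

Assemblies with several degrees of freedom (e.g. shears with a lock, or a wrench with an adjustable jaw) would carry a list of such joints, with the animation stepping each joint through its own range.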
Additional possible project areas for students at USF include range image processing [65, 42] and medical imaging applications [58, 68, 69, 70, 71, 72, 73].