Pattern Recogn. Phys., 1, 63-74, 2013
www.pattern-recogn-phys.net/1/63/2013/
doi:10.5194/prp-1-63-2013
© Author(s) 2013. This work is distributed
under the Creative Commons Attribution 3.0 License.
Regular Research Article
24 Jul 2013
Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment
H. Madokoro¹, M. Tsukada², and K. Sato¹
¹Faculty of Systems Science and Technology, Akita Prefectural University, Akita, Japan
²Meiji Co., Ltd., Tokyo, Japan

Abstract. This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using the scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter-propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for time-series images obtained with two robots of different sizes and under different movements demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we demonstrate the effectiveness of our method for category formation under changes in object appearance.
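
As a rough illustration of the front end of this pipeline, the following Python sketch chains SIFT description, OC-SVM selection of target feature points, and SOM-based visual words. The library choices (OpenCV, scikit-learn, MiniSom), grid size, parameters, and file names are assumptions for illustration only, not the authors' implementation; the ART-2 labelling and CPN category-map stages described in the paper are not reproduced here.

```python
# Sketch of the pipeline front end: SIFT descriptors -> OC-SVM selection -> SOM visual words.
# OpenCV, scikit-learn, and MiniSom are assumed stand-ins; the paper's ART-2 and CPN stages
# are only referenced in comments.
import cv2
import numpy as np
from sklearn.svm import OneClassSVM
from minisom import MiniSom

def extract_sift(image_path):
    """Detect keypoints and compute 128-D SIFT descriptors for one image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    return descriptors  # shape: (num_keypoints, 128)

def select_target_descriptors(descriptors, nu=0.1):
    """Keep descriptors that the one-class SVM regards as inliers (target feature points)."""
    ocsvm = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
    labels = ocsvm.fit_predict(descriptors)  # +1 inlier, -1 outlier
    return descriptors[labels == 1]

def build_visual_words(all_descriptors, grid=(8, 8), iterations=5000):
    """Train a SOM whose units act as visual words (codebook)."""
    som = MiniSom(grid[0], grid[1], 128, sigma=1.0, learning_rate=0.5)
    som.train_random(all_descriptors, iterations)
    return som

def image_histogram(som, descriptors, grid=(8, 8)):
    """Represent an image as a normalized histogram over the SOM's visual words."""
    hist = np.zeros(grid[0] * grid[1])
    for d in descriptors:
        i, j = som.winner(d)
        hist[i * grid[1] + j] += 1
    return hist / max(hist.sum(), 1)

if __name__ == "__main__":
    paths = ["frame_000.png", "frame_001.png"]  # hypothetical time-series frames
    per_image = [select_target_descriptors(extract_sift(p)) for p in paths]
    som = build_visual_words(np.vstack(per_image))
    histograms = [image_histogram(som, d) for d in per_image]
    # These visual-word histograms would feed the ART-2 labelling and
    # CPN category-map stages of the method described in the abstract.
```

In this sketch the per-image histograms over SOM units stand in for the visual-word representations that, in the paper's method, are subsequently labelled by ART-2 and organized on a CPN category map.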

Citation: Madokoro, H., Tsukada, M., and Sato, K.: Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment, Pattern Recogn. Phys., 1, 63-74, doi:10.5194/prp-1-63-2013, 2013.