Object Detection and 3D Localization Using Color/Shape Features with Novel Fuzzy Classifiers
|Keywords:||object detection; fuzzy classifier; clustering; 3D localization; color feature|
|Publisher:||電機工程學系所 (Department of Electrical Engineering)|
|Abstract:||
This dissertation proposes novel fuzzy classifiers (FCs) and object-description features to address vision-based object detection, three-dimensional (3D) localization, and shape-extraction problems. The appearances of the detected objects are assumed to contain multiple colors in non-homogeneous distributions, which makes it difficult to extract object shape information. Two support vector machine (SVM)-trained FCs with zero-order Takagi-Sugeno (TS)-type and expanded rule-mapped consequent spaces are proposed. A self-splitting clustering (SSC) algorithm is proposed to learn the antecedent parameters of the FCs. The consequent parameters are learned through SVMs to endow the FCs with high generalization ability. For object detection in a single color image, color features extracted from the color components of an object and their geometrical distributions are proposed. The SSC algorithm is first used to flexibly partition the hue-saturation (HS) space, and histogram/entropy color features are then derived from the partitioned HS space. These color features are fed to the FC to detect an object. For 3D object localization, the use of a stereo red-green-blue (RGB) camera and an RGB-depth (RGBD) camera (Kinect) is studied. With the stereo RGB camera, after an object is detected in the left image, its nearby regions are color-segmented using the SSC-partitioned HS space. The depth and shape of the object are found from the disparity map obtained by matching the left and right color-segmented regions. With the RGBD camera, after an object is detected using the color features, the depth information available from the camera is used to extract the object's shape. A histogram-based shape feature is proposed to improve object detection performance.
The performance of the proposed FCs and of the object detection, 3D localization, and shape-extraction methods is verified through the detection of different objects and through comparisons with various classification and detection approaches.
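To illustrate the zero-order TS-type inference mentioned in the abstract, the following is a minimal sketch, not the dissertation's implementation: all names and shapes are assumptions, Gaussian antecedent memberships are used, and the scalar consequents (which the dissertation learns through an SVM) are simply taken as given here.

```python
import numpy as np

def fc_predict(x, centers, sigmas, consequents):
    """Zero-order TS fuzzy classifier inference (illustrative sketch).

    centers, sigmas: (R, D) antecedent parameters of R rules over D inputs,
                     e.g. obtained from a clustering of the training data.
    consequents:     (R,) scalar (zero-order) consequent of each rule; in the
                     dissertation these are SVM-learned, here assumed given.
    Returns a real-valued decision; its sign gives the binary class.
    """
    # Rule firing strength: product of Gaussian memberships over dimensions.
    firing = np.exp(-np.sum(((x - centers) / sigmas) ** 2, axis=1))
    # Weighted-average defuzzification over the rule consequents.
    return float(np.dot(firing, consequents) / (firing.sum() + 1e-12))
```

A rule fired near its center dominates the weighted average, so the classifier output smoothly interpolates between the consequents of nearby rules.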
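The histogram/entropy color features over a partitioned HS space can be sketched as below. This is a hedged illustration: nearest-center assignment to fixed partition centers stands in for the proposed SSC partitioning, and the function and variable names are hypothetical.

```python
import numpy as np

def hs_color_features(hs_pixels, centers):
    """Histogram and entropy color features over a partitioned HS space.

    hs_pixels: (N, 2) array of (hue, saturation) values from an object region.
    centers:   (K, 2) partition centers; assumed fixed here, whereas the
               dissertation derives the partition with the SSC algorithm.
    """
    # Assign each pixel to its nearest HS partition.
    dists = np.linalg.norm(hs_pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Normalized color histogram over the K partitions.
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    hist /= hist.sum()
    # Entropy of the color distribution (0 for a single-partition region).
    p = hist[hist > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return hist, entropy
```

A region whose pixels fall in one partition yields a one-hot histogram with zero entropy; multi-colored regions spread mass across partitions and raise the entropy, which is what makes these features discriminative for multi-colored objects.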
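For the stereo RGB camera, depth follows from the matched disparity map via the standard pinhole relation Z = fB/d. A minimal sketch, with hypothetical calibration values (a real system would obtain the focal length and baseline from camera calibration):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d.

    disparity_px: disparity of a matched region, in pixels.
    focal_px:     focal length in pixels (from calibration; assumed here).
    baseline_m:   distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Larger disparities correspond to closer objects, so the per-region disparities of the matched color segments directly give the 3D depth of the detected object.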
|Appears in Collections:||電機工程學系所 (Department of Electrical Engineering)|