Early fusion of descriptors extracted from multi-level saliency maps for image classification

Authors

E. Fidalgo, E. Alegre, L. Fernández-Robles, V. González-Castro

DOI:

https://doi.org/10.4995/riai.2019.10640

Keywords:

Computer vision, Detection algorithms, Machine learning, Image processing, Encoding, Classifiers

Abstract

In this paper we propose a method to improve image classification on datasets in which each image contains a single object. To do so, we treat saliency maps as if they were topographic maps and filter out the features belonging to the image background, thereby improving the encoding that a classic Bag of Visual Words (BoVW) model computes over the whole image. First, we evaluate six well-known algorithms for saliency map generation and select GBVS and SIM, as they are the ones that retain most of the information about the object. Using the information in these saliency maps, we discard the densely extracted SIFT descriptors that belong to the background by filtering features against binary images obtained at several levels of the saliency map. We perform this descriptor filtering by obtaining layers at several levels of the saliency map, and we evaluate the early fusion of the SIFT descriptors contained in those layers on five different datasets. The experimental results indicate that the proposed method always outperforms the baseline when the first two layers of GBVS or SIM are combined and the dataset contains images with a single object.
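
As a concrete illustration of the filtering and early-fusion steps described above, the sketch below thresholds a saliency map at several levels, keeps only the dense-SIFT descriptors whose keypoints fall inside each resulting binary layer, and stacks the surviving descriptors before BoVW encoding. This is a minimal sketch under stated assumptions, not the authors' implementation: the saliency map is assumed to be normalized to [0, 1], the threshold values and helper names (saliency_layers, filter_descriptors, early_fuse) are illustrative, the layers are taken as nested thresholds rather than the exact multi-level decomposition used in the paper, and the vocabulary construction, visual-word histogram and SVM stages of the BoVW pipeline are omitted.

```python
import numpy as np

def saliency_layers(saliency, levels=(0.25, 0.50)):
    """Binary masks obtained by thresholding a [0, 1] saliency map.

    The threshold values are illustrative, not the levels used in the paper.
    """
    return [saliency >= t for t in levels]

def filter_descriptors(keypoints, descriptors, mask):
    """Keep the dense-SIFT descriptors whose (x, y) keypoint lies on the mask."""
    cols = keypoints[:, 0].astype(int)   # x -> column index
    rows = keypoints[:, 1].astype(int)   # y -> row index
    keep = mask[rows, cols]
    return descriptors[keep]

def early_fuse(keypoints, descriptors, saliency, levels=(0.25, 0.50)):
    """Early fusion: stack the descriptors retained by each saliency layer.

    The fused set would then feed a standard BoVW pipeline
    (k-means vocabulary, visual-word histograms, SVM classifier).
    """
    masks = saliency_layers(saliency, levels)
    return np.vstack([filter_descriptors(keypoints, descriptors, m) for m in masks])

# Toy usage with random data standing in for dense SIFT on a 100x100 image.
rng = np.random.default_rng(0)
kps = rng.uniform(0, 100, size=(500, 2))        # (x, y) keypoint centres
desc = rng.integers(0, 256, size=(500, 128))    # 128-D SIFT descriptors
sal = rng.random((100, 100))                    # stand-in saliency map in [0, 1]
print(early_fuse(kps, desc, sal).shape)         # e.g. (n_kept, 128)
```

Because consecutive thresholds are nested in this sketch, descriptors from the most salient regions appear more than once in the fused set; whether the layers are nested or band-wise in the original method is not specified in the abstract, so this should be read as one possible interpretation.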


Author biographies

E. Fidalgo, Universidad de León

Departamento de Ingeniería Eléctrica y de Sistemas y Automática

Researcher at INCIBE (Spanish National Institute of Cybersecurity)

L. Fernández-Robles, Universidad de León

Departamento de Ingenierías Mecánica, Informática y Aeroespacial

Researcher at INCIBE (Spanish National Institute of Cybersecurity)

V. González-Castro, Universidad de León

Departamento de Ingeniería Eléctrica y de Sistemas y Automática

Researcher at INCIBE (Spanish National Institute of Cybersecurity)



Published

12-06-2019

How to cite

Fidalgo, E., Alegre, E., Fernández-Robles, L. y González-Castro, V. (2019) «Fusión temprana de descriptores extraídos de mapas de prominencia multi-nivel para clasificar imágenes», Revista Iberoamericana de Automática e Informática industrial, 16(3), pp. 358–368. doi: 10.4995/riai.2019.10640.
