Enfoque UTE
On-line version ISSN 1390-6542 · Print version ISSN 1390-9363

Abstract
CORTES ZARTA, Juan F.; GIRALDO TIQUE, Yesica A. and VERGARA RAMIREZ, Carlos F. Convolutional Neural Network for Spatial Perception of InMoov Robot Through Stereoscopic Vision as an Assistive Technology. Enfoque UTE [online]. 2021, vol. 12, n. 4, pp. 88-104. ISSN 1390-6542. https://doi.org/10.29019/enfoqueute.776.
A major challenge in the development of assistive robots is improving their spatial perception so they can identify objects across varied scenarios, which requires tools for analyzing and processing artificial stereo vision data. This paper therefore describes a convolutional neural network (CNN) algorithm implemented on a Raspberry Pi 3, mounted in the head of a replica of the open-source humanoid robot InMoov, to estimate the X, Y, Z position of an object within a controlled environment. The paper covers the construction of the InMoov robot head, the application of transfer learning to detect and segment an object within a controlled environment, the design of the CNN architecture, and, finally, the selection and evaluation of training parameters. The system achieved an estimated average error of 27 mm in the X coordinate, 21 mm in the Y coordinate, and 4 mm in the Z coordinate. These figures are significant because the coordinates are intended to guide a robotic arm in reaching and grasping the object, a task left for future work.
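The abstract describes estimating an object's X, Y, Z position from a stereoscopic camera pair. The geometric step underlying any such estimate can be sketched with standard stereo triangulation; the camera parameters below (focal length, baseline, principal point) are illustrative assumptions, not values reported in the paper:

```python
def stereo_xyz(u_left, v_left, u_right,
               focal_px=700.0,       # assumed focal length in pixels
               baseline_mm=65.0,     # assumed distance between the two cameras (mm)
               cx=320.0, cy=240.0):  # assumed principal point for a 640x480 image
    """Triangulate an (X, Y, Z) position in mm from matched pixel coordinates
    of the same object in the left and right images."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("object must appear shifted between the two views")
    z = focal_px * baseline_mm / disparity  # depth from disparity
    x = (u_left - cx) * z / focal_px        # horizontal offset from optical axis
    y = (v_left - cy) * z / focal_px        # vertical offset from optical axis
    return x, y, z

# Example: a 50 px disparity with these assumed parameters gives
# depth Z = 700 * 65 / 50 = 910 mm.
x, y, z = stereo_xyz(u_left=400, v_left=260, u_right=350)
print(round(x, 1), round(y, 1), round(z, 1))
```

In the paper's approach a CNN replaces or refines this explicit geometry, learning the mapping from stereo image data to coordinates, but the triangulation above shows why two offset views suffice to recover depth.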
Keywords: humanoid robotics; convolutional neural networks; spatial perception; transfer learning.