Applying Neuroscience to Robot Vision


Scientists have long attempted to replicate human attributes and abilities such as detailed vision, spatial perception and object grasping in robots. After three years of intense work, the members of EYESHOTS* have made progress in controlling the interaction between vision and movement. As a result, they have designed an advanced three-dimensional visual system, synchronized with robotic arms, which could allow robots to observe and be aware of their surroundings and to remember the contents of those images in order to act accordingly.

For a humanoid robot to interact successfully with its environment and carry out tasks without supervision, these basic mechanisms, which are still not completely resolved, must first be refined, says Spanish researcher Ángel Pasqual del Pobil, director of the Robotic Intelligence Laboratory at Universitat Jaume I. His team has validated the consortium's findings with a system built at the University of Castellón (Spain), consisting of a robot head with moving eyes integrated into a torso with articulated arms.

To build the computer models, the team started from what is known about animal and human biology, bringing together experts in neuroscience, psychology, robotics and engineering. The study began by recording the activity of monkey neurons engaged in visual-motor coordination, since primates share our way of perceiving the world.

The first feature of our visual system that the members replicated artificially was saccadic eye movement, which is related to the dynamic change of attention. According to Dr. Pobil: “We constantly change the point of view through very fast eye movements, so fast that we are hardly aware of it. When the eyes are moving, the image is blurred and we can’t see clearly. Therefore, the brain must integrate the fragments as if it were a puzzle to give the impression of a continuous and perfect image of our surroundings.”
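A minimal sketch of this fragment-integration idea is shown below (this is not the EYESHOTS code; the NumPy implementation, the fragment size, the map size and the random gaze sampling are all illustrative assumptions). Each simulated saccade foveates a new point, grabs a small image fragment at a known gaze direction, and pastes it into a single egocentric map, so that the map gradually fills in "like a puzzle":

```python
import numpy as np

FRAGMENT = 32        # hypothetical size (pixels) of the foveated fragment
MAP_SIZE = 256       # hypothetical size of the egocentric map being built up

ego_map = np.zeros((MAP_SIZE, MAP_SIZE))        # integrated "memory" image
seen = np.zeros((MAP_SIZE, MAP_SIZE), bool)     # which pixels have been visited so far

def saccade_and_integrate(scene, gaze_row, gaze_col):
    """Foveate (gaze_row, gaze_col), grab a fragment, paste it into the map."""
    half = FRAGMENT // 2
    r0, r1 = gaze_row - half, gaze_row + half
    c0, c1 = gaze_col - half, gaze_col + half
    ego_map[r0:r1, c0:c1] = scene[r0:r1, c0:c1]  # what the "fovea" sees after the saccade
    seen[r0:r1, c0:c1] = True

# Usage: a random scene, a handful of saccades, then check how much has been integrated.
scene = np.random.default_rng(0).random((MAP_SIZE, MAP_SIZE))
rng = np.random.default_rng(1)
for _ in range(20):
    r, c = rng.integers(FRAGMENT, MAP_SIZE - FRAGMENT, size=2)
    saccade_and_integrate(scene, r, c)
print(f"fraction of the scene integrated so far: {seen.mean():.2f}")
```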

From the neural data, the experts developed computer models of the part of the brain that integrates images with movements of both the eyes and the arms. This integration is very different from the approach normally taken by engineers and robotics experts. The EYESHOTS consortium set out to prove that when we make a grasping movement towards an object, our brain does not first have to calculate the object’s coordinates.

As the Spanish researcher explains: “The truth is that the sequence is much more straightforward: our eyes look at a point and tell our arm where to go. Babies learn this progressively by connecting neurons.” These learning mechanisms have therefore also been simulated in EYESHOTS through a neural network that allows robots to learn how to look, how to construct a representation of the environment, how to preserve the appropriate images, and how to use their memory to reach for objects even if these are out of sight at that moment.
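A rough illustration of this "look, then reach" scheme follows (again a hypothetical sketch rather than the project's neural model: the planar two-link arm, the gaze features, the least-squares fit standing in for the neural network, and the "cup" memory entry are all assumptions). The simulated robot babbles with its arm, watches its own hand to collect gaze/joint pairs, learns a direct mapping from gaze signal to joint angles without ever computing Cartesian coordinates in the controller, and can later reach for a remembered object from its stored gaze signature:

```python
import numpy as np

ARM_L1, ARM_L2 = 0.30, 0.25             # hypothetical upper-arm / forearm lengths (m)

def fk(q):
    """Forward kinematics of a planar two-link arm: joint angles -> fingertip (x, y)."""
    x = ARM_L1 * np.cos(q[..., 0]) + ARM_L2 * np.cos(q[..., 0] + q[..., 1])
    y = ARM_L1 * np.sin(q[..., 0]) + ARM_L2 * np.sin(q[..., 0] + q[..., 1])
    return np.stack([x, y], axis=-1)

def gaze(p):
    """What the 'eyes' report after foveating point p: an azimuth and a vergence-like
    distance cue. No Cartesian coordinates are handed to the arm controller."""
    azimuth = np.arctan2(p[..., 1], p[..., 0])
    vergence = 1.0 / (np.linalg.norm(p, axis=-1) + 1e-6)
    return np.stack([azimuth, vergence], axis=-1)

def phi(g):
    """A small polynomial expansion of the gaze signal, used as learning features."""
    a, v = g[..., 0], g[..., 1]
    return np.stack([np.ones_like(a), a, v, a * v, a**2, v**2, v**3], axis=-1)

# 1. "Motor babbling": random arm movements; the eyes foveate the hand each time,
#    producing (gaze signal, joint angles) pairs -- the robot's own training data.
rng = np.random.default_rng(0)
q_babble = rng.uniform([0.2, 0.2], [2.0, 2.5], size=(2000, 2))
g_babble = gaze(fk(q_babble))

# 2. Learn the direct gaze -> joint-angles mapping (here by least squares, standing in
#    for the neural network described in the article).
W, *_ = np.linalg.lstsq(phi(g_babble), q_babble, rcond=None)

# 3. Reaching: foveate a target and feed the gaze signal through the learned mapping.
target = fk(np.array([0.9, 1.2]))       # a reachable point inside the babbled workspace
q_hat = phi(gaze(target)) @ W
print("reach error (m):", np.linalg.norm(fk(q_hat) - target))

# 4. Memory: store an object's gaze signature and reach for it later,
#    even if it is no longer in view.
memory = {"cup": gaze(target)}          # hypothetical remembered object
q_from_memory = phi(memory["cup"]) @ W
```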

“Our findings can be applied to any future humanoid robot capable of moving its eyes and focusing on one point. These are priority issues for the other mechanisms to work correctly,” points out the researcher.

EYESHOTS was funded by the European Union through the Seventh Framework Programme and coordinated by the University of Genoa (Italy).

* EYESHOTS (Heterogeneous 3-D Visual Perception Across Fragments)
Courtesy: ScienceDaily
