Further information on the active vision and reaching project
This project will build a demonstrator that simulates human eye movements in locating objects and then reaching for them with a simulated arm. The looking will be done with motorised cameras, and the reaching will be approximated by pointing with a motorised laser pointer.
The human eye has its highest resolution at the centre of the retina, the fovea, so the eye must move rapidly to fixate on (gaze at) different points of interest in order to recognise objects. The space of these movements is the gaze space of the eye.
But once an object has been located (at some position in gaze space), a robot will want to move its hand to the same position in order to grasp it. The movement space of the arm is its reach space, and a big question is how the gaze and reach spaces can be correlated (in both human brains and robots) so that any point in the world has a consistent correspondence across these two very different spaces.
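As a purely illustrative picture of such a correspondence, the sketch below (in Python, not the MATLAB tools mentioned later) assumes the gaze coordinate of a fixated point is the pair of pan/tilt angles of the two cameras, the reach coordinate is the pan/tilt of the laser pointer, and the relation between them is fitted by simple least squares from recorded gaze-reach pairs. The function and variable names are hypothetical, and the linear form is only a placeholder, not the method of the referenced work.

import numpy as np

def fit_gaze_to_reach(gaze_samples, reach_samples):
    """Fit a linear least-squares map from gaze coordinates to reach coordinates.

    gaze_samples  : (N, 4) array of camera angles (left pan, left tilt,
                    right pan, right tilt) recorded at fixation.
    reach_samples : (N, 2) array of laser-pointer angles (pan, tilt) that
                    point at the same physical locations.
    Returns a weight matrix W such that reach ~= [gaze, 1] @ W.
    """
    gaze = np.asarray(gaze_samples, dtype=float)
    reach = np.asarray(reach_samples, dtype=float)
    # Augment with a constant column so the fitted map can include an offset.
    X = np.hstack([gaze, np.ones((gaze.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, reach, rcond=None)
    return W

def gaze_to_reach(W, gaze_point):
    """Predict the laser (pan, tilt) for a single gaze coordinate."""
    x = np.append(np.asarray(gaze_point, dtype=float), 1.0)
    return x @ W

The references below take a developmental approach to this coordination problem, so a single global fit like this would at most serve as a baseline against which learned, incrementally built mappings could be compared.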
The goal of this project is to build experimental software that will interactively construct the gaze and reach spaces and learn to relate them. Various models of this relation can then be tested in the software and the results compared.
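One possible shape for that interactive construction, given only as an assumed outline, is an explore-fixate-record loop: command the laser pointer to a pose, let the cameras saccade until the laser spot is foveated, and hand the resulting gaze-reach pair to whatever mapping model is under test. The hardware-facing functions named here (point_laser, fixate_on_spot) and the angular limits are hypothetical placeholders.

import random

def explore_and_record(n_samples, point_laser, fixate_on_spot, mapping):
    """Collect gaze-reach pairs by letting the system look at its own pointer.

    point_laser(pan, tilt) : hypothetical function that moves the laser pointer.
    fixate_on_spot()       : hypothetical function that saccades the cameras
                             until the laser spot is foveated and returns the
                             camera angles at fixation (the gaze coordinate).
    mapping                : any object with an add_pair(gaze, reach) method.
    """
    pairs = []
    for _ in range(n_samples):
        # Pick a random reach-space pose within assumed pointer limits (degrees).
        reach = (random.uniform(-30.0, 30.0), random.uniform(-20.0, 20.0))
        point_laser(*reach)
        gaze = fixate_on_spot()          # gaze-space coordinate of the same point
        mapping.add_pair(gaze, reach)    # the model under test learns incrementally
        pairs.append((gaze, reach))
    return pairs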
After analysis and design, the software will be implemented on the provided hardware, which consists of two motorised cameras, a motorised laser pointer and a control computer. Software tools such as MATLAB will be available.
M. Huelse, S. McBride, J. Law, M. Lee: Integration of active vision and reaching from a developmental robotics perspective. IEEE Transactions on Autonomous Mental Development, 2(4), 355-367, 2010.
F. Chao, M.H. Lee, J.J. Lee: A developmental algorithm for ocular motor coordination. Robotics and Autonomous Systems, 58, 239-248, 2010.
M. Huelse, S. McBride, M. Lee: An evaluation of gaze modulated spatial visual search for robotic active vision. In: Belpaeme, T., et al. (Eds.): TAROS 2010, University of Plymouth, UK, pp. 83-90.