From Vision To Actions - Towards Adaptive and Autonomous Humanoid Robots
Staff - Faculty of Informatics
You are cordially invited to attend the PhD Dissertation Defense of Jürgen LEITNER on Tuesday, September 23rd 2014 at 09h30 in room A24 (Red building)
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios raise a clear need to deal with more unstructured, changing environments.
I present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of this research is to investigate the use of visual feedback to improve the reaching and grasping capabilities of complex robots.
From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. A system based on Cartesian Genetic Programming for Image Processing (CGP-IP) is proposed. It is trained to detect objects in the incoming camera streams and has been successfully demonstrated in many different problem domains. The approach is fast, scalable and robust. Additionally, it generates human-readable programs that can be further customised and tuned. Although CGP-IP is a supervised learning technique, its integration with a biologically based scene-exploration algorithm is shown to enable the autonomous learning of object detection and identification.
To further improve vision, the object is manipulated. For this, a variety of frameworks and approaches are used to localise the object and then control the robot to reach and grasp it while avoiding collisions, even when receiving rather noisy signals (e.g. EMG measurements).
Finally, the vision and action sides are integrated to provide obstacle avoidance, using the objects detected in the visual stream, while reaching for the intended target object. Furthermore, this integration enables the use of the robot in non-static environments, i.e. the reaching motion is adapted on the fly based on the visual feedback received, e.g. when an obstacle is moved into the trajectory. To facilitate this, a combined integration of computer vision, artificial intelligence and machine learning is described herein and employed on the robot.
Dissertation Committee:
- Prof. Jürgen Schmidhuber, IDSIA Manno, Switzerland (Research Advisor)
- Prof. Alexander Förster, IDSIA Manno, Switzerland (Co-Advisor)
- Prof. Michael Bronstein, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Rolf Krause, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Stefano Nolfi, CNR Rome, Italy (External Member)
- Prof. Peter Corke, QUT Brisbane, Australia (External Member)