
Visual Estimation and Control of Robot Manipulating Systems / ROBUFFO GIORDANO, Paolo. - (2008).

Visual Estimation and Control of Robot Manipulating Systems

ROBUFFO GIORDANO, PAOLO
01/01/2008

Abstract

With this sentence from his Metaphysica, Aristotle perfectly introduces us to the importance of eyesight for humans, as well as for any advanced living being. Since, to a large extent, robotics is concerned with the emulation of human skills in an artificial context, a natural requirement is to cope with vision in order to fully interact with the world. In this respect, this Thesis explores the problem of exploiting visual information to control the motion of robotic systems equipped with onboard cameras.

We build our proposals upon the Visual Servoing paradigm, which bridges Computer Vision (image processing, scene interpretation, feature extraction, etc.) with topics from Control Theory. Indeed, within Visual Servoing a camera is modeled as a nonlinear function of the scene, i.e., of 3D states subject to rigid-body kinematics. Visual pose control therefore reduces to a problem of output regulation, or task realization if we conform to the robotic nomenclature. Once this view is adopted, any task-realization algorithm can be used to fulfill a visual task and, more generally, the problem can be tackled with the tools of Control Theory.

To fully exploit this formulation, however, a suitable task-oriented modeling of robot manipulators is required. Therefore, in the first part of the Thesis we develop a theoretical framework for the kinematic modeling and control of such systems. While keeping the treatment at the most general level, we place some emphasis on the cases of fixed-base and nonholonomic mobile manipulators: the former class is ubiquitous in robotics, while the latter conveniently merges dexterity with extended (unlimited) workspace capabilities. Special attention is also devoted to exploiting possible redundancy with respect to a given task, both to improve overall performance and to satisfy secondary constraints.

This modeling framework is then combined with Visual Servoing schemes to obtain a unified formulation for visual task control. Again, the two cases of fixed-base and nonholonomic mobile manipulators are explicitly considered, and their inherent peculiarities are pointed out. We also show how a suitable use of redundancy can effectively improve the fulfillment of Visual Servoing tasks; for instance, it allows the realization of tasks that would be close to singularity if addressed as a whole.

Another contribution is the possibility of estimating online several unmeasurable 3D quantities that are lost through the projective mapping performed by a camera. To this end, a set of estimation tools based on nonlinear observation theory is proposed, so as to recover at runtime any 3D information needed by Visual Servoing schemes. Compared with other possible approximations, this solution takes advantage of the observer's convergence to improve overall closed-loop stability.

Finally, the theoretical claims and simulation results presented in the Thesis are further validated by a number of experiments run on real robots equipped with cameras.
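To illustrate the output-regulation view described above, the following is a minimal numerical sketch (not taken from the Thesis) of one image-based Visual Servoing step with null-space redundancy resolution. The point-feature interaction matrix is the standard one; the robot Jacobian J_robot, the gain, and the secondary joint velocity q_dot_secondary are hypothetical placeholders.

```python
# Illustrative sketch only: one image-based Visual Servoing step
# treating the feature error as a regulated output, with redundancy
# resolved in the null space of the task.
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y)
    at depth Z, mapping the camera twist (v, w) to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2,  -x * y,        -x],
    ])

def ibvs_step(s, s_star, depths, J_robot, q_dot_secondary, gain=0.5):
    """Return joint velocities regulating the feature error e = s - s*.
    J_robot (hypothetical, 6 x n) maps joint velocities to the camera twist."""
    e = s - s_star
    # Stack one 2x6 interaction matrix per tracked point feature.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s.reshape(-1, 2), depths)])
    J_task = L @ J_robot                      # task Jacobian: features w.r.t. joints
    J_pinv = np.linalg.pinv(J_task)
    # Redundant joints execute a secondary motion in the task null space.
    N = np.eye(J_robot.shape[1]) - J_pinv @ J_task
    return -gain * J_pinv @ e + N @ q_dot_secondary
```

Here the feature error is regulated exactly as an output of the controlled system, while the null-space projector lets any redundant degrees of freedom serve a secondary objective without perturbing the visual task.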
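Along the same lines, the sketch below illustrates the kind of nonlinear observer alluded to above: it recovers the inverse depth chi = 1/Z of a tracked image point from the measured feature motion and the known camera twist. The observer structure follows the classical point-feature kinematics; the gains k1, k2, the function name, and the Euler integration are illustrative assumptions, not the Thesis' actual scheme.

```python
# Hedged sketch of a nonlinear observer recovering the unmeasured
# inverse depth chi = 1/Z of a tracked point from the known camera
# linear/angular velocity (v, w); gains and integration are illustrative.
import numpy as np

def observer_step(xi, xi_hat, chi_hat, v, w, dt, k1=5.0, k2=50.0):
    """One Euler step of the depth observer.
    xi      : measured normalized image point (x, y)
    xi_hat  : current estimate of the image point
    chi_hat : current estimate of the inverse depth 1/Z
    """
    x, y = xi
    # Rotation-induced part of the feature motion (depth-independent).
    f = np.array([x * y * w[0] - (1.0 + x**2) * w[1] + y * w[2],
                  (1.0 + y**2) * w[0] - x * y * w[1] - x * w[2]])
    # Translation-induced part, linear in the unknown chi = 1/Z.
    g = np.array([x * v[2] - v[0],
                  y * v[2] - v[1]])
    err = xi - xi_hat
    xi_hat_dot = f + g * chi_hat + k1 * err
    # Inverse-depth dynamics plus an innovation term driven by g.
    chi_hat_dot = (v[2] * chi_hat**2
                   + (w[0] * y - w[1] * x) * chi_hat
                   + k2 * g @ err)
    return xi_hat + dt * xi_hat_dot, chi_hat + dt * chi_hat_dot
```

Convergence of such a scheme requires persistent excitation: the camera must translate with a component not aligned with the projection ray, otherwise the depth is unobservable.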
The proposed methodology is also exploited within an industrial application involving pick-and-assemble tasks: a fixed-base manipulator carrying a camera on its gripper must (i) autonomously locate a set of planar parts on a table, (ii) pick them up, (iii) locate the corresponding holes on a movable plate, and (iv) insert the parts. Robot motion during the approach phases toward parts and holes is governed by Visual Servoing techniques, so as to obtain the needed degree of robustness and reactivity with respect to external 'disturbances', such as unexpected displacements of the target items.
2008
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, with all rights reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/917980
