
An analysis of the visuomotor behavior of upper limb amputees to improve prosthetic control / Gregori, Valentina. - (2020 Feb 28).

An analysis of the visuomotor behavior of upper limb amputees to improve prosthetic control

GREGORI, VALENTINA
28/02/2020

Abstract

Upper limb amputation is a traumatic event with a dramatic impact on a person's everyday life. The available solutions to restore the functionality of the missing hand via myoelectric prostheses have become ever more advanced in terms of hardware, but they still fail to provide natural and robust control. One of the main difficulties is the variability and degradation of the electromyographic signals, which are also affected by amputation-related factors. To overcome this problem, it has been proposed to combine surface electromyography with other sources of information that are less affected by the amputation. Some recent studies have proposed to improve control by integrating gaze, as visual attention in humans is often predictive of future actions. For instance, in manipulation tasks the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated.

However, the initial investigations reported in the literature that combine vision with surface electromyography do so in an unnatural manner, meaning that users need to alter their behavior to accommodate the system. The exploitation of gaze envisioned in this work is the opposite: the prosthetic system should interpret the subject's natural behavior. This requires a detailed understanding of the visuomotor coordination of amputees, to determine when and for how long gaze may provide helpful information about an upcoming grasp. Moreover, while some studies have investigated the disruption of gaze behavior when using a prosthesis, no study has considered whether visuomotor coordination is disrupted by the amputation itself.

In this work, we quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first-person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine, in each individual frame, the pixel distances between the gaze point, the target object, and the limb.

Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, the gaze behavior of the amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.

Based on this knowledge, we show in a proof of concept that combining gaze with surface electromyography improves grasp recognition, for both intact and amputated subjects, compared to using the latter modality alone. To make the integration natural for the user, we devised a method that combines the two modalities simultaneously and weights the visual features based on their relevance. We present this evaluation as a proof of concept, since the experiments were executed in a standard laboratory environment. We therefore conclude with a study that highlights the difficulties machine learning approaches need to overcome to become practically relevant in daily-living conditions.
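The abstract does not specify which deep learning methods the video-analysis procedure uses. Purely as an illustration of the kind of per-frame analysis described, the sketch below segments objects with an off-the-shelf pretrained Mask R-CNN (an assumption, not the thesis's actual model) and computes the pixel distance from the gaze point to the nearest object mask; gaze_to_object_distance is a hypothetical helper introduced here for illustration.

```python
# Illustrative sketch only: the thesis abstract does not name the model or
# thresholds used. Here an off-the-shelf pretrained Mask R-CNN (torchvision
# >= 0.13) segments the objects in one first-person video frame, and the
# distance in pixels from the gaze point to the nearest object mask is
# computed, mirroring the per-frame gaze-object distances described above.
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def gaze_to_object_distance(frame, gaze_xy, score_thresh=0.7):
    """Distance in pixels from gaze_xy = (x, y) to the closest detected
    object mask in an RGB frame (H x W x 3, float values in [0, 1])."""
    with torch.no_grad():
        pred = model([torch.from_numpy(frame).permute(2, 0, 1).float()])[0]
    distances = []
    for mask, score in zip(pred["masks"], pred["scores"]):
        if score < score_thresh:
            continue  # discard low-confidence detections
        ys, xs = np.nonzero(mask[0].numpy() > 0.5)  # pixels inside the mask
        if xs.size:
            distances.append(np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]).min())
    return min(distances) if distances else None
```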
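Similarly, the relevance-weighted fusion of gaze and surface electromyography is only named, not detailed, in the abstract. A minimal sketch of one plausible scheme, under assumptions of our own: scale the visual feature block by a weight that decays with the gaze-to-object distance, concatenate it with the sEMG features, and train a standard multi-class classifier on the fused vector. The exponential weighting, the decay constant tau, and the classifier choice are all assumptions for illustration.

```python
# Minimal sketch of one plausible relevance-weighted fusion, not the
# thesis's actual method: the visual feature block is scaled by a weight
# that decays with the gaze-to-object distance (exponential decay and the
# constant tau are assumptions), concatenated with the sEMG features, and
# fed to an ordinary multi-class classifier for grasp recognition.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_features(emg_feats, visual_feats, gaze_obj_dist_px, tau=100.0):
    """Concatenate sEMG and gaze-weighted visual features."""
    w = np.exp(-gaze_obj_dist_px / tau)  # ~1 when the gaze is on the object
    return np.concatenate([emg_feats, w * visual_feats])

# Toy usage on random data: 200 windows, 10 sEMG + 5 visual features,
# four grasp types; real features would come from the recorded dataset.
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.normal(size=10), rng.normal(size=5), d)
              for d in rng.uniform(0, 300, size=200)])
y = rng.integers(0, 4, size=200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```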
Files attached to this product

File: Tesi_dottorato_Gregori.pdf
Access: open access
Type: Doctoral thesis
License: Creative Commons
Size: 5.87 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1373581