Monocular and LIDAR based determination of shape, attitude and relative state of a non-cooperative, earth-orbiting satellite / Volpe, Renato; Palmerini, Giovanni Battista; Sabatini, Marco. - Print. - (2017). (Paper presented at the International Astronautical Congress 2017, held in Adelaide).
Monocular and LIDAR based determination of shape, attitude and relative state of a non-cooperative, earth-orbiting satellite
Volpe, Renato; Palmerini, Giovanni Battista; Sabatini, Marco
2017
Abstract
The relevance of autonomy in space systems during rendezvous and docking operations has been increasing in recent years. To this end, a robust GNC architecture is required, which relies heavily on the navigation system's performance and must ensure both high efficiency and safety, i.e. low errors and no collisions with the target satellite. One of the newly explored fields is optical navigation. Passive optical sensors such as cameras can greatly improve the characterization of the observed scene, thus broadening awareness of what is happening in the mission scenario. The present research investigates the development of a filter that estimates the shape and the relative attitude, position and velocity of a non-cooperative, possibly unknown satellite orbiting the Earth, observed by a camera and a LIDAR mounted on a chaser satellite whose objective is to complete a docking maneuver. The image taken at a given time is processed, features are extracted from it and matched with those extracted from the image at the previous time step. The matched features, along with the relative distance measured by the LIDAR, are merged inside an unscented Kalman filter, which predicts, updates and improves the state estimate over the iterations. The expedient used in the filter is to give a 3D characterization to the 2D features used as measurements. The filter estimates the 3D coordinates of these points, i.e. the target's shape, in the camera reference frame; their kinematic differential equations depend on the target's attitude dynamics and on the chaser's relative orbital dynamics. Thus, the target's attitude parameters (the quaternions), its angular velocity vector, the relative position and velocity vectors and the tracked 3D points are all included in the state vector and estimated by the filter. Each tracked and estimated 3D point corresponds to a point in the target's body reference frame.
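As a minimal sketch of how such a state vector could be assembled, assuming a scalar-last quaternion and N tracked points (the function name, ordering and example values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def build_state(quat, omega, rel_pos, rel_vel, points_3d):
    """Stack the target attitude quaternion (4), angular velocity (3),
    relative position (3), relative velocity (3) and the N tracked 3D
    feature points (3N) into a single filter state vector."""
    return np.concatenate([quat, omega, rel_pos, rel_vel,
                           np.asarray(points_3d, dtype=float).ravel()])

# Example with N = 2 tracked points (illustrative values only)
x = build_state(
    quat=np.array([0.0, 0.0, 0.0, 1.0]),    # identity attitude, scalar-last
    omega=np.zeros(3),                       # rad/s
    rel_pos=np.array([0.0, 0.0, 50.0]),      # m, along the camera boresight
    rel_vel=np.zeros(3),                     # m/s
    points_3d=[[1.0, 0.5, 49.0], [-0.8, 0.2, 50.5]],
)
assert x.size == 4 + 3 + 3 + 3 + 3 * 2      # 19 states for N = 2
```

The state dimension grows with the number of tracked features, so the unscented transform must regenerate its sigma points whenever features are added or dropped.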
By subtracting the relative position vector from each 3D point and rotating the result by the estimated target attitude matrix, the point's coordinates are obtained in the body reference frame. Doing this for all the tracked points builds a 3D map of the target.
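The body-frame reconstruction step just described can be sketched as follows; the scalar-last quaternion convention and the function names are assumptions made for this illustration:

```python
import numpy as np

def quat_to_dcm(q):
    """Rotation matrix from a scalar-last unit quaternion [qx, qy, qz, qw]
    (convention assumed here; the paper's convention may differ)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def map_to_body(point_cam, rel_pos_cam, q_target):
    """Express an estimated 3D point in the target body frame: subtract the
    relative position vector, then rotate by the estimated attitude matrix."""
    diff = np.asarray(point_cam, dtype=float) - np.asarray(rel_pos_cam, dtype=float)
    return quat_to_dcm(q_target) @ diff

# With an identity attitude the mapping reduces to point - rel_pos,
# e.g. [1.0, 0.5, 49.0] - [0.0, 0.0, 50.0] -> [1.0, 0.5, -1.0]
p_body = map_to_body([1.0, 0.5, 49.0], [0.0, 0.0, 50.0], [0.0, 0.0, 0.0, 1.0])
```

Applying `map_to_body` to every tracked point accumulates the 3D map of the target in its own body frame, as the abstract describes.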
File: Volpe_monocular_2017.pdf (923.14 kB, Adobe PDF)
Access: archive administrators only (contact the author)
Type: publisher's version (published with the publisher's layout)
License: All rights reserved