Learning to See through a Few Pixels: Multi Streams Network for Extreme Low-Resolution Action Recognition / Russo, P.; Ticca, S.; Alati, E.; Pirri, F. - In: IEEE ACCESS. - ISSN 2169-3536. - 9:(2021), pp. 12019-12026. [10.1109/ACCESS.2021.3050514]
Learning to See through a Few Pixels: Multi Streams Network for Extreme Low-Resolution Action Recognition
Russo P.; Ticca S.; Alati E.; Pirri F.
2021
Abstract
Human action recognition is one of the most pressing problems in societal emergencies of any kind. Technology helps to solve such problems, but often at the cost of human privacy. Several approaches have considered the relevance of privacy in the pervasive process of observing people, and new algorithms have been proposed to deal with low-resolution images that hide people's identity. However, many of these methods do not consider that public security demands real-time solutions: active cameras require flexible distributed systems in sensitive areas such as airports, hospitals, stations, squares and roads. To reconcile human privacy with real-time supervision, we propose a novel deep architecture, the Multi Streams Network. This model works in real time and performs action recognition on extremely low-resolution videos, exploiting three sources of information: RGB images, optical flow and slack mask data. Experiments on two datasets show that our architecture improves recognition accuracy compared to the two-stream approach and ensures real-time execution on an Edge TPU (Tensor Processing Unit).
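To make the three-stream idea in the abstract concrete, the following is a minimal sketch of a multi-stream network that fuses RGB, optical-flow and mask inputs at very low resolution. It is not the authors' implementation: the Keras/TensorFlow framework, the branch depth, the late-fusion-by-concatenation strategy, the 16x16 input resolution and the number of classes are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the published architecture):
# three lightweight CNN branches for RGB frames, stacked optical flow and a
# mask channel, fused by concatenation before a small classification head.
import tensorflow as tf
from tensorflow.keras import layers, Model


def branch(input_shape, name):
    """A small convolutional branch suited to tiny spatial resolutions."""
    inp = layers.Input(shape=input_shape, name=f"{name}_input")
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x


def build_multi_stream_sketch(num_classes=10, res=16, flow_stack=10):
    # Assumed inputs: one RGB frame, a stack of x/y optical-flow fields,
    # and a single-channel mask, all at the same low resolution.
    rgb_in, rgb_feat = branch((res, res, 3), "rgb")
    flow_in, flow_feat = branch((res, res, 2 * flow_stack), "flow")
    mask_in, mask_feat = branch((res, res, 1), "mask")
    fused = layers.Concatenate()([rgb_feat, flow_feat, mask_feat])  # late fusion
    fused = layers.Dense(128, activation="relu")(fused)
    out = layers.Dense(num_classes, activation="softmax")(fused)
    return Model([rgb_in, flow_in, mask_in], out, name="multi_streams_sketch")


if __name__ == "__main__":
    model = build_multi_stream_sketch()
    model.summary()
```

A compact design like this, once trained, could in principle be converted with the TensorFlow Lite tooling for deployment on an Edge TPU; the conversion and quantization settings are likewise outside the scope of this sketch.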
| File | Size | Format |
|---|---|---|
| Russo_Learning_2021.pdf (open access) — Type: publisher's version (published with the publisher's layout); License: Creative Commons | 3.01 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.