As characterizing videos simultaneously from spatial and temporal cues has been shown to be crucial for video processing, and since soft assignment lacks temporal information, the vector of locally aggregated descriptors (VLAD) should be considered a suboptimal framework for learning spatio-temporal video representations. With the development of attention mechanisms in natural language processing, in this work we present a novel model that couples VLAD with spatio-temporal self-attention operations, named spatio-temporal self-attention weighted VLAD (ST-SAWVLAD). In particular, sequential convolutional feature maps extracted from two modalities, i.e., RGB and Flow, are respectively fed into the self-attention module to learn soft spatio-temporal assignment parameters, which enables aggregating not only detailed spatial information but also fine motion information from successive video frames. In experiments, we evaluate ST-SAWVLAD on the competitive action recognition datasets UCF101 and HMDB51; the results show outstanding performance. The source code is available at: https://github.com/badstones/st-sawvlad.
Shilei CHENG
University of Electronic Science and Technology of China
Mei XIE
University of Electronic Science and Technology of China
Zheng MA
University of Electronic Science and Technology of China
Siqi LI
University of Electronic Science and Technology of China
Song GU
Chengdu Aeronautic Polytechnic
Feng YANG
University of Electronic Science and Technology of China
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Shilei CHENG, Mei XIE, Zheng MA, Siqi LI, Song GU, Feng YANG, "Spatio-Temporal Self-Attention Weighted VLAD Neural Network for Action Recognition" in IEICE TRANSACTIONS on Information and Systems,
vol. E104-D, no. 1, pp. 220-224, January 2021, doi: 10.1587/transinf.2020EDL0002.
Abstract: As characterizing videos simultaneously from spatial and temporal cues has been shown to be crucial for video processing, and since soft assignment lacks temporal information, the vector of locally aggregated descriptors (VLAD) should be considered a suboptimal framework for learning spatio-temporal video representations. With the development of attention mechanisms in natural language processing, in this work we present a novel model that couples VLAD with spatio-temporal self-attention operations, named spatio-temporal self-attention weighted VLAD (ST-SAWVLAD). In particular, sequential convolutional feature maps extracted from two modalities, i.e., RGB and Flow, are respectively fed into the self-attention module to learn soft spatio-temporal assignment parameters, which enables aggregating not only detailed spatial information but also fine motion information from successive video frames. In experiments, we evaluate ST-SAWVLAD on the competitive action recognition datasets UCF101 and HMDB51; the results show outstanding performance. The source code is available at: https://github.com/badstones/st-sawvlad.
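The abstract outlines the core mechanism: self-attention over sequential convolutional feature maps produces the soft assignments that weight VLAD aggregation. The following is a minimal PyTorch sketch of that idea, assuming NetVLAD-style learnable centroids and a standard multi-head self-attention block; the module names, dimensions, and the exact coupling between attention and assignments are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Sketch of self-attention weighted VLAD aggregation (assumed structure,
# not the authors' reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class STSelfAttentionVLAD(nn.Module):
    def __init__(self, dim=512, num_clusters=64, heads=4):
        super().__init__()
        # Learnable VLAD codebook and assignment projection (NetVLAD-style).
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Conv1d(dim, num_clusters, kernel_size=1)
        # Self-attention over flattened spatio-temporal positions, intended
        # to inject temporal context into the soft assignments.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, T, H, W, C) sequential convolutional feature maps.
        B, T, H, W, C = x.shape
        x = x.reshape(B, T * H * W, C)             # flatten space-time to N positions
        x, _ = self.attn(x, x, x)                  # spatio-temporal self-attention
        a = F.softmax(self.assign(x.transpose(1, 2)), dim=1)       # (B, K, N) soft assignments
        # Residuals to each centroid, weighted by attention-informed assignments.
        resid = x.unsqueeze(1) - self.centroids[None, :, None, :]  # (B, K, N, C)
        vlad = (a.unsqueeze(-1) * resid).sum(dim=2)                # (B, K, C)
        vlad = F.normalize(vlad, dim=-1).flatten(1)                # intra-normalize, flatten
        return F.normalize(vlad, dim=-1)                           # final L2 normalization

In the two-stream setting described in the abstract, one such module would presumably be applied to the RGB feature maps and another to the Flow feature maps, with the resulting VLAD vectors fused for classification.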
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2020EDL0002/_p
@ARTICLE{e104-d_1_220,
author={Shilei CHENG and Mei XIE and Zheng MA and Siqi LI and Song GU and Feng YANG},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Spatio-Temporal Self-Attention Weighted VLAD Neural Network for Action Recognition},
year={2021},
volume={E104-D},
number={1},
pages={220-224},
abstract={As characterizing videos simultaneously from spatial and temporal cues has been shown to be crucial for video processing, and since soft assignment lacks temporal information, the vector of locally aggregated descriptors (VLAD) should be considered a suboptimal framework for learning spatio-temporal video representations. With the development of attention mechanisms in natural language processing, in this work we present a novel model that couples VLAD with spatio-temporal self-attention operations, named spatio-temporal self-attention weighted VLAD (ST-SAWVLAD). In particular, sequential convolutional feature maps extracted from two modalities, i.e., RGB and Flow, are respectively fed into the self-attention module to learn soft spatio-temporal assignment parameters, which enables aggregating not only detailed spatial information but also fine motion information from successive video frames. In experiments, we evaluate ST-SAWVLAD on the competitive action recognition datasets UCF101 and HMDB51; the results show outstanding performance. The source code is available at: https://github.com/badstones/st-sawvlad.},
keywords={},
doi={10.1587/transinf.2020EDL0002},
ISSN={1745-1361},
month={January},
}
TY - JOUR
TI - Spatio-Temporal Self-Attention Weighted VLAD Neural Network for Action Recognition
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 220
EP - 224
AU - Shilei CHENG
AU - Mei XIE
AU - Zheng MA
AU - Siqi LI
AU - Song GU
AU - Feng YANG
PY - 2021
DO - 10.1587/transinf.2020EDL0002
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E104-D
IS - 1
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - January 2021
AB - As characterizing videos simultaneously from spatial and temporal cues has been shown to be crucial for video processing, and since soft assignment lacks temporal information, the vector of locally aggregated descriptors (VLAD) should be considered a suboptimal framework for learning spatio-temporal video representations. With the development of attention mechanisms in natural language processing, in this work we present a novel model that couples VLAD with spatio-temporal self-attention operations, named spatio-temporal self-attention weighted VLAD (ST-SAWVLAD). In particular, sequential convolutional feature maps extracted from two modalities, i.e., RGB and Flow, are respectively fed into the self-attention module to learn soft spatio-temporal assignment parameters, which enables aggregating not only detailed spatial information but also fine motion information from successive video frames. In experiments, we evaluate ST-SAWVLAD on the competitive action recognition datasets UCF101 and HMDB51; the results show outstanding performance. The source code is available at: https://github.com/badstones/st-sawvlad.
ER -