The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals are rendered as "XNUMX").
Saliency detection for videos has received great attention and been extensively studied in recent years. However, various visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms the existing state of the art.
Yu CHEN
Wuhan University
Jing XIAO
Wuhan University
Liuyi HU
Wuhan University
Dan CHEN
Wuhan University
Zhongyuan WANG
Wuhan University
Dengshi LI
Jianghan University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yu CHEN, Jing XIAO, Liuyi HU, Dan CHEN, Zhongyuan WANG, Dengshi LI, "Video Saliency Detection Using Spatiotemporal Cues" in IEICE TRANSACTIONS on Information,
vol. E101-D, no. 9, pp. 2201-2208, September 2018, doi: 10.1587/transinf.2017PCP0011.
Abstract: Saliency detection for videos has received great attention and been extensively studied in recent years. However, various visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms the existing state of the art.
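The paper's own equations are not reproduced on this page, so the following is only a rough sketch of the temporal-consistency idea the abstract describes: each pixel's temporal saliency is blended with a prediction looked up from the previous frame via backward matching. The function names, the dense displacement-field representation, and the blending weight `alpha` are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def backward_match_prediction(prev_saliency, flow):
    """Predict the current frame's temporal saliency from the previous one:
    each pixel (y, x) of the current frame looks back along a per-pixel
    backward displacement field `flow` (shape H x W x 2, dy/dx) to its
    source pixel in the previous frame's saliency map."""
    h, w = prev_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    return prev_saliency[src_y, src_x]

def adjust_temporal_saliency(curr_saliency, prev_saliency, flow, alpha=0.5):
    """Blend the current temporal saliency with its backward-matched
    prediction, enforcing consistency along the time axis. `alpha` is an
    illustrative weight, not a value from the paper."""
    pred = backward_match_prediction(prev_saliency, flow)
    return alpha * curr_saliency + (1 - alpha) * pred
```

With zero displacement, the adjusted map is simply a per-pixel average of the current map and the previous one, which is the degenerate case of the consistency constraint for a static scene.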
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2017PCP0011/_p
@ARTICLE{e101-d_9_2201,
author={Yu CHEN and Jing XIAO and Liuyi HU and Dan CHEN and Zhongyuan WANG and Dengshi LI},
journal={IEICE TRANSACTIONS on Information},
title={Video Saliency Detection Using Spatiotemporal Cues},
year={2018},
volume={E101-D},
number={9},
pages={2201-2208},
abstract={Saliency detection for videos has received great attention and been extensively studied in recent years. However, various visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms the existing state of the art.},
keywords={},
doi={10.1587/transinf.2017PCP0011},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Video Saliency Detection Using Spatiotemporal Cues
T2 - IEICE TRANSACTIONS on Information
SP - 2201
EP - 2208
AU - Yu CHEN
AU - Jing XIAO
AU - Liuyi HU
AU - Dan CHEN
AU - Zhongyuan WANG
AU - Dengshi LI
PY - 2018
DO - 10.1587/transinf.2017PCP0011
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E101-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2018
AB - Saliency detection for videos has received great attention and been extensively studied in recent years. However, various visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms the existing state of the art.
ER -