In video search reranking, in addition to the well-known semantic gap, the intent gap, which is the gap between the representation of the users' demand and the real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation by multiple tags. In the proposed method, we use relevance feedback, which the user can interact with by specifying some example videos from the initial search results. We apply the relevance feedback to reduce the gap between the real intent of the users and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing multiple tags associated with videos on the basis of the Word2Vec algorithm and calculating the centroid of the tag vector as a collective representation, we can evaluate the semantic similarity between videos by using tag features. We conduct experiments on the YouTube-8M dataset, and the results show that our reranking approach is effective and efficient.
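The abstract describes computing a tag-based semantic similarity by embedding each tag with Word2Vec and averaging a video's tag vectors into a centroid. The following is a minimal sketch of that idea, not the authors' implementation: it assumes gensim's Word2Vec, toy tag lists, and placeholder training parameters, and compares two videos by the cosine similarity of their tag centroids.

# Minimal sketch (illustrative assumptions, not the paper's code): embed tags
# with Word2Vec, average each video's tag vectors into a centroid, and compare
# videos by cosine similarity of the centroids.
import numpy as np
from gensim.models import Word2Vec

# Toy tag lists, one list per video; a real system would use the tag
# vocabulary of the whole collection (e.g., YouTube-8M).
tag_lists = [
    ["soccer", "goal", "stadium"],
    ["football", "match", "stadium"],
    ["piano", "concert", "music"],
]

# Train a small Word2Vec model on the tag lists (parameters are placeholders).
model = Word2Vec(sentences=tag_lists, vector_size=50, window=5, min_count=1, seed=0)

def tag_centroid(tags, model):
    """Return the centroid (mean) of the Word2Vec vectors of the given tags."""
    vecs = [model.wv[t] for t in tags if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

# Tag-based semantic similarity between the first two videos.
c0 = tag_centroid(tag_lists[0], model)
c1 = tag_centroid(tag_lists[1], model)
print("tag-based similarity:", cosine_similarity(c0, c1))

In the paper's setting, such a tag-based similarity would be combined with visual similarity and with the relevance feedback provided by the user-selected example videos; the sketch above covers only the tag-centroid component.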
Takamasa FUJII
Kansai University
Soh YOSHIDA
Kansai University
Mitsuji MUNEYASU
Kansai University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Takamasa FUJII, Soh YOSHIDA, Mitsuji MUNEYASU, "Video Search Reranking with Relevance Feedback Using Visual and Textual Similarities" in IEICE TRANSACTIONS on Fundamentals,
vol. E102-A, no. 12, pp. 1900-1909, December 2019, doi: 10.1587/transfun.E102.A.1900.
Abstract: In video search reranking, in addition to the well-known semantic gap, the intent gap, which is the gap between the representation of the users' demand and the real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation by multiple tags. In the proposed method, we use relevance feedback, which the user can interact with by specifying some example videos from the initial search results. We apply the relevance feedback to reduce the gap between the real intent of the users and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing multiple tags associated with videos on the basis of the Word2Vec algorithm and calculating the centroid of the tag vector as a collective representation, we can evaluate the semantic similarity between videos by using tag features. We conduct experiments on the YouTube-8M dataset, and the results show that our reranking approach is effective and efficient.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E102.A.1900/_p
@ARTICLE{e102-a_12_1900,
author={Takamasa FUJII and Soh YOSHIDA and Mitsuji MUNEYASU},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Video Search Reranking with Relevance Feedback Using Visual and Textual Similarities},
year={2019},
volume={E102-A},
number={12},
pages={1900-1909},
abstract={In video search reranking, in addition to the well-known semantic gap, the intent gap, which is the gap between the representation of the users' demand and the real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation by multiple tags. In the proposed method, we use relevance feedback, which the user can interact with by specifying some example videos from the initial search results. We apply the relevance feedback to reduce the gap between the real intent of the users and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing multiple tags associated with videos on the basis of the Word2Vec algorithm and calculating the centroid of the tag vector as a collective representation, we can evaluate the semantic similarity between videos by using tag features. We conduct experiments on the YouTube-8M dataset, and the results show that our reranking approach is effective and efficient.},
keywords={},
doi={10.1587/transfun.E102.A.1900},
ISSN={1745-1337},
month={December},}
TY - JOUR
TI - Video Search Reranking with Relevance Feedback Using Visual and Textual Similarities
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1900
EP - 1909
AU - Takamasa FUJII
AU - Soh YOSHIDA
AU - Mitsuji MUNEYASU
PY - 2019
DO - 10.1587/transfun.E102.A.1900
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E102-A
IS - 12
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - December 2019
AB - In video search reranking, in addition to the well-known semantic gap, the intent gap, which is the gap between the representation of the users' demand and the real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation by multiple tags. In the proposed method, we use relevance feedback, which the user can interact with by specifying some example videos from the initial search results. We apply the relevance feedback to reduce the gap between the real intent of the users and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing multiple tags associated with videos on the basis of the Word2Vec algorithm and calculating the centroid of the tag vector as a collective representation, we can evaluate the semantic similarity between videos by using tag features. We conduct experiments on the YouTube-8M dataset, and the results show that our reranking approach is effective and efficient.
ER -