The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals may be rendered as "XNUMX".
Full-text views: 297
Savong BOU
University of Tsukuba
Toshiyuki AMAGASA
University of Tsukuba
Hiroyuki KITAGAWA
University of Tsukuba
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Savong BOU, Toshiyuki AMAGASA, Hiroyuki KITAGAWA, "Finformer: Fast Incremental and General Time Series Data Prediction" in IEICE TRANSACTIONS on Information,
vol. E107-D, no. 5, pp. 625-637, May 2024, doi: 10.1587/transinf.2023DAP0003.
Abstract: Forecasting time-series data is useful in many fields, such as stock price predicting system, autonomous driving system, weather forecast, etc. Many existing forecasting models tend to work well when forecasting short-sequence time series. However, when working with long sequence time series, the performance suffers significantly. Recently, there has been more intense research in this direction, and Informer is currently the most efficient predicting model. Informer’s main drawback is that it does not allow for incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses the above bottleneck by reducing the training/predicting time of Informer. Finformer can efficiently compute the positional/temporal/value embedding and Query/Key/Value of the self-attention incrementally. Theoretically, Finformer can improve the speed of both training and predicting over the state-of-the-art model Informer. Extensive experiments show that Finformer is about 26% faster than Informer for both short and long sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d, which is one of our previous works and is the predecessor of Finformer.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2023DAP0003/_f
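The abstract notes that Finformer (like its predecessor InTrans) speeds up Informer by computing the Conv1d-based value embedding incrementally instead of from scratch for each sliding window. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: when the input window advances by one timestep, every convolution output except the newest one is a shifted copy of the previous result, so only a single new dot product is needed.

```python
import numpy as np

def conv1d(x, w):
    # Valid 1-D convolution (cross-correlation) with kernel length k.
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

rng = np.random.default_rng(0)
w = rng.standard_normal(3)        # kernel, k = 3 (illustrative size)
x = rng.standard_normal(10)       # current input window
full = conv1d(x, w)               # embedding for the current window

# The window slides by one step: drop the oldest value, append a new one.
new_val = rng.standard_normal()
x_next = np.append(x[1:], new_val)

# Naive approach: recompute the whole convolution.
naive = conv1d(x_next, w)

# Incremental approach: reuse all previous outputs except the first
# (they shift left by one position) and compute only the newest output.
incr = np.append(full[1:], np.dot(x_next[-len(w):], w))

assert np.allclose(naive, incr)
```

This reuse is what turns the per-step cost of the embedding from O(window length x kernel size) into O(kernel size); the paper applies the same incremental principle to the positional/temporal embeddings and the Query/Key/Value projections as well.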
@ARTICLE{e107-d_5_625,
author={Savong BOU and Toshiyuki AMAGASA and Hiroyuki KITAGAWA},
journal={IEICE TRANSACTIONS on Information},
title={Finformer: Fast Incremental and General Time Series Data Prediction},
year={2024},
volume={E107-D},
number={5},
pages={625-637},
abstract={Forecasting time-series data is useful in many fields, such as stock price predicting system, autonomous driving system, weather forecast, etc. Many existing forecasting models tend to work well when forecasting short-sequence time series. However, when working with long sequence time series, the performance suffers significantly. Recently, there has been more intense research in this direction, and Informer is currently the most efficient predicting model. Informer’s main drawback is that it does not allow for incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses the above bottleneck by reducing the training/predicting time of Informer. Finformer can efficiently compute the positional/temporal/value embedding and Query/Key/Value of the self-attention incrementally. Theoretically, Finformer can improve the speed of both training and predicting over the state-of-the-art model Informer. Extensive experiments show that Finformer is about 26% faster than Informer for both short and long sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d, which is one of our previous works and is the predecessor of Finformer.},
keywords={},
doi={10.1587/transinf.2023DAP0003},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Finformer: Fast Incremental and General Time Series Data Prediction
T2 - IEICE TRANSACTIONS on Information
SP - 625
EP - 637
AU - Savong BOU
AU - Toshiyuki AMAGASA
AU - Hiroyuki KITAGAWA
PY - 2024
DO - 10.1587/transinf.2023DAP0003
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E107-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2024
AB - Forecasting time-series data is useful in many fields, such as stock price predicting system, autonomous driving system, weather forecast, etc. Many existing forecasting models tend to work well when forecasting short-sequence time series. However, when working with long sequence time series, the performance suffers significantly. Recently, there has been more intense research in this direction, and Informer is currently the most efficient predicting model. Informer’s main drawback is that it does not allow for incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses the above bottleneck by reducing the training/predicting time of Informer. Finformer can efficiently compute the positional/temporal/value embedding and Query/Key/Value of the self-attention incrementally. Theoretically, Finformer can improve the speed of both training and predicting over the state-of-the-art model Informer. Extensive experiments show that Finformer is about 26% faster than Informer for both short and long sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d, which is one of our previous works and is the predecessor of Finformer.
ER -