In recent years, face super-resolution (SR) based on deep neural networks has shown strong performance compared with traditional face SR algorithms. Among these methods, attention mechanisms have been widely used in face SR because of their strong feature-expression ability. However, existing attention-based face SR methods cannot fully mine the missing pixel information of low-resolution (LR) face images (the structural prior), and they consider only a single attention mechanism to exploit the structure of the face. Using multiple attention mechanisms could help enhance feature representation. To solve this problem, we first propose a new pixel attention mechanism that can recover the structural details of lost pixels. Then, we design an attention fusion module to better integrate the different characteristics of triple attention. Experimental results on the FFHQ dataset show that this method is superior to existing face SR methods based on deep neural networks.
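The abstract outlines the core architectural idea: a per-pixel attention branch alongside other attention branches, combined by a dedicated fusion module. As a rough illustration only, the PyTorch sketch below shows one plausible way such a triple-attention fusion block could be wired. The module names, the choice of channel and spatial attention as the other two branches, the layer sizes, and the concatenation-plus-1x1-convolution fusion are all assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of pixel attention plus a triple-attention fusion step.
# Branch designs, layer sizes, and the fusion strategy are illustrative
# assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class PixelAttention(nn.Module):
    """Per-pixel gating: a 1x1 conv + sigmoid yields an element-wise mask."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return x * self.sigmoid(self.conv(x))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gating."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """Spatial gating computed from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class TripleAttentionFusion(nn.Module):
    """Runs three attention branches and fuses them with a 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        self.pa = PixelAttention(channels)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        branches = [self.pa(x), self.ca(x), self.sa(x)]
        return self.fuse(torch.cat(branches, dim=1))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)      # e.g. features of a 32x32 LR face
    out = TripleAttentionFusion(64)(feats)
    print(out.shape)                        # torch.Size([1, 64, 32, 32])
```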
Kanghui ZHAO
Wuhan Institute of Technology
Tao LU
Wuhan Institute of Technology
Yanduo ZHANG
Wuhan Institute of Technology
Yu WANG
Wuhan Institute of Technology
Yuanzhi WANG
Wuhan Institute of Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kanghui ZHAO, Tao LU, Yanduo ZHANG, Yu WANG, Yuanzhi WANG, "Face Super-Resolution via Triple-Attention Feature Fusion Network" in IEICE TRANSACTIONS on Fundamentals,
vol. E105-A, no. 4, pp. 748-752, April 2022, doi: 10.1587/transfun.2021EAL2056.
Abstract: In recent years, face super-resolution (SR) based on deep neural networks has shown strong performance compared with traditional face SR algorithms. Among these methods, attention mechanisms have been widely used in face SR because of their strong feature-expression ability. However, existing attention-based face SR methods cannot fully mine the missing pixel information of low-resolution (LR) face images (the structural prior), and they consider only a single attention mechanism to exploit the structure of the face. Using multiple attention mechanisms could help enhance feature representation. To solve this problem, we first propose a new pixel attention mechanism that can recover the structural details of lost pixels. Then, we design an attention fusion module to better integrate the different characteristics of triple attention. Experimental results on the FFHQ dataset show that this method is superior to existing face SR methods based on deep neural networks.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021EAL2056/_p
@ARTICLE{e105-a_4_748,
author={Kanghui ZHAO and Tao LU and Yanduo ZHANG and Yu WANG and Yuanzhi WANG},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Face Super-Resolution via Triple-Attention Feature Fusion Network},
year={2022},
volume={E105-A},
number={4},
pages={748-752},
abstract={In recent years, face super-resolution (SR) based on deep neural networks has shown strong performance compared with traditional face SR algorithms. Among these methods, attention mechanisms have been widely used in face SR because of their strong feature-expression ability. However, existing attention-based face SR methods cannot fully mine the missing pixel information of low-resolution (LR) face images (the structural prior), and they consider only a single attention mechanism to exploit the structure of the face. Using multiple attention mechanisms could help enhance feature representation. To solve this problem, we first propose a new pixel attention mechanism that can recover the structural details of lost pixels. Then, we design an attention fusion module to better integrate the different characteristics of triple attention. Experimental results on the FFHQ dataset show that this method is superior to existing face SR methods based on deep neural networks.},
keywords={},
doi={10.1587/transfun.2021EAL2056},
ISSN={1745-1337},
month={April},}
TY - JOUR
TI - Face Super-Resolution via Triple-Attention Feature Fusion Network
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 748
EP - 752
AU - Kanghui ZHAO
AU - Tao LU
AU - Yanduo ZHANG
AU - Yu WANG
AU - Yuanzhi WANG
PY - 2022
DO - 10.1587/transfun.2021EAL2056
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E105-A
IS - 4
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - April 2022
AB - In recent years, face super-resolution (SR) based on deep neural networks has shown strong performance compared with traditional face SR algorithms. Among these methods, attention mechanisms have been widely used in face SR because of their strong feature-expression ability. However, existing attention-based face SR methods cannot fully mine the missing pixel information of low-resolution (LR) face images (the structural prior), and they consider only a single attention mechanism to exploit the structure of the face. Using multiple attention mechanisms could help enhance feature representation. To solve this problem, we first propose a new pixel attention mechanism that can recover the structural details of lost pixels. Then, we design an attention fusion module to better integrate the different characteristics of triple attention. Experimental results on the FFHQ dataset show that this method is superior to existing face SR methods based on deep neural networks.
ER -