The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.
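As background for the arithmetic the abstract refers to, the following is a minimal sketch of how residue number system (RNS) arithmetic works in general: an integer is represented by its residues modulo a set of pairwise-coprime moduli, and multiplication and addition are carried out independently on each small residue channel, which is what makes LUT-based implementation on an FPGA attractive. The moduli (7, 11, 13) and all function names below are illustrative assumptions for this sketch, not the values used in the paper.

# Sketch of residue number system (RNS) arithmetic (Python 3.8+).
# Moduli are hypothetical; the paper's actual moduli are not stated in the abstract.
from math import prod

MODULI = (7, 11, 13)          # pairwise-coprime moduli; dynamic range M = 7*11*13 = 1001
M = prod(MODULI)

def to_rns(x):
    # Represent integer x by its residues modulo each modulus.
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Multiplication is done independently per residue channel (small tables on an FPGA).
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_add(a, b):
    # Addition works the same way, channel by channel.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Recover the integer (mod M) with the Chinese remainder theorem.
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
    return x % M

# Example: 23 * 17 + 5 = 396, computed entirely on residues.
acc = rns_add(rns_mul(to_rns(23), to_rns(17)), to_rns(5))
assert from_rns(acc) == 396

Reconstruction with the Chinese remainder theorem is only needed when leaving the RNS domain; intermediate multiply-accumulate results can stay in residue form, with each channel small enough to fit in lookup tables.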
Taito MANABE
Nagasaki University
Yuichiro SHIBATA
Nagasaki University
Kiyoshi OGURI
Nagasaki University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Taito MANABE, Yuichiro SHIBATA, Kiyoshi OGURI, "FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN" in IEICE TRANSACTIONS on Fundamentals,
vol. E101-A, no. 12, pp. 2280-2289, December 2018, doi: 10.1587/transfun.E101.A.2280.
Abstract: The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.2280/_p
@ARTICLE{e101-a_12_2280,
author={Taito MANABE and Yuichiro SHIBATA and Kiyoshi OGURI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN},
year={2018},
volume={E101-A},
number={12},
pages={2280-2289},
abstract={The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.},
keywords={},
doi={10.1587/transfun.E101.A.2280},
ISSN={1745-1337},
month={December},}
TY - JOUR
TI - FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 2280
EP - 2289
AU - Taito MANABE
AU - Yuichiro SHIBATA
AU - Kiyoshi OGURI
PY - 2018
DO - 10.1587/transfun.E101.A.2280
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E101-A
IS - 12
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - December 2018
AB - The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.
ER -