The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals may appear as "XNUMX".
Image quality assessment (IQA) is an inherent problem in the field of image processing. Recently, deep learning-based image quality assessment has attracted increased attention, owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks including two strategies. First, we propose a distortion clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and CNN prediction error, we utilize a pre-saliency map to skip the non-salient patches for IQA acceleration. Experimental results verify that our method can achieve high accuracy (0.978) with subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is highly computationally appealing, achieving flexible complexity performance by assigning different thresholds in the saliency map.
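The acceleration idea in the abstract, thresholding a pre-computed saliency map so that only salient patches are fed to the CNN, can be illustrated with a minimal sketch. The patch size, threshold value, and grid layout below are assumptions for illustration; the paper's actual saliency model and patch selection details are not given here.

```python
import numpy as np

def select_salient_patches(saliency_map, patch_size=32, threshold=0.5):
    """Return top-left coordinates of patches whose mean saliency
    meets the threshold; the remaining patches would be skipped,
    trading accuracy for speed as the threshold rises."""
    h, w = saliency_map.shape
    coords = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = saliency_map[y:y + patch_size, x:x + patch_size]
            if patch.mean() >= threshold:
                coords.append((y, x))
    return coords

# Toy saliency map: left half salient, right half not.
sal = np.zeros((64, 64))
sal[:, :32] = 1.0
print(select_salient_patches(sal))  # [(0, 0), (32, 0)]
```

Raising the threshold skips more patches and lowers the per-image cost, which matches the "flexible complexity performance" the abstract claims.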
Zhengxue CHENG
Waseda University
Masaru TAKEUCHI
Waseda University
Kenji KANAI
Waseda University
Jiro KATTO
Waseda University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Zhengxue CHENG, Masaru TAKEUCHI, Kenji KANAI, Jiro KATTO, "A Fully-Blind and Fast Image Quality Predictor with Convolutional Neural Networks" in IEICE TRANSACTIONS on Fundamentals,
vol. E101-A, no. 9, pp. 1557-1566, September 2018, doi: 10.1587/transfun.E101.A.1557.
Abstract: Image quality assessment (IQA) is an inherent problem in the field of image processing. Recently, deep learning-based image quality assessment has attracted increased attention, owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks including two strategies. First, we propose a distortion clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and CNN prediction error, we utilize a pre-saliency map to skip the non-salient patches for IQA acceleration. Experimental results verify that our method can achieve the high accuracy (0.978) with subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is highly computationally appealing, achieving flexible complexity performance by assigning different thresholds in the saliency map.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.1557/_p
@ARTICLE{e101-a_9_1557,
author={Zhengxue CHENG and Masaru TAKEUCHI and Kenji KANAI and Jiro KATTO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Fully-Blind and Fast Image Quality Predictor with Convolutional Neural Networks},
year={2018},
volume={E101-A},
number={9},
pages={1557-1566},
abstract={Image quality assessment (IQA) is an inherent problem in the field of image processing. Recently, deep learning-based image quality assessment has attracted increased attention, owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks including two strategies. First, we propose a distortion clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and CNN prediction error, we utilize a pre-saliency map to skip the non-salient patches for IQA acceleration. Experimental results verify that our method can achieve the high accuracy (0.978) with subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is highly computationally appealing, achieving flexible complexity performance by assigning different thresholds in the saliency map.},
keywords={},
doi={10.1587/transfun.E101.A.1557},
ISSN={1745-1337},
month={September},}
TY - JOUR
TI - A Fully-Blind and Fast Image Quality Predictor with Convolutional Neural Networks
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1557
EP - 1566
AU - Zhengxue CHENG
AU - Masaru TAKEUCHI
AU - Kenji KANAI
AU - Jiro KATTO
PY - 2018
DO - 10.1587/transfun.E101.A.1557
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E101-A
IS - 9
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2018/09
AB - Image quality assessment (IQA) is an inherent problem in the field of image processing. Recently, deep learning-based image quality assessment has attracted increased attention, owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks including two strategies. First, we propose a distortion clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and CNN prediction error, we utilize a pre-saliency map to skip the non-salient patches for IQA acceleration. Experimental results verify that our method can achieve the high accuracy (0.978) with subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is highly computationally appealing, achieving flexible complexity performance by assigning different thresholds in the saliency map.
ER -