The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals may be rendered as "XNUMX".
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Antoine VALEMBOIS, Marc FOSSORIER, "A Comparison between "Most-Reliable-Basis Reprocessing" Strategies," IEICE TRANSACTIONS on Fundamentals, vol. E85-A, no. 7, pp. 1727-1741, July 2002.
Abstract: In this semi-tutorial paper, the reliability-based decoding approaches using the reprocessing of the most reliable information set are investigated. This paper homogenizes and compares different former studies, hopefully improving the overall transparency and completing each one with tricks provided by the others. A couple of sensible improvements are also suggested. However, the main goal remains to integrate and compare recent works based on a similar general approach, which have unfortunately been performed in parallel without much effort of comparison up to now. Their respective (dis)advantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, while some works to which we refer were developed for maximum likelihood decoding (MLD). No quantitative error performance analysis is provided, although we are in a position to benefit from some qualitative considerations and to compare different strategies in terms of higher or lower expected error performance for the same complexity. With simulations, however, it turns out that all the considered approaches perform very close to each other, which was not especially obvious at first sight. The simplest strategy also proves the fastest in terms of CPU time, but we indicate ways to implement the other ones so that they get very close to each other from this point of view as well. On top of relying on the same intuitive principle, the studied algorithms are thus also unified from the point of view of their error performance and computational cost.
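The common principle behind the strategies the abstract refers to can be illustrated with a minimal order-1 most-reliable-basis (MRB) reprocessing sketch. This is not any of the authors' implementations (which add reordering tricks, skipping rules, and higher reprocessing orders); the function names and the (7,4) Hamming code example are ours, and BPSK modulation (bit 0 → +1, bit 1 → −1) is assumed.

```python
# Illustrative sketch of order-1 most-reliable-basis (MRB) reprocessing
# for a small binary linear code. Assumes BPSK: bit 0 -> +1, bit 1 -> -1.

def gauss_systematize(G, order):
    """Row-reduce G (list of 0/1 rows) pivoting on columns taken in the
    given reliability order; return (G', basis), where basis lists the
    k most reliable linearly independent columns (the MRB)."""
    G = [row[:] for row in G]
    k = len(G)
    basis, r = [], 0
    for col in order:
        if r == k:
            break
        # find a pivot row at or below r with a 1 in this column
        piv = next((i for i in range(r, k) if G[i][col]), None)
        if piv is None:
            continue  # column depends on earlier basis columns; skip it
        G[r], G[piv] = G[piv], G[r]
        for i in range(k):
            if i != r and G[i][col]:
                G[i] = [a ^ b for a, b in zip(G[i], G[r])]
        basis.append(col)
        r += 1
    return G, basis

def mrb_order1_decode(G, y):
    """Order-1 reprocessing: hard-decide the MRB bits, then try every
    single-bit flip in the MRB and keep the codeword closest to y."""
    n = len(y)
    order = sorted(range(n), key=lambda i: -abs(y[i]))  # most reliable first
    Gs, basis = gauss_systematize(G, order)
    hard = [1 if y[j] < 0 else 0 for j in basis]  # hard decisions on the MRB

    def encode(info):
        cw = [0] * n
        for bit, row in zip(info, Gs):
            if bit:
                cw = [a ^ b for a, b in zip(cw, row)]
        return cw

    def dist(cw):  # squared Euclidean distance to the received vector
        return sum((y[i] - (1 - 2 * cw[i])) ** 2 for i in range(n))

    best = encode(hard)           # order-0 candidate
    best_d = dist(best)
    for j in range(len(basis)):   # order-1: flip each MRB bit in turn
        info = hard[:]
        info[j] ^= 1
        cw = encode(info)
        if dist(cw) < best_d:
            best, best_d = cw, dist(cw)
    return best
```

Because the least reliable positions are excluded from the basis, a single weak-position error is already corrected by the order-0 candidate; the order-1 flips cover single hard-decision errors inside the MRB itself, which is the intuition all the compared strategies share.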
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e85-a_7_1727/_p
@ARTICLE{e85-a_7_1727,
author={Antoine VALEMBOIS and Marc FOSSORIER},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Comparison between "Most-Reliable-Basis Reprocessing" Strategies},
year={2002},
volume={E85-A},
number={7},
pages={1727-1741},
abstract={In this semi-tutorial paper, the reliability-based decoding approaches using the reprocessing of the most reliable information set are investigated. This paper homogenizes and compares different former studies, hopefully improving the overall transparency and completing each one with tricks provided by the others. A couple of sensible improvements are also suggested. However, the main goal remains to integrate and compare recent works based on a similar general approach, which have unfortunately been performed in parallel without much effort of comparison up to now. Their respective (dis)advantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, while some works to which we refer were developed for maximum likelihood decoding (MLD). No quantitative error performance analysis is provided, although we are in a position to benefit from some qualitative considerations and to compare different strategies in terms of higher or lower expected error performance for the same complexity. With simulations, however, it turns out that all the considered approaches perform very close to each other, which was not especially obvious at first sight. The simplest strategy also proves the fastest in terms of CPU time, but we indicate ways to implement the other ones so that they get very close to each other from this point of view as well. On top of relying on the same intuitive principle, the studied algorithms are thus also unified from the point of view of their error performance and computational cost.},
keywords={},
doi={},
ISSN={},
month={July},}
TY - JOUR
TI - A Comparison between "Most-Reliable-Basis Reprocessing" Strategies
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1727
EP - 1741
AU - Antoine VALEMBOIS
AU - Marc FOSSORIER
PY - 2002
DO -
JO - IEICE TRANSACTIONS on Fundamentals
SN -
VL - E85-A
IS - 7
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - July 2002
AB - In this semi-tutorial paper, the reliability-based decoding approaches using the reprocessing of the most reliable information set are investigated. This paper homogenizes and compares different former studies, hopefully improving the overall transparency and completing each one with tricks provided by the others. A couple of sensible improvements are also suggested. However, the main goal remains to integrate and compare recent works based on a similar general approach, which have unfortunately been performed in parallel without much effort of comparison up to now. Their respective (dis)advantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, while some works to which we refer were developed for maximum likelihood decoding (MLD). No quantitative error performance analysis is provided, although we are in a position to benefit from some qualitative considerations and to compare different strategies in terms of higher or lower expected error performance for the same complexity. With simulations, however, it turns out that all the considered approaches perform very close to each other, which was not especially obvious at first sight. The simplest strategy also proves the fastest in terms of CPU time, but we indicate ways to implement the other ones so that they get very close to each other from this point of view as well. On top of relying on the same intuitive principle, the studied algorithms are thus also unified from the point of view of their error performance and computational cost.
ER -