Yoichi SASAKI
NEC Corporation
Yuzuru OKAJIMA
NEC Corporation
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yoichi SASAKI, Yuzuru OKAJIMA, "Alternative Ruleset Discovery to Support Black-Box Model Predictions" in IEICE TRANSACTIONS on Information and Systems, vol. E106-D, no. 6, pp. 1130-1141, June 2023, doi: 10.1587/transinf.2022EDP7176.
Abstract: The increasing attention to the interpretability of machine learning models has led to the development of methods to explain the behavior of black-box models in a post-hoc manner. However, such post-hoc approaches generate a new explanation for every new input, and these explanations cannot be checked by humans in advance. A method that selects decision rules from a finite ruleset as explanation for neural networks has been proposed, but it cannot be used for other models. In this paper, we propose a model-agnostic explanation method to find a pre-verifiable finite ruleset from which a decision rule is selected to support every prediction made by a given black-box model. First, we define an explanation model that selects the rule, from a ruleset, that gives the closest prediction; this rule works as an alternative explanation or supportive evidence for the prediction of a black-box model. The ruleset should have high coverage to give close predictions for future inputs, but it should also be small enough to be checkable by humans in advance. However, minimizing the ruleset while keeping high coverage leads to a computationally hard combinatorial problem. Hence, we show that this problem can be reduced to a weighted MaxSAT problem composed only of Horn clauses, which can be efficiently solved with modern solvers. Experimental results showed that our method found small rulesets such that the rules selected from them can achieve higher accuracy for structured data as compared to the existing method using rulesets of almost the same size. We also experimentally compared the proposed method with two purely rule-based models, CORELS and defragTrees. Furthermore, we examine rulesets constructed for real datasets and discuss the characteristics of the proposed method from different viewpoints including interpretability, limitation, and possible use cases.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7176/_p
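As a rough illustration of the explanation model described in the abstract, the Python sketch below selects, for a given input, the rule from a fixed ruleset whose prediction is closest to the black-box prediction; that rule then serves as supportive evidence. The `Rule` representation, the absolute-difference closeness measure, and all names are assumptions for illustration only and do not reproduce the paper's formal definitions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Rule:
    """IF all conditions hold THEN predict `prediction` (illustrative only)."""
    conditions: Dict[str, Callable[[float], bool]]  # feature name -> predicate
    prediction: float

    def applies(self, x: Dict[str, float]) -> bool:
        return all(pred(x[name]) for name, pred in self.conditions.items())

def explain(x: Dict[str, float], black_box_pred: float,
            ruleset: List[Rule]) -> Optional[Rule]:
    """Return the applicable rule whose prediction is closest to the
    black-box output; None if the input is not covered by the ruleset."""
    applicable = [r for r in ruleset if r.applies(x)]
    if not applicable:
        return None
    return min(applicable, key=lambda r: abs(r.prediction - black_box_pred))

# Toy usage: the first rule fires and is returned as supportive evidence.
ruleset = [
    Rule({"age": lambda v: v >= 40, "income": lambda v: v > 50_000}, prediction=1.0),
    Rule({"age": lambda v: v < 40}, prediction=0.0),
]
rule = explain({"age": 52, "income": 64_000}, black_box_pred=0.87, ruleset=ruleset)
print(rule.prediction if rule else "uncovered")  # -> 1.0
```

The abstract also mentions reducing ruleset minimization under a coverage requirement to a weighted MaxSAT problem over Horn clauses. The toy encoding below only shows how such a size-versus-coverage trade-off can be posed as weighted partial MaxSAT, using the PySAT RC2 solver (assumed installed); it is not the Horn-only reduction from the paper, and the weights and variable layout are assumptions.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# covers[i] lists candidate rules (SAT variables 1..n_rules) whose prediction
# is close enough to the black-box prediction on training example i.
n_rules = 3
covers = [[1, 2], [2], [3], [1, 3]]
size_penalty, coverage_reward = 1, 5

def cov(i):
    # auxiliary variable meaning "training example i is covered"
    return n_rules + 1 + i

wcnf = WCNF()
for i, rules in enumerate(covers):
    wcnf.append([-cov(i)] + rules)                 # hard: covered only if a covering rule is selected
    wcnf.append([cov(i)], weight=coverage_reward)  # soft: reward covering example i
for r in range(1, n_rules + 1):
    wcnf.append([-r], weight=size_penalty)         # soft: penalize each selected rule

solver = RC2(wcnf)
model = solver.compute()
solver.delete()
selected = [r for r in range(1, n_rules + 1) if model[r - 1] > 0]
print("selected rules:", selected)                 # e.g. [2, 3] for this toy instance
```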
@ARTICLE{e106-d_6_1130,
author={Yoichi SASAKI and Yuzuru OKAJIMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Alternative Ruleset Discovery to Support Black-Box Model Predictions},
year={2023},
volume={E106-D},
number={6},
pages={1130-1141},
abstract={The increasing attention to the interpretability of machine learning models has led to the development of methods to explain the behavior of black-box models in a post-hoc manner. However, such post-hoc approaches generate a new explanation for every new input, and these explanations cannot be checked by humans in advance. A method that selects decision rules from a finite ruleset as explanation for neural networks has been proposed, but it cannot be used for other models. In this paper, we propose a model-agnostic explanation method to find a pre-verifiable finite ruleset from which a decision rule is selected to support every prediction made by a given black-box model. First, we define an explanation model that selects the rule, from a ruleset, that gives the closest prediction; this rule works as an alternative explanation or supportive evidence for the prediction of a black-box model. The ruleset should have high coverage to give close predictions for future inputs, but it should also be small enough to be checkable by humans in advance. However, minimizing the ruleset while keeping high coverage leads to a computationally hard combinatorial problem. Hence, we show that this problem can be reduced to a weighted MaxSAT problem composed only of Horn clauses, which can be efficiently solved with modern solvers. Experimental results showed that our method found small rulesets such that the rules selected from them can achieve higher accuracy for structured data as compared to the existing method using rulesets of almost the same size. We also experimentally compared the proposed method with two purely rule-based models, CORELS and defragTrees. Furthermore, we examine rulesets constructed for real datasets and discuss the characteristics of the proposed method from different viewpoints including interpretability, limitation, and possible use cases.},
keywords={},
doi={10.1587/transinf.2022EDP7176},
ISSN={1745-1361},
month={June},
}
TY - JOUR
TI - Alternative Ruleset Discovery to Support Black-Box Model Predictions
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1130
EP - 1141
AU - Yoichi SASAKI
AU - Yuzuru OKAJIMA
PY - 2023
DO - 10.1587/transinf.2022EDP7176
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E106-D
IS - 6
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - June 2023
AB - The increasing attention to the interpretability of machine learning models has led to the development of methods to explain the behavior of black-box models in a post-hoc manner. However, such post-hoc approaches generate a new explanation for every new input, and these explanations cannot be checked by humans in advance. A method that selects decision rules from a finite ruleset as explanation for neural networks has been proposed, but it cannot be used for other models. In this paper, we propose a model-agnostic explanation method to find a pre-verifiable finite ruleset from which a decision rule is selected to support every prediction made by a given black-box model. First, we define an explanation model that selects the rule, from a ruleset, that gives the closest prediction; this rule works as an alternative explanation or supportive evidence for the prediction of a black-box model. The ruleset should have high coverage to give close predictions for future inputs, but it should also be small enough to be checkable by humans in advance. However, minimizing the ruleset while keeping high coverage leads to a computationally hard combinatorial problem. Hence, we show that this problem can be reduced to a weighted MaxSAT problem composed only of Horn clauses, which can be efficiently solved with modern solvers. Experimental results showed that our method found small rulesets such that the rules selected from them can achieve higher accuracy for structured data as compared to the existing method using rulesets of almost the same size. We also experimentally compared the proposed method with two purely rule-based models, CORELS and defragTrees. Furthermore, we examine rulesets constructed for real datasets and discuss the characteristics of the proposed method from different viewpoints including interpretability, limitation, and possible use cases.
ER -