dc.rights.license | Attribution 4.0 International | * |
dc.contributor.author | Labaien Soto, Jokin | |
dc.contributor.author | Zugasti, Ekhi | |
dc.contributor.other | De Carlos Garcia, Xabier | |
dc.date.accessioned | 2023-03-28T18:03:20Z | |
dc.date.available | 2023-03-28T18:03:20Z | |
dc.date.issued | 2023 | |
dc.identifier.issn | 2076-3417 | en |
dc.identifier.other | https://katalogoa.mondragon.edu/janium-bin/janium_login_opac.pl?find&ficha_no=172008 | en |
dc.identifier.uri | https://hdl.handle.net/20.500.11984/6066 | |
dc.description.abstract | Explainable Artificial Intelligence (XAI) has gained significant attention in recent years due to concerns over the lack of interpretability of Deep Learning models, which hinders understanding of their decision-making processes. To address this issue, counterfactual explanations have been proposed to elucidate the reasoning behind a model’s decisions by providing what-if statements as explanations. However, generating counterfactuals traditionally involves solving an optimization problem for each input, making the approach impractical for real-time feedback. Moreover, counterfactuals must meet specific criteria: they should be user-driven, cause minimal changes, and stay within the data distribution. To overcome these challenges, a novel model-agnostic approach called Real-Time Guided Counterfactual Explanations (RTGCEx) is proposed. This approach uses autoencoders to generate real-time counterfactual explanations that adhere to these criteria by optimizing a multiobjective loss function. The performance of RTGCEx has been evaluated on two datasets: MNIST and Gearbox, a synthetic time series dataset. The results demonstrate that RTGCEx outperforms traditional methods in terms of speed and efficacy on MNIST, while also effectively identifying and rectifying anomalies in the Gearbox dataset, highlighting its versatility across different scenarios. | en |
dc.description.sponsorship | Gobierno Vasco-Eusko Jaurlaritza | es |
dc.language.iso | eng | en |
dc.publisher | MDPI | en |
dc.rights | © 2023 The Authors | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | explainable AI | en |
dc.subject | autoencoders | en |
dc.subject | counterfactual explanations | en |
dc.title | Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders | en |
dcterms.accessRights | http://purl.org/coar/access_right/c_abf2 | en |
dcterms.source | Applied Sciences | en |
local.contributor.group | Análisis de datos y ciberseguridad | es |
local.description.peerreviewed | true | en |
local.identifier.doi | https://doi.org/10.3390/app13052912 | en |
local.relation.projectID | info:eu-repo/grantAgreement/GV/Elkartek 2022/KK-2022-00049/CAPV/Deeplearning REcomendation Manufacturing Imperfection Novelty Detection/DREMIND | en |
local.contributor.otherinstitution | https://ror.org/03hp1m080 | es |
local.source.details | Vol. 13, N. 5, article no. 2912 | en |
oaire.format.mimetype | application/pdf | en |
oaire.file | $DSPACE\assetstore | en |
oaire.resourceType | http://purl.org/coar/resource_type/c_6501 | en |
oaire.version | http://purl.org/coar/version/c_970fb48d4fbd8a85 | en |