Simple item record

dc.rights.license: Attribution 4.0 International [*]
dc.contributor.author: Labaien Soto, Jokin
dc.contributor.author: Zugasti, Ekhi
dc.contributor.other: De Carlos Garcia, Xabier
dc.date.accessioned: 2023-03-28T18:03:20Z
dc.date.available: 2023-03-28T18:03:20Z
dc.date.issued: 2023
dc.identifier.issn: 2076-3417 [en]
dc.identifier.other: https://katalogoa.mondragon.edu/janium-bin/janium_login_opac.pl?find&ficha_no=172008 [en]
dc.identifier.uri: https://hdl.handle.net/20.500.11984/6066
dc.description.abstract: Explainable Artificial Intelligence (XAI) has gained significant attention in recent years due to concerns over the lack of interpretability of Deep Learning models, which hinders their decision-making processes. To address this issue, counterfactual explanations have been proposed to elucidate the reasoning behind a model’s decisions by providing what-if statements as explanations. However, generating counterfactuals traditionally involves solving an optimization problem for each input, making it impractical for real-time feedback. Moreover, counterfactuals must meet specific criteria, including being user-driven, causing minimal changes, and staying within the data distribution. To overcome these challenges, a novel model-agnostic approach called Real-Time Guided Counterfactual Explanations (RTGCEx) is proposed. This approach utilizes autoencoders to generate real-time counterfactual explanations that adhere to these criteria by optimizing a multiobjective loss function. The performance of RTGCEx has been evaluated on two datasets: MNIST and Gearbox, a synthetic time series dataset. The results demonstrate that RTGCEx outperforms traditional methods in terms of speed and efficacy on MNIST, while also effectively identifying and rectifying anomalies in the Gearbox dataset, highlighting its versatility across different scenarios. [en]
dc.description.sponsorship: Gobierno Vasco-Eusko Jaurlaritza [es]
dc.language.iso: eng [en]
dc.publisher: MDPI [en]
dc.rights: © 2023 The Authors [en]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ [*]
dc.subject: explainable AI [en]
dc.subject: autoencoders [en]
dc.subject: counterfactual explanations [en]
dc.title: Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders [en]
dcterms.accessRights: http://purl.org/coar/access_right/c_abf2 [en]
dcterms.source: Applied Sciences [en]
local.contributor.group: Análisis de datos y ciberseguridad [es]
local.description.peerreviewed: true [en]
local.identifier.doi: https://doi.org/10.3390/app13052912 [en]
local.relation.projectID: info:eu-repo/grantAgreement/GV/Elkartek 2022/KK-2022-00049/CAPV/Deeplearning REcomendation Manufacturing Imperfection Novelty Detection/DREMIND [en]
local.contributor.otherinstitution: https://ror.org/03hp1m080 [es]
local.source.details: Vol. 13. N. 5. N. artículo 2912 [en]
oaire.format.mimetype: application/pdf [en]
oaire.file: $DSPACE\assetstore [en]
oaire.resourceType: http://purl.org/coar/resource_type/c_6501 [en]
oaire.version: http://purl.org/coar/version/c_970fb48d4fbd8a85 [en]
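The abstract above describes the approach only at a high level. As a rough, hypothetical illustration of the kind of multiobjective loss it mentions (a user-driven class change, minimal perturbation of the input, and outputs that stay in the data distribution), the sketch below combines a classifier-guidance term, a proximity term, and an autoencoder plausibility term. All module names, architectures, and loss weights are assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch only: names, architectures, and weights are illustrative
# assumptions, not the implementation described in the paper.
import torch.nn as nn
import torch.nn.functional as F

class CounterfactualGenerator(nn.Module):
    """Maps an input x to a candidate counterfactual x_cf in one forward pass."""
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid(),  # keep outputs in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def multiobjective_loss(x, x_cf, classifier, autoencoder, target_class,
                        w_cls=1.0, w_prox=0.1, w_ae=0.1):
    # (1) User-driven: push the classifier's prediction toward the desired class.
    cls_term = F.cross_entropy(classifier(x_cf), target_class)
    # (2) Minimal change: keep the counterfactual close to the original input.
    prox_term = F.l1_loss(x_cf, x)
    # (3) In-distribution: a pretrained autoencoder should reconstruct x_cf well.
    ae_term = F.mse_loss(autoencoder(x_cf), x_cf)
    return w_cls * cls_term + w_prox * prox_term + w_ae * ae_term
```

Under these assumptions, once such a generator is trained against the combined loss, producing a counterfactual at inference time is a single forward pass (e.g. `x_cf = generator(x)`) rather than a per-input optimization loop, which is consistent with the real-time claim in the abstract.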


Files in this item


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International