Simple item record

dc.rights.license	Attribution 4.0 International
dc.contributor.author	Odriozola Olalde, Haritz
dc.contributor.author	Arana-Arexolaleiba, Nestor
dc.contributor.other	Zamalloa, Maider
dc.contributor.other	Perez-Cerrolaza, Jon
dc.contributor.other	Arozamena-Rodríguez, Jokin
dc.date.accessioned	2024-03-21T13:35:22Z
dc.date.available	2024-03-21T13:35:22Z
dc.date.issued	2023
dc.identifier.issn	1613-0073
dc.identifier.other	https://katalogoa.mondragon.edu/janium-bin/janium_login_opac.pl?find&ficha_no=174296
dc.identifier.uri	https://hdl.handle.net/20.500.11984/6301
dc.description.abstract	Shielding methods for Reinforcement Learning agents show potential for safety-critical industrial applications. However, they still lack robustness in nominal safety, a key property of safety control systems. After a significant change in the environment dynamics, shielding methods cannot guarantee safety until their internal dynamics model is updated to the new scenario; because the model can no longer predict well, the agent may reach risky states. These situations can lead to catastrophic outcomes, such as damage to the cyber-physical system or loss of human life, which are not acceptable in safety-critical applications. The novel method presented in this paper, Fear Field, replicates human behaviour in such scenarios, adapting safety constraints whenever a drastic environmental change is introduced. Fear Field reduces safety violations by one order of magnitude compared to an RL agent implementing only a shield.
dc.language.iso	eng
dc.publisher	CEUR-WS.org
dc.rights	© 2023 The Authors
dc.rights.uri	http://creativecommons.org/licenses/by/4.0/
dc.subject	Reinforcement Learning
dc.subject	Shielding
dc.subject	Adaptive constraints
dc.subject	Robustness
dc.subject	Safe AI
dc.subject	SDG 9 Industry, innovation and infrastructure
dc.title	Fear Field: Adaptive constraints for safe environment transitions in Shielded Reinforcement Learning
dcterms.accessRights	http://purl.org/coar/access_right/c_abf2
dcterms.source	Proceedings of the IJCAI-23 Joint Workshop on Artificial Intelligence Safety and Safe Reinforcement Learning (AISafety-SafeRL), co-located with the 32nd International Joint Conference on Artificial Intelligence (IJCAI2023)
local.contributor.group	Robotics and automation
local.description.peerreviewed	true
local.contributor.otherinstitution	Ikerlan
local.source.details	Macao, June, 2023
oaire.format.mimetype	application/pdf
oaire.file	$DSPACE\assetstore
oaire.resourceType	http://purl.org/coar/resource_type/c_c94f
oaire.version	http://purl.org/coar/version/c_970fb48d4fbd8a85


Files in this item


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International