dc.rights.license | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.contributor.author | Abu-Dakka, Fares J. | |
dc.contributor.other | Saveriano, Matteo | |
dc.contributor.other | Kyrki, Ville | |
dc.date.accessioned | 2024-06-17T07:01:42Z | |
dc.date.available | 2024-06-17T07:01:42Z | |
dc.date.issued | 2024 | |
dc.identifier.issn | 1872-8286 | en |
dc.identifier.other | https://katalogoa.mondragon.edu/janium-bin/janium_login_opac.pl?find&ficha_no=177540 | en |
dc.identifier.uri | https://hdl.handle.net/20.500.11984/6530 | |
dc.description.abstract | Learning from demonstration (LfD) is considered an efficient way to transfer skills from humans to robots. Traditionally, LfD has been used to transfer Cartesian and joint positions and forces from human demonstrations. This traditional approach works well for some robotic tasks, but for many tasks of interest it is necessary to learn skills such as orientation, impedance, and/or manipulability that have specific geometric characteristics. An effective encoding of such skills can only be achieved if the underlying geometric structure of the skill manifold is considered and the constraints arising from this structure are fulfilled during both learning and execution. However, typical learned skill models such as dynamic movement primitives (DMPs) are limited to Euclidean data and fail to correctly embed quantities with geometric constraints. In this paper, we propose a novel and mathematically principled framework that uses concepts from Riemannian geometry to allow DMPs to properly embed geometric constraints. The resulting DMP formulation can deal with data sampled from any Riemannian manifold including, but not limited to, unit quaternions and symmetric positive definite matrices. The proposed approach has been extensively evaluated on both simulated data and real robot experiments. This evaluation demonstrates that beneficial properties of DMPs, such as convergence to a given goal and the possibility of changing the goal during operation, also apply to the proposed formulation. | en |
dc.language.iso | eng | en |
dc.publisher | Elsevier | en |
dc.rights | © 2024 The Authors | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | motor control of artificial systems | en |
dc.subject | movement primitives theory | en |
dc.subject | dynamic movement primitives | en |
dc.subject | learning from demonstration | en |
dc.subject | Riemannian manifolds | en |
dc.title | A unified formulation of geometry-aware discrete dynamic movement primitives | en |
dcterms.accessRights | http://purl.org/coar/access_right/c_abf2 | en |
dcterms.source | Neurocomputing | en |
local.contributor.group | Robótica y automatización | es |
local.description.peerreviewed | true | en |
local.identifier.doi | https://doi.org/10.1016/j.neucom.2024.128056 | en |
local.contributor.otherinstitution | https://ror.org/020hwjq30 | en |
local.contributor.otherinstitution | https://ror.org/05trd4x28 | |
oaire.format.mimetype | application/pdf | en |
oaire.file | $DSPACE\assetstore | en |
oaire.resourceType | http://purl.org/coar/resource_type/c_6501 | en |
oaire.version | http://purl.org/coar/version/c_ab4af688f83e57aa | en |
oaire.funderName | Gobierno Vasco | en |
oaire.funderName | Gobierno Vasco | en |
oaire.funderName | Comisión Europea | en |
oaire.funderName | Academy of Finland | en |
oaire.funderIdentifier | https://ror.org/00pz2fp31 / http://data.crossref.org/fundingdata/funder/10.13039/501100003086 | en |
oaire.funderIdentifier | https://ror.org/00pz2fp31 / http://data.crossref.org/fundingdata/funder/10.13039/501100003086 | en |
oaire.funderIdentifier | https://ror.org/00k4n6c32 / http://data.crossref.org/fundingdata/funder/10.13039/501100000780 | |
oaire.funderIdentifier | https://ror.org/05k73zm37 | |
oaire.fundingStream | Elkartek 2022 | en |
oaire.fundingStream | Elkartek 2023 | en |
oaire.fundingStream | Horizon-RIA | en |
oaire.fundingStream | CHIST-ERA | en |
oaire.awardNumber | KK-2022-00024 | en |
oaire.awardNumber | KK-2023-00055 | en |
oaire.awardNumber | 101136067 | en |
oaire.awardNumber | 326304 | en |
oaire.awardTitle | Fluid and Resilient Production for Smart Industry (PROFLOW) | en |
oaire.awardTitle | Artificial Intelligence Technologies for Visual and Haptic Perception and the Planning and Control of Manipulation Tasks (HELDU) | en |
oaire.awardTitle | Refining robotic skills through experience and human feedback (INVERSE) | en |
oaire.awardTitle | Interactive Perception-Action-Learning for Modelling Objects (IPALM) | en |
oaire.awardURI | No information available | en |
oaire.awardURI | No information available | en |
oaire.awardURI | https://doi.org/10.3030/101136067 | en |
oaire.awardURI | No information available | en |