Simple record

dc.rights.license: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.contributor.author: Abu-Dakka, Fares J.
dc.contributor.other: Saveriano, Matteo
dc.contributor.other: Kyrki, Ville
dc.date.accessioned: 2024-06-17T07:01:42Z
dc.date.available: 2024-06-17T07:01:42Z
dc.date.issued: 2024
dc.identifier.issn: 1872-8286
dc.identifier.other: https://katalogoa.mondragon.edu/janium-bin/janium_login_opac.pl?find&ficha_no=177540
dc.identifier.uri: https://hdl.handle.net/20.500.11984/6530
dc.description.abstract: Learning from demonstration (LfD) is considered an efficient way to transfer skills from humans to robots. Traditionally, LfD has been used to transfer Cartesian and joint positions and forces from human demonstrations. This traditional approach works well for some robotic tasks, but many tasks of interest require learning skills such as orientation, impedance, and/or manipulability that have specific geometric characteristics. An effective encoding of such skills can only be achieved if the underlying geometric structure of the skill manifold is considered and the constraints arising from this structure are fulfilled during both learning and execution. However, typical learned skill models such as dynamic movement primitives (DMPs) are limited to Euclidean data and fail to correctly embed quantities with geometric constraints. In this paper, we propose a novel and mathematically principled framework that uses concepts from Riemannian geometry to allow DMPs to properly embed geometric constraints. The resulting DMP formulation can deal with data sampled from any Riemannian manifold including, but not limited to, unit quaternions and symmetric positive definite matrices. The proposed approach has been extensively evaluated both on simulated data and in real robot experiments. The evaluation demonstrates that beneficial properties of DMPs, such as convergence to a given goal and the possibility to change the goal during operation, also apply to the proposed formulation.
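Geometry-aware formulations like the one the abstract describes typically replace Euclidean subtraction and addition with the manifold's logarithmic and exponential maps. As a minimal illustrative sketch (not the authors' code, and only one ingredient of a full Riemannian DMP), the two maps for unit quaternions on S^3 can be written as:

```python
import numpy as np

def quat_log(q):
    """Logarithmic map on S^3: unit quaternion (w, x, y, z) -> tangent vector in R^3.
    Assumes q is unit-norm; returns the zero vector at the identity."""
    w, v = q[0], q[1:]
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / nv

def quat_exp(r):
    """Exponential map: tangent vector in R^3 -> unit quaternion (w, x, y, z)."""
    nr = np.linalg.norm(r)
    if nr < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(nr)], np.sin(nr) * r / nr])

# Round trip: exp(log(q)) recovers q for a sample rotation about the x-axis.
q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])
q_back = quat_exp(quat_log(q))
assert np.allclose(q, q_back)
```

In a geometry-aware DMP, differences between orientations are taken with `quat_log` and tangent-space integration results are mapped back with `quat_exp`, which is what keeps the state on the manifold during both learning and execution.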
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: © 2024 The Authors
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: motor control of artificial systems
dc.subject: movement primitives theory
dc.subject: dynamic movement primitives
dc.subject: learning from demonstration
dc.subject: Riemannian manifolds
dc.title: A unified formulation of geometry-aware discrete dynamic movement primitives
dcterms.accessRights: http://purl.org/coar/access_right/c_abf2
dcterms.source: Neurocomputing
local.contributor.group: Robótica y automatización
local.description.peerreviewed: true
local.identifier.doi: https://doi.org/10.1016/j.neucom.2024.128056
local.contributor.otherinstitution: https://ror.org/020hwjq30
local.contributor.otherinstitution: https://ror.org/05trd4x28
oaire.format.mimetype: application/pdf
oaire.file: $DSPACE\assetstore
oaire.resourceType: http://purl.org/coar/resource_type/c_6501
oaire.version: http://purl.org/coar/version/c_ab4af688f83e57aa
oaire.funderName: Gobierno Vasco
oaire.funderName: Gobierno Vasco
oaire.funderName: Comisión Europea
oaire.funderName: Academy of Finland
oaire.funderIdentifier: https://ror.org/00pz2fp31 / http://data.crossref.org/fundingdata/funder/10.13039/501100003086
oaire.funderIdentifier: https://ror.org/00pz2fp31 / http://data.crossref.org/fundingdata/funder/10.13039/501100003086
oaire.funderIdentifier: https://ror.org/00k4n6c32 / http://data.crossref.org/fundingdata/funder/10.13039/501100000780
oaire.funderIdentifier: https://ror.org/05k73zm37
oaire.fundingStream: Elkartek 2022
oaire.fundingStream: Elkartek 2023
oaire.fundingStream: Horizon-RIA
oaire.fundingStream: CHIST-ERA
oaire.awardNumber: KK-2022-00024
oaire.awardNumber: KK-2023-00055
oaire.awardNumber: 101136067
oaire.awardNumber: 326304
oaire.awardTitle: Producción Fluída y Resiliente para la Industria inteligente (PROFLOW)
oaire.awardTitle: Tecnologías de Inteligencia Artificial para la percepción visual y háptica y la planificación y control de tareas de manipulación (HELDU)
oaire.awardTitle: Refining robotic skills through experience and human feedback (INVERSE)
oaire.awardTitle: Interactive Perception-Action-Learning for Modelling Objects (IPALM)
oaire.awardURI: No information
oaire.awardURI: No information
oaire.awardURI: https://doi.org/10.3030/101136067
oaire.awardURI: No information

