Book price comparison. Includes 12 306 158 books and 12 stores.

Author

Jens Kober

Books and works in one place: 3 books, published between 2013 and 2022, the most popular being Interactive Imitation Learning in Robotics. Compare prices and check availability at Finnish bookstores.

3 books

Publication years range from 2013 to 2022.

Interactive Imitation Learning in Robotics

Carlos Celemin; Rodrigo Pérez-Dattari; Eugenio Chisari; Giovanni Franzese; Leandro de Souza Rosa; Ravi Prakash; Zlatan Ajanovic; Marta Ferraz; Abhinav Valada; Jens Kober

Now Publishers Inc
2022
Paperback
Existing robotics technology is still mostly limited to use by expert programmers, who can adapt systems to new conditions; it is not flexible or adaptable enough for non-expert workers or end users. Imitation Learning (IL) has attracted considerable attention as a potential way to enable all kinds of users to easily program the behavior of robots or virtual agents. Interactive Imitation Learning (IIL) is a branch of IL in which human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior.

This monograph surveys research in IIL, unifying and structuring the field to lower the entry barrier for new practitioners. It raises awareness of the field's potential, covering what has been accomplished and which research questions remain open. The most relevant IIL work is highlighted in terms of human-robot interaction (types of feedback), interfaces (means of providing feedback), learning (models learned from feedback and function approximators), user experience (human perception of the learning process), applications, and benchmarks. Furthermore, similarities and differences between IIL and Reinforcement Learning (RL) are analyzed, with a discussion of how the concepts of offline, online, off-policy, and on-policy learning transfer from the RL literature to IIL. Particular focus is given to real-world robotic applications and their implications; limitations and promising future areas of research are also discussed.
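The IIL loop the blurb describes (the robot executes its current policy while a human intermittently provides corrections that are used for online improvement) can be sketched in toy form. Everything below, from the one-dimensional environment to the scripted teacher, is an illustrative stand-in and not code from the monograph:

```python
import random

class TabularPolicy:
    """Toy policy: maps a discrete state to an action, updated from corrections."""
    def __init__(self, n_states, default_action=0):
        self.table = {s: default_action for s in range(n_states)}
    def act(self, state):
        return self.table[state]
    def update(self, state, action):
        self.table[state] = action

class LineEnv:
    """Toy 1-D environment: states 0..4, action 1 moves right, episode ends at 4."""
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        self.state = min(self.state + (1 if action == 1 else 0), 4)
        return self.state, self.state == 4

class Teacher:
    """Human stand-in: intermittently corrects wrong actions toward 'move right'."""
    def __init__(self, rate=0.5, seed=0):
        self.rng = random.Random(seed)
        self.rate = rate
    def maybe_correct(self, state, action):
        if action != 1 and self.rng.random() < self.rate:
            return 1  # corrective demonstration
        return None  # feedback is intermittent: most steps get none

def iil_loop(policy, env, human, episodes=10, max_steps=20):
    for _ in range(episodes):
        state = env.reset()
        for _ in range(max_steps):
            action = policy.act(state)
            correction = human.maybe_correct(state, action)
            if correction is not None:
                policy.update(state, correction)  # online improvement
                action = correction
            state, done = env.step(action)
            if done:
                break
    return policy
```

The key contrast with plain imitation learning is that corrections arrive during execution rather than as an up-front batch of demonstrations, so the policy improves while the robot is already acting.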
Learning Motor Skills

Jens Kober; Jan Peters

Springer International Publishing AG
2016
Paperback
This book presents the state of the art in reinforcement learning applied to robotics, in terms of both novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills and presents tasks that must take into account the dynamic behavior of the robot and its environment, where a kinematic movement plan is not sufficient. The book illustrates a method that learns to generalize parameterized motor plans, obtained by imitation or reinforcement learning, by adapting a small set of global parameters, together with appropriate kernel-based reinforcement learning algorithms. The presented applications explore highly dynamic tasks and exhibit a very efficient learning process. All proposed approaches have been extensively validated on benchmark tasks, in simulation and on real robots. These tasks correspond to sports and games, but the presented techniques are also applicable to more mundane household tasks. The book is based on the first author's doctoral thesis, which won the 2013 EURON Georges Giralt PhD Award.
Learning Motor Skills

Jens Kober; Jan Peters

Springer International Publishing AG
2013
Hardcover
This book presents the state of the art in reinforcement learning applied to robotics, in terms of both novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills and presents tasks that must take into account the dynamic behavior of the robot and its environment, where a kinematic movement plan is not sufficient. The book illustrates a method that learns to generalize parameterized motor plans, obtained by imitation or reinforcement learning, by adapting a small set of global parameters, together with appropriate kernel-based reinforcement learning algorithms. The presented applications explore highly dynamic tasks and exhibit a very efficient learning process. All proposed approaches have been extensively validated on benchmark tasks, in simulation and on real robots. These tasks correspond to sports and games, but the presented techniques are also applicable to more mundane household tasks. The book is based on the first author's doctoral thesis, which won the 2013 EURON Georges Giralt PhD Award.