Workshop on Hand-Object Interaction: From Human Demonstrations to Robot Manipulation
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020)
Date: September 7, 2020
NOTE: Because of the COVID-19 outbreak, the workshop will take place entirely online.
Humans use their hands to interact with the environment, and they do so for a broad spectrum of activities, from the physical manipulation of objects to non-verbal communication. Examples range from the simple grasping of everyday objects and the use of tools to deictic gestures and communication via sign language.
It is therefore of the utmost importance to study how humans use their hands, with the aim of developing novel robot capabilities to tackle tasks usually considered a human prerogative and, more generally, to interact, collaborate, and communicate with humans in a socially acceptable and safe manner. For example, a robot should be able to use tools dexterously, to synchronize its movements with the human it is collaborating with, whether for joint work or turn-taking, or to manipulate objects in ways that foster a sense of trust in humans.
These examples require robot hands to coordinate with human motions, as well as advanced capabilities to infer object affordances and intrinsic characteristics, and to understand how objects can be manipulated according to the social context.
The Workshop aims to gather new approaches and experience from different fields, and to discuss which conceptual and engineering tools are best suited to sense human hand motions, to recognize objects and their physical characteristics, and to model and encode this knowledge so as to develop new robot behaviors.
The Workshop also aims to be a starting point for further activities:
- We will set up a mailing list including all participants, with the aim of building a scientific community interested in the Workshop topics. The mailing list will be used for Workshop organization, to foster debate after the Workshop, and to share the community's results.
- We aim to write a paper reviewing the ideas that emerge during the Workshop, inviting contributions from all participants. The spirit will be that of synthesizing a contribution that puts forth a research agenda for future activities, identifies common challenges to address, and sustains reproducible research.
- A journal special issue related to the Workshop topics will be proposed, open to all Workshop participants. Possible target journals are Robotics and Autonomous Systems and IEEE Transactions on Human-Machine Systems.
List of topics:
- Data extraction of human handling tasks
- Datasets of in-hand manipulation
- Hand pose estimation and tracking
- Gesture, action, and intent recognition
- Learning from demonstration
- Imitation learning
- Transfer learning
- Object modelling, recognition, pose estimation and tracking
- Object grasping
- Control of anthropomorphic hands
We invite extended abstracts (max 2 pages), followed by camera-ready submission of accepted papers. Submissions can present original research or late-breaking results that fall under the scope of the workshop. Authors of all accepted submissions will give an oral presentation of their work.
Papers should be submitted through the EasyChair page and should use the IEEE template.
- Deadline: July 31st
- Notification of acceptance: August 15th
- Camera-ready: August 21st
What is this object? On-the-fly learning from a human demonstrator: experiments with a humanoid robot
Abstract: We have proposed a human-robot interaction scenario in which a human teaches a robot to recognize new objects. In this scenario, image views and labels are given by the user during natural interaction with the robot. We have implemented a system for extracting training images from the user's demonstrations and released iCubWorld, a dataset containing images acquired in this setting. We have also proposed an object detection architecture derived from Faster R-CNN that allows the robot to learn online in a few seconds of interaction with the user. However, because objects are acquired in a very specific setting, performance drops when the robot observes the scene in a different context. To address this issue, we have proposed an active strategy that allows the robot to autonomously acquire new examples and ask for human intervention only when strictly required, thus reducing the need for external supervision.
Learning grasping for manipulation of rigid objects and clothing
Abstract: Inspired by the observation of manipulations performed by people, we developed methods to learn grasping actions. For deformable objects, we propose a taxonomy that helps to express both the grasp and the state of the clothing. This has inspired the design of new grippers with novel capabilities, as well as tools for the explainability of manipulation sequences.
Aude Billard (tentative)
Alessandro Carfì, University of Genoa, firstname.lastname@example.org
Timothy Patten, TU Wien, email@example.com
Abraham Itzhak Weinberg, Aston University, firstname.lastname@example.org
Ali Hammoud, Sorbonne Université, email@example.com
Fulvio Mastrogiovanni, University of Genoa, firstname.lastname@example.org
Markus Vincze, TU Wien, email@example.com
Diego Faria, Aston University, firstname.lastname@example.org
Véronique Perdereau, Sorbonne Université, email@example.com